Tutorials
Red Hat OpenShift Service on AWS tutorials
Abstract
Chapter 1. Tutorials overview
Step-by-step tutorials from Red Hat experts to help you get the most out of your Managed OpenShift cluster.
In an effort to make this Cloud Expert tutorial content available quickly, it may not yet be tested on every supported configuration.
Chapter 2. Tutorial: ROSA with HCP activation and account linking
This tutorial describes the process for activating Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) and linking to an AWS account, before deploying the first cluster.
If you have received a private offer for the product, follow the instructions provided with that offer before starting this tutorial. A private offer is designed either to replace an already active subscription or to serve as a first-time activation.
2.1. Prerequisites
- Make sure you can log in to the Red Hat account that you plan to associate with the AWS account in which you activate ROSA with HCP in the following steps.
- The AWS account used for service billing can only be associated with a single Red Hat account. Typically an AWS payer account is the one that is used to subscribe to ROSA and used for account linking and billing.
- All team members belonging to the same Red Hat organization can use the linked AWS account for service billing while creating ROSA with HCP clusters.
2.2. Subscription enablement and AWS account setup
Activate the ROSA with HCP product at the AWS console page by clicking the Get started button:
Figure 2.1. Get started
If you have activated ROSA before but did not complete the process, you can click the button and complete the account linking as described in the following steps.
Confirm that you want your contact information to be shared with Red Hat and enable the service:
Figure 2.2. Enable ROSA
- You are not charged for enabling the service in this step. The connection is made for billing and metering, which take place only after you deploy your first cluster. This step could take a few minutes to complete.
After the process is completed, you will see a confirmation:
Figure 2.3. ROSA enablement confirmation
Other sections on this verification page show the status of additional prerequisites. If any of these prerequisites are not met, a corresponding message is shown. The following is an example of insufficient quotas in the selected region:
Figure 2.4. Service quotas
- Click the Increase service quotas button or use the Learn more link to get more information about how to manage service quotas. In the case of insufficient quotas, note that quotas are region-specific. You can use the region switcher in the upper right corner of the web console to re-run the quota check for any region you are interested in and then submit service quota increase requests as needed.
If all the prerequisites are met, the page will look like this:
Figure 2.5. Verify ROSA prerequisites
The ELB service-linked role is created for you automatically. You can click any of the small blue Info links to get contextual help and resources.
2.3. AWS and Red Hat account and subscription linking
Click the orange Continue to Red Hat button to proceed with account linking:
Figure 2.6. Continue to Red Hat
If you are not already logged in to your Red Hat account in your current browser’s session, you will be asked to log in to your account:
Note: Your AWS account must be linked to a single Red Hat organization.
Figure 2.7. Log in to your Red Hat account
- You can also register for a new Red Hat account or reset your password on this page.
- Make sure to log in to the Red Hat account that you plan to associate with the AWS account where you have activated ROSA with HCP in previous steps.
- The AWS account used for service billing can only be associated with a single Red Hat account. Typically an AWS payer account is the one that is used to subscribe to ROSA and used for account linking and billing.
- All team members belonging to the same Red Hat organization can use the linked AWS account for service billing while creating ROSA with HCP clusters.
Complete the Red Hat account linking after reviewing the terms and conditions:
Note: This step is available only if the AWS account was not linked to any Red Hat account before.
This step is skipped if the AWS account is already linked to the user’s logged in Red Hat account.
If the AWS account is linked to a different Red Hat account, an error will be displayed. See Correcting Billing Account Information for HCP clusters for troubleshooting.
Figure 2.8. Complete your account connection
Both the Red Hat and AWS account numbers are shown on this screen.
Click the Connect accounts button if you agree with the service terms.
If this is the first time you are using the Red Hat Hybrid Cloud Console, you will be asked to agree with the general managed services terms and conditions before being able to create the first ROSA cluster:
Figure 2.9. Terms and conditions
Additional terms that need to be reviewed and accepted are shown after clicking the View Terms and Conditions button:
Figure 2.10. Red Hat terms and conditions
Submit your agreement after you have reviewed any additional terms when prompted.
The Hybrid Cloud Console provides a confirmation that AWS account setup was completed and lists the prerequisites for cluster deployment:
Figure 2.11. Complete ROSA prerequisites
The last section of this page shows cluster deployment options, either using the rosa CLI or through the web console:
Figure 2.12. Deploy the cluster and set up access
2.4. Selecting the AWS billing account for ROSA with HCP during cluster deployment using the CLI
Make sure that you have the most recent ROSA command line interface (CLI) and AWS CLI installed and have completed the ROSA prerequisites covered in the previous section. See Help with ROSA CLI setup and Instructions to install the AWS CLI for more information.
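If you want to confirm your CLI setup before starting, the following commands provide a quick check of the installed versions and the Red Hat and AWS accounts in use (output varies by environment):
$ rosa version
$ aws --version
$ rosa whoami
$ aws sts get-caller-identity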
Initiate the cluster deployment using the rosa create cluster command. You can click the copy button on the Set up Red Hat OpenShift Service on AWS (ROSA) console page and paste the command in your terminal. This launches the cluster creation process in interactive mode:
Figure 2.13. Deploy the cluster and set up access
- To use a custom AWS profile, one of the non-default profiles specified in your ~/.aws/credentials file, you can add the --profile <profile_name> selector to the rosa create cluster command so that the command looks like rosa create cluster --profile stage. If no AWS CLI profile is specified using this option, the default AWS CLI profile determines the AWS infrastructure account into which the cluster is deployed. The billing AWS account is selected in one of the following steps.
When deploying a ROSA with HCP cluster, the billing AWS account needs to be specified:
Figure 2.14. Specify the Billing Account
- Only AWS accounts that are linked to the user’s logged in Red Hat account are shown.
- The specified AWS account is charged for using the ROSA service.
An indicator shows whether the ROSA contract is enabled for a given AWS billing account.
- If you select an AWS billing account that shows the Contract enabled label, on-demand consumption rates are charged only after the capacity of your pre-paid contract is consumed.
- AWS accounts without the Contract enabled label are charged the applicable on-demand consumption rates.
Additional resources
- The detailed cluster deployment steps are beyond the scope of this tutorial. See Creating ROSA with HCP clusters using the default options for more details about how to complete the ROSA with HCP cluster deployment using the CLI.
2.5. Selecting the AWS billing account for ROSA with HCP during cluster deployment using the web console
A cluster can be created using the web console by selecting the second option in the bottom section of the introductory Set up ROSA page:
Figure 2.15. Deploy with web interface
Note: Complete the prerequisites before starting the web console deployment process.
The rosa CLI is required for certain tasks, such as creating the account roles. If you are deploying ROSA for the first time, follow the CLI steps up to and including the rosa whoami command before starting the web console deployment steps.
The first step when creating a ROSA cluster using the web console is the control plane selection. Make sure the Hosted option is selected before clicking the Next button:
Figure 2.16. Select hosted option
The next step, Accounts and roles, allows you to specify the infrastructure AWS account into which the ROSA cluster is deployed and where the resources are consumed and managed:
Figure 2.17. AWS infrastructure account
- Click How to associate a new AWS account if you do not see the account into which you want to deploy the ROSA cluster, for detailed information on how to create or link account roles for this association.
- The rosa CLI is used for this.
- If you are using multiple AWS accounts and have their profiles configured for the AWS CLI, you can use the --profile selector to specify the AWS profile when working with the rosa CLI commands, as shown in the sketch below.
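For example, assuming a non-default AWS CLI profile named stage is configured in ~/.aws/credentials, the account roles can be created and your identity verified against that profile with commands like the following:
$ rosa create account-roles --mode auto --profile stage
$ rosa whoami --profile stage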
The billing AWS account is selected in the immediately following section:
Figure 2.18. AWS billing account
- Only AWS accounts that are linked to the user’s logged in Red Hat account are shown.
- The specified AWS account is charged for using the ROSA service.
An indicator shows whether the ROSA contract is enabled for a given AWS billing account.
- If you select an AWS billing account that shows the Contract enabled label, on-demand consumption rates are charged only after the capacity of your pre-paid contract is consumed.
- AWS accounts without the Contract enabled label are charged the applicable on-demand consumption rates.
The following steps past the billing AWS account selection are beyond the scope of this tutorial.
Additional resources
- For information on using the CLI to create a cluster, see Creating a ROSA with HCP cluster using the CLI.
- See this learning path for more details on how to complete ROSA cluster deployment using the web console.
Chapter 3. Tutorial: ROSA with HCP private offer acceptance and sharing
This guide describes how to accept a private offer for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) and how to ensure that all team members can use the private offer for the clusters they provision.
ROSA with HCP costs are composed of the AWS infrastructure costs and the ROSA with HCP service costs. AWS infrastructure costs, such as the EC2 instances that are running the needed workloads, are charged to the AWS account where the infrastructure is deployed. ROSA service costs are charged to the AWS account specified as the "AWS billing account" when deploying a cluster.
The cost components can be billed to different AWS accounts. Detailed description of how the ROSA service cost and AWS infrastructure costs are calculated can be found on the Red Hat OpenShift Service on AWS Pricing page.
3.1. Accepting a private offer
When you get a private offer for ROSA with HCP, you are provided with a unique URL that is accessible only by a specific AWS account ID that was specified by the seller.
Note: Verify that you are logged in using the AWS account that was specified as the buyer. Attempting to access the offer using another AWS account produces a "page not found" error message, as shown in Figure 11 in the troubleshooting section below.
Figure 1 shows the offer selection drop-down menu with a regular private offer pre-selected. This type of offer can be accepted only if ROSA with HCP was not activated before using the public offer or another private offer.
Figure 3.1. Regular private offer
Figure 2 shows a private offer that was created for an AWS account that previously activated ROSA with HCP using the public offer. The product name is shown, and the selected private offer is labeled "Upgrade", which replaces the currently running contract for ROSA with HCP.
Figure 3.2. Private offer selection screen
The drop-down menu allows selecting between multiple offers, if available. The previously activated public offer is shown together with the newly provided agreement-based offer that is labeled "Upgrade" in Figure 3.
Figure 3.3. Private offer selection dropdown
Verify that your offer configuration is selected. Figure 4 shows the bottom part of the offer page with the offer details.
Note: Review the contract end date, the number of units included with the offer, and the payment schedule. In this example, 1 cluster and up to 3 nodes utilizing 4 vCPUs are included.
Figure 3.4. Private offer details
Optional: You can add your own purchase order (PO) number to the subscription that is being purchased so that it is included on your subsequent AWS invoices. Also, check the "Additional usage fees" that are charged for any usage above the scope of the "New offer configuration details".
Note: Private offers have several available configurations.
- It is possible that the private offer you are accepting is set up with a fixed future start date.
- If you do not have another active ROSA with HCP subscription at the time of accepting the private offer, whether a public offer or an older private offer entitlement, accept the private offer itself and continue with the account linking and cluster deployment steps after the specified service start date. You must have an active ROSA with HCP entitlement to complete these steps. Service start dates are always reported in the UTC time zone.
Create or upgrade your contract.
For private offers accepted by an AWS account that does not have ROSA with HCP activated yet and is creating the first contract for this service, click the Create contract button.
Figure 3.5. Create contract button
For agreement-based offers, click the Upgrade current contract button shown in Figures 4 and 6.
Figure 3.6. Upgrade contract button
Click Confirm.
Figure 3.7. Private offer acceptance confirmation window
If the accepted private offer service start date is set to be immediately following the offer acceptance, click the Set up your account button in the confirmation modal window.
Figure 3.8. Subscription confirmation
If the accepted private offer has a future start date specified, return to the private offer page after the service start date, and click the Set up your account button to proceed with the Red Hat and AWS account linking.
Note: With no agreement active, the account linking described below is not triggered; the "Account setup" process can be done only after the "Service start date", which is always in the UTC time zone.
3.2. Sharing a private offer
Clicking the Set up your account button in the previous step takes you to the AWS and Red Hat account linking step. At this time, you are already logged in with the AWS account that accepted the offer. If you are not logged in with a Red Hat account, you will be prompted to do so.
The ROSA with HCP entitlement is shared with other team members through your Red Hat organization account. All existing users in the same Red Hat organization are able to select the billing AWS account that accepted the private offer by following the steps described above. When logged in as the Red Hat organization administrator, you can manage users in your Red Hat organization, including inviting or creating new users.
Note: A ROSA with HCP private offer cannot be shared with AWS linked accounts through the AWS License Manager.
- Add any users that you want to deploy ROSA clusters. Check this user management FAQ for more details about Red Hat account user management tasks.
- Verify that the already logged in Red Hat account includes all users that are meant to be ROSA cluster deployers benefiting from the accepted private offer.
Verify that the Red Hat account number and the AWS account ID are the desired accounts that are to be linked. This linking is unique and a Red Hat account can be connected only with a single AWS (billing) account.
Figure 3.9. AWS and Red Hat accounts connection
If you want to link the AWS account with a different Red Hat account than the one shown on this page in Figure 9, log out from the Red Hat Hybrid Cloud Console before connecting the accounts, and repeat the account setup step by returning to the private offer URL that you already accepted.
An AWS account can be connected with a single Red Hat account only. Once Red Hat and AWS accounts are connected, this cannot be changed by the user. If a change is needed, the user must create a support ticket.
- Agree to the terms and conditions and then click Connect accounts.
3.3. AWS billing account selection
- When deploying ROSA with HCP clusters, verify that end users select the AWS billing account that accepted the private offer.
When using the web interface for deploying ROSA with HCP, the Associated AWS infrastructure account is typically set to the AWS account ID used by the administrator of the cluster that is being created.
- This can be the same AWS account as the billing AWS account.
AWS resources are deployed into this account and all the billing associated with those resources are processed accordingly.
Figure 3.10. Infrastructure and billing AWS account selection during ROSA with HCP cluster deployment
- The drop-down for the AWS billing account in the screenshot above should be set to the AWS account that accepted the private offer, provided the purchased quota is intended to be used by the cluster that is being created. If different AWS accounts are selected in the infrastructure and billing "roles", the blue informative note visible in Figure 10 is shown.
3.4. Troubleshooting
This section covers the most frequent issues associated with private offer acceptance and Red Hat account linking.
3.4.1. Accessing a private offer using a different AWS account
If you try to access the private offer while logged in under an AWS account ID that is not defined in the offer and you see the message shown in Figure 11, verify that you are logged in as the desired AWS billing account.
Figure 3.11. HTTP 404 error when using the private offer URL
- Contact the seller if you need the private offer to be extended to another AWS account.
3.4.2. The private offer cannot be accepted because of active subscription
If you try to access a private offer that was created for a first-time ROSA with HCP activation while you already have ROSA with HCP activated using another public or private offer, and you see the following notice, contact the seller who provided you with the offer.
The seller can provide you with a new offer that will seamlessly replace your current agreement, without a need to cancel your previous subscription.
Figure 3.12. Existing subscription preventing private offer acceptance
3.4.3. The AWS account is already linked to a different Red Hat account
If you see the error message "AWS account is already linked to a different Red Hat account" when you try to connect the AWS account that accepted the private offer with a presently logged-in Red Hat user, then the AWS account is already connected to another Red Hat user.
Figure 3.13. AWS account is already linked to a different Red Hat account
You can either log in using another Red Hat account or another AWS account.
- However, since this guide pertains to private offers, the assumption is that you are logged in with the AWS account that was specified as the buyer and that already accepted the private offer, so it is intended to be used as the billing account. Logging in with another AWS account is not expected after a private offer has been accepted.
- You can still log in with another Red Hat user which is already connected to the AWS account that accepted the private offer. Other Red Hat users belonging to the same Red Hat organization are able to use the linked AWS account as the ROSA with HCP AWS billing account when creating clusters as seen in Figure 10.
- If you believe that the existing account linking might not be correct, see the "My team members belong to different Red Hat organizations" question below for tips on how you can proceed.
3.4.4. My team members belong to different Red Hat organizations
- An AWS account can be connected to a single Red Hat account only. Any user who wants to create a cluster and benefit from the private offer granted to this AWS account needs to be in the same Red Hat account. This can be achieved by inviting the user into the same Red Hat account, creating a new Red Hat user there if needed.
3.4.5. Incorrect AWS billing account was selected when creating a cluster
- If the user selected an incorrect AWS billing account, the fastest way to fix this is to delete the cluster and create a new one, while selecting the correct AWS billing account.
- If this is a production cluster that cannot be easily deleted, please contact Red Hat support to change the billing account for an existing cluster. Expect some turnaround time for this to be resolved.
Chapter 4. Tutorial: Verifying Permissions for a ROSA STS Deployment
To proceed with the deployment of a ROSA cluster, an account must support the required roles and permissions. AWS Service Control Policies (SCPs) cannot block the API calls made by the installer or operator roles.
Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: About IAM resources for ROSA clusters that use STS
This guide is validated for ROSA v4.11.X.
4.1. Prerequisites
4.2. Verifying ROSA permissions
To verify the permissions required for ROSA, we can run the script included in the following section without ever creating any AWS resources.
The script uses the rosa, aws, and jq CLI commands to create files in the working directory that will be used to verify permissions in the account connected to the current AWS configuration.
The AWS Policy Simulator is used to verify the permissions of each role policy against the API calls extracted by jq; results are then stored in a text file appended with .results.
This script is designed to verify the permissions for the current account and region.
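For example, you can confirm which AWS account and region your current AWS CLI configuration points to before running the script:
$ aws sts get-caller-identity --query Account --output text
$ aws configure get region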
4.3. Usage Instructions
To use the script, run the following commands in a bash terminal (the -p option defines a prefix for the roles):
$ mkdir scratch
$ cd scratch
$ cat << 'EOF' > verify-permissions.sh
#!/bin/bash
while getopts 'p:' OPTION; do
  case "$OPTION" in
    p)
      PREFIX="$OPTARG"
      ;;
    ?)
      echo "script usage: $(basename \$0) [-p PREFIX]" >&2
      exit 1
      ;;
  esac
done
shift "$(($OPTIND -1))"
rosa create account-roles --mode manual --prefix $PREFIX
INSTALLER_POLICY=$(cat sts_installer_permission_policy.json | jq )
CONTROL_PLANE_POLICY=$(cat sts_instance_controlplane_permission_policy.json | jq)
WORKER_POLICY=$(cat sts_instance_worker_permission_policy.json | jq)
SUPPORT_POLICY=$(cat sts_support_permission_policy.json | jq)
simulatePolicy () {
  outputFile="${2}.results"
  echo $2
  aws iam simulate-custom-policy --policy-input-list "$1" --action-names $(jq '.Statement | map(select(.Effect == "Allow"))[].Action | if type == "string" then . else .[] end' "$2" -r) --output text > $outputFile
}
simulatePolicy "$INSTALLER_POLICY" "sts_installer_permission_policy.json"
simulatePolicy "$CONTROL_PLANE_POLICY" "sts_instance_controlplane_permission_policy.json"
simulatePolicy "$WORKER_POLICY" "sts_instance_worker_permission_policy.json"
simulatePolicy "$SUPPORT_POLICY" "sts_support_permission_policy.json"
EOF
$ chmod +x verify-permissions.sh
$ ./verify-permissions.sh -p SimPolTest
After the script completes, review each results file to ensure that none of the required API calls are blocked:
$ for file in $(ls *.results); do echo $file; cat $file; done
The output will look similar to the following:
sts_installer_permission_policy.json.results
EVALUATIONRESULTS       autoscaling:DescribeAutoScalingGroups   allowed *
MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
ENDPOSITION     6       195
STARTPOSITION   17      3
EVALUATIONRESULTS       ec2:AllocateAddress     allowed *
MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
ENDPOSITION     6       195
STARTPOSITION   17      3
EVALUATIONRESULTS       ec2:AssociateAddress    allowed *
MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
...
Note: If any actions are blocked, review the error provided by AWS and consult with your Administrator to determine if SCPs are blocking the required API calls.
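As a quick check, you can filter all of the results files for deny decisions; any file listed by the following command contains at least one action that was not allowed:
$ grep -lE 'implicitDeny|explicitDeny' *.results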
Chapter 5. Tutorial: Deploying ROSA with a Custom DNS Resolver
A custom DHCP option set enables you to customize your VPC with your own DNS server, domain name, and more. Red Hat OpenShift Service on AWS (ROSA) clusters support using custom DHCP option sets. By default, ROSA clusters require setting the "domain name servers" option to AmazonProvidedDNS
to ensure successful cluster creation and operation. Customers who want to use custom DNS servers for DNS resolution must do additional configuration to ensure successful ROSA cluster creation and operation.
In this tutorial, we will configure our DNS server to forward DNS lookups for specific DNS zones (further detailed below) to an Amazon Route 53 Inbound Resolver.
This tutorial uses the open-source BIND DNS server (named
) to demonstrate the configuration necessary to forward DNS lookups to an Amazon Route 53 Inbound Resolver located in the VPC you plan to deploy a ROSA cluster into. Refer to the documentation of your preferred DNS server for how to configure zone forwarding.
5.1. Prerequisites
-
ROSA CLI (
rosa
) -
AWS CLI (
aws
) - A manually created AWS VPC
- A DHCP option set configured to point to a custom DNS server and set as the default for your VPC
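If you still need to create the DHCP option set prerequisite, a minimal sketch using the AWS CLI might look like the following; the DNS server IP address and VPC ID are placeholders for your own values:
$ DHCP_ID=$(aws ec2 create-dhcp-options \
  --dhcp-configurations "Key=domain-name-servers,Values=<dns_server_IP>" \
  --query 'DhcpOptions.DhcpOptionsId' --output text)
$ aws ec2 associate-dhcp-options --dhcp-options-id ${DHCP_ID} --vpc-id <vpc_ID>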
5.2. Setting up your environment
Configure the following environment variables:
$ export VPC_ID=<vpc_ID> 1
$ export REGION=<region> 2
$ export VPC_CIDR=<vpc_CIDR> 3
- 1 Replace with the ID of the VPC you plan to deploy the cluster into.
- 2 Replace with the AWS Region of the VPC.
- 3 Replace with the CIDR range of the VPC.
Ensure all fields output correctly before moving to the next section:
$ echo "VPC ID: ${VPC_ID}, VPC CIDR Range: ${VPC_CIDR}, Region: ${REGION}"
5.3. Create an Amazon Route 53 Inbound Resolver
Use the following procedure to deploy an Amazon Route 53 Inbound Resolver in the VPC we plan to deploy the cluster into.
In this example, we deploy the Amazon Route 53 Inbound Resolver into the same VPC the cluster will use. If you want to deploy it into a separate VPC, you must manually associate the private hosted zone(s) detailed below once cluster creation is started. You cannot associate the zone before the cluster creation process begins. Failure to associate the private hosted zone during the cluster creation process will result in cluster creation failures.
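If you do deploy the Inbound Resolver into a separate VPC, the private hosted zone association can be performed with the AWS CLI once the zone exists; the following is a sketch with placeholder values, assuming both VPCs are in the same AWS account:
$ aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id <hosted_zone_ID> \
  --vpc VPCRegion=<region>,VPCId=<resolver_vpc_ID>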
Create a security group and allow access to ports 53/tcp and 53/udp from the VPC:
$ SG_ID=$(aws ec2 create-security-group --group-name rosa-inbound-resolver --description "Security group for ROSA inbound resolver" --vpc-id ${VPC_ID} --region ${REGION} --output text)
$ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol tcp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
$ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol udp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
Create an Amazon Route 53 Inbound Resolver in your VPC:
$ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
  --name rosa-inbound-resolver \
  --creator-request-id rosa-$(date '+%Y-%m-%d') \
  --security-group-ids ${SG_ID} \
  --direction INBOUND \
  --ip-addresses $(aws ec2 describe-subnets --filter Name=vpc-id,Values=${VPC_ID} --region ${REGION} | jq -jr '.Subnets | map("SubnetId=\(.SubnetId) ") | .[]') \
  --region ${REGION} \
  --output text \
  --query 'ResolverEndpoint.Id')
Note: The above command attaches Amazon Route 53 Inbound Resolver endpoints to all subnets in the provided VPC using dynamically allocated IP addresses. If you prefer to manually specify the subnets and/or IP addresses, run the following command instead:
$ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
  --name rosa-inbound-resolver \
  --creator-request-id rosa-$(date '+%Y-%m-%d') \
  --security-group-ids ${SG_ID} \
  --direction INBOUND \
  --ip-addresses SubnetId=<subnet_ID>,Ip=<endpoint_IP> SubnetId=<subnet_ID>,Ip=<endpoint_IP> \ 1
  --region ${REGION} \
  --output text \
  --query 'ResolverEndpoint.Id')
- 1 Replace <subnet_ID> with the subnet IDs and <endpoint_IP> with the static IP addresses you want inbound resolver endpoints added to.
Get the IP addresses of your inbound resolver endpoints to configure in your DNS server configuration:
$ aws route53resolver list-resolver-endpoint-ip-addresses \
  --resolver-endpoint-id ${RESOLVER_ID} \
  --region=${REGION} \
  --query 'IpAddresses[*].Ip'
Example output
[ "10.0.45.253", "10.0.23.131", "10.0.148.159" ]
5.4. Configure your DNS server
Use the following procedure to configure your DNS server to forward the necessary private hosted zones to your Amazon Route 53 Inbound Resolver.
5.4.1. ROSA with HCP
ROSA with HCP clusters require you to configure DNS forwarding for two private hosted zones:
- <cluster-name>.hypershift.local
- rosa.<domain-prefix>.<unique-ID>.p3.openshiftapps.com
These Amazon Route 53 private hosted zones are created during cluster creation. The cluster-name and domain-prefix are customer-specified values, but the unique-ID is randomly generated during cluster creation and cannot be preselected. As such, you must wait for the cluster creation process to begin before configuring forwarding for the p3.openshiftapps.com private hosted zone.
Before the cluster is created, configure your DNS server to forward all DNS requests for <cluster-name>.hypershift.local to your Amazon Route 53 Inbound Resolver endpoints. For BIND DNS servers, edit your /etc/named.conf file in your favorite text editor and add a new zone using the below example:
Example
zone "<cluster-name>.hypershift.local" { 1
    type forward;
    forward only;
    forwarders { 2
        10.0.45.253;
        10.0.23.131;
        10.0.148.159;
    };
};
- 1 Replace with the name of your ROSA with HCP cluster.
- 2 Replace with the IP addresses of your Inbound Resolver endpoints collected above.
- Create your cluster.
Once your cluster has begun the creation process, locate the newly created private hosted zone:
$ aws route53 list-hosted-zones-by-vpc \
  --vpc-id ${VPC_ID} \
  --vpc-region ${REGION} \
  --query 'HostedZoneSummaries[*].Name' \
  --output table
Example output
--------------------------------------------------
|               ListHostedZonesByVPC             |
+------------------------------------------------+
|  rosa.domain-prefix.lkmb.p3.openshiftapps.com. |
|  cluster-name.hypershift.local.                |
+------------------------------------------------+
Note: It may take a few minutes for the cluster creation process to create the private hosted zones in Route 53. If you do not see a p3.openshiftapps.com domain, wait a few minutes and run the command again.
Once you know the unique ID of the cluster domain, configure your DNS server to forward all DNS requests for rosa.<domain-prefix>.<unique-ID>.p3.openshiftapps.com to your Amazon Route 53 Inbound Resolver endpoints. For BIND DNS servers, edit your /etc/named.conf file in your favorite text editor and add a new zone using the below example:
Example
zone "rosa.<domain-prefix>.<unique-ID>.p3.openshiftapps.com" { 1
    type forward;
    forward only;
    forwarders { 2
        10.0.45.253;
        10.0.23.131;
        10.0.148.159;
    };
};
- 1 Replace with the domain prefix and unique ID of your cluster, as shown in the previous command's output.
- 2 Replace with the IP addresses of your Inbound Resolver endpoints collected above.
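After reloading named, you can optionally verify that lookups in the forwarded zone are answered through your DNS server; the record name below is illustrative:
$ sudo systemctl reload named
$ dig +short api.<cluster-name>.hypershift.local @<dns_server_IP>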
5.4.2. ROSA Classic
ROSA Classic clusters require you to configure DNS forwarding for one private hosted zone:
- <domain-prefix>.<unique-ID>.p1.openshiftapps.com
This Amazon Route 53 private hosted zone is created during cluster creation. The domain-prefix is a customer-specified value, but the unique-ID is randomly generated during cluster creation and cannot be preselected. As such, you must wait for the cluster creation process to begin before configuring forwarding for the p1.openshiftapps.com private hosted zone.
- Create your cluster.
Once your cluster has begun the creation process, locate the newly created private hosted zone:
$ aws route53 list-hosted-zones-by-vpc \
  --vpc-id ${VPC_ID} \
  --vpc-region ${REGION} \
  --query 'HostedZoneSummaries[*].Name' \
  --output table
Example output
----------------------------------------------
|             ListHostedZonesByVPC           |
+--------------------------------------------+
|  domain-prefix.agls.p1.openshiftapps.com.  |
+--------------------------------------------+
Note: It may take a few minutes for the cluster creation process to create the private hosted zones in Route 53. If you do not see a p1.openshiftapps.com domain, wait a few minutes and run the command again.
Once you know the unique ID of the cluster domain, configure your DNS server to forward all DNS requests for <domain-prefix>.<unique-ID>.p1.openshiftapps.com to your Amazon Route 53 Inbound Resolver endpoints. For BIND DNS servers, edit your /etc/named.conf file in your favorite text editor and add a new zone using the below example:
Example
zone "<domain-prefix>.<unique-ID>.p1.openshiftapps.com" { 1
    type forward;
    forward only;
    forwarders { 2
        10.0.45.253;
        10.0.23.131;
        10.0.148.159;
    };
};
- 1 Replace with the domain prefix and unique ID of your cluster, as shown in the previous command's output.
- 2 Replace with the IP addresses of your Inbound Resolver endpoints collected above.
Chapter 6. Tutorial: Using AWS WAF and Amazon CloudFront to protect ROSA workloads
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.
You can use Amazon CloudFront to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS (ROSA) workloads. Using an external solution protects ROSA resources from experiencing denial of service due to handling the WAF.
6.1. Prerequisites
- A ROSA (HCP or Classic) cluster.
-
You have access to the OpenShift CLI (
oc
). -
You have access to the AWS CLI (
aws
).
6.1.1. Environment setup
Prepare the environment variables:
$ export DOMAIN=apps.example.com 1
$ export AWS_PAGER=""
$ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER}/cloudfront-waf"
$ mkdir -p ${SCRATCH}
$ echo "Cluster: ${CLUSTER}, Region: ${REGION}, AWS Account ID: ${AWS_ACCOUNT_ID}"
- 1 Replace with the custom domain you want to use for the IngressController.
Note: The "Cluster" output from the previous command might be the name of your cluster, the internal ID of your cluster, or the cluster’s domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:
$ export CLUSTER=my-custom-value
6.2. Setting up the secondary ingress controller
It is necessary to configure a secondary ingress controller to segment your external WAF-protected traffic from your standard (and default) cluster ingress controller.
Prerequisites
A publicly trusted SAN or wildcard certificate for your custom domain, such as CN=*.apps.example.com
Important: Amazon CloudFront uses HTTPS to communicate with your cluster’s secondary ingress controller. As explained in the Amazon CloudFront documentation, you cannot use a self-signed certificate for HTTPS communication between CloudFront and your cluster. Amazon CloudFront verifies that the certificate was issued by a trusted certificate authority.
Procedure
Create a new TLS secret from a private key and a public certificate, where fullchain.pem is your full wildcard certificate chain (including any intermediaries) and privkey.pem is your wildcard certificate’s private key.
Example
$ oc -n openshift-ingress create secret tls waf-tls --cert=fullchain.pem --key=privkey.pem
Create a new IngressController resource:
Example waf-ingress-controller.yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: cloudfront-waf
  namespace: openshift-ingress-operator
spec:
  domain: apps.example.com 1
  defaultCertificate:
    name: waf-tls
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Unmanaged
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
  routeSelector: 2
    matchLabels:
      route: waf
- 1 Replace with the custom domain used for this IngressController.
- 2 This ingress controller admits only routes that carry the route=waf label.
Apply the IngressController:
Example
$ oc apply -f waf-ingress-controller.yaml
Verify that your IngressController has successfully created an external load balancer:
$ oc -n openshift-ingress get service/router-cloudfront-waf
Example output
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                                      PORT(S)                      AGE
router-cloudfront-waf   LoadBalancer   172.30.16.141   a68a838a7f26440bf8647809b61c4bc8-4225395f488830bd.elb.us-east-1.amazonaws.com   80:30606/TCP,443:31065/TCP   2m19s
6.2.1. Configure the AWS WAF
The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like ROSA.
Create an AWS WAF rules file to apply to our web ACL:
$ cat << EOF > ${SCRATCH}/waf-rules.json [ { "Name": "AWS-AWSManagedRulesCommonRuleSet", "Priority": 0, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesCommonRuleSet" } }, { "Name": "AWS-AWSManagedRulesSQLiRuleSet", "Priority": 1, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesSQLiRuleSet" } } ] EOF
This will enable the Core (Common) and SQL AWS Managed Rule Sets.
Create an AWS WAF Web ACL using the rules we specified above:
$ WAF_WACL=$(aws wafv2 create-web-acl \ --name cloudfront-waf \ --region ${REGION} \ --default-action Allow={} \ --scope CLOUDFRONT \ --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=${CLUSTER}-waf-metrics \ --rules file://${SCRATCH}/waf-rules.json \ --query 'Summary.Name' \ --output text)
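To confirm that the web ACL exists before moving on, you can list CloudFront-scoped web ACLs, mirroring the scope and region flags used above:
$ aws wafv2 list-web-acls --scope CLOUDFRONT --region ${REGION} --query "WebACLs[?Name=='cloudfront-waf'].ARN" --output text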
6.3. Configure Amazon CloudFront
Retrieve the newly created custom ingress controller’s NLB hostname:
$ NLB=$(oc -n openshift-ingress get service router-cloudfront-waf \ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Import your certificate into AWS Certificate Manager, where cert.pem is your wildcard certificate, fullchain.pem is your wildcard certificate’s chain, and privkey.pem is your wildcard certificate’s private key.
Note: Regardless of the region your cluster is deployed in, you must import this certificate to us-east-1 because Amazon CloudFront is a global AWS service.
Example
$ aws acm import-certificate --certificate file://cert.pem \ --certificate-chain file://fullchain.pem \ --private-key file://privkey.pem \ --region us-east-1
- Log into the AWS console to create a CloudFront distribution.
Configure the CloudFront distribution by using the following information:
Note: If an option is not specified in the table below, leave it at the default (which may be blank).
Option                                  Value
Origin domain                           Output from the previous command [1]
Name                                    rosa-waf-ingress [2]
Viewer protocol policy                  Redirect HTTP to HTTPS
Allowed HTTP methods                    GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cache policy                            CachingDisabled
Origin request policy                   AllViewer
Web Application Firewall (WAF)          Enable security protections
Use existing WAF configuration          true
Choose a web ACL                        cloudfront-waf
Alternate domain name (CNAME)           *.apps.example.com [3]
Custom SSL certificate                  Select the certificate you imported from the step above [4]
[1] Run echo ${NLB} to get the origin domain.
[2] If you have multiple clusters, ensure the origin name is unique.
[3] This should match the wildcard domain you used to create the custom ingress controller.
[4] This should match the alternate domain name entered above.
Retrieve the Amazon CloudFront Distribution endpoint:
$ aws cloudfront list-distributions --query "DistributionList.Items[?Origins.Items[?DomainName=='${NLB}']].DomainName" --output text
Update the DNS of your custom wildcard domain with a CNAME to the Amazon CloudFront Distribution endpoint from the step above.
Example
*.apps.example.com CNAME d1b2c3d4e5f6g7.cloudfront.net
6.4. Deploy a sample application
Create a new project for your sample application by running the following command:
$ oc new-project hello-world
Deploy a hello world application:
$ oc -n hello-world new-app --image=docker.io/openshift/hello-openshift
Create a route for the application specifying your custom domain name:
Example
$ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls \ --hostname hello-openshift.${DOMAIN}
Label the route to admit it to your custom ingress controller:
$ oc -n hello-world label route.route.openshift.io/hello-openshift-tls route=waf
6.5. Test the WAF
Test that the app is accessible behind Amazon CloudFront:
Example
$ curl "https://hello-openshift.${DOMAIN}"
Example output
Hello OpenShift!
Test that the WAF denies a bad request:
Example
$ curl -X POST "https://hello-openshift.${DOMAIN}" \ -F "user='<script><alert>Hello></alert></script>'"
Example output
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1"> <TITLE>ERROR: The request could not be satisfied</TITLE> </HEAD><BODY> <H1>403 ERROR</H1> <H2>The request could not be satisfied.</H2> <HR noshade size="1px"> Request blocked. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner. <BR clear="all"> If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation. <BR clear="all"> <HR noshade size="1px"> <PRE> Generated by cloudfront (CloudFront) Request ID: nFk9q2yB8jddI6FZOTjdliexzx-FwZtr8xUQUNT75HThPlrALDxbag== </PRE> <ADDRESS> </ADDRESS> </BODY></HTML>
The expected result is a
403 ERROR
, which means the AWS WAF is protecting your application.
6.6. Additional resources
Chapter 7. Tutorial: Using AWS WAF and AWS ALBs to protect ROSA workloads
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.
You can use an AWS Application Load Balancer (ALB) to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS (ROSA) workloads. Using an external solution protects ROSA resources from experiencing denial of service due to handling the WAF.
It is recommended that you use the more flexible CloudFront method unless you absolutely must use an ALB based solution.
7.1. Prerequisites
Multiple availability zone (AZ) ROSA (HCP or Classic) cluster.
Note: AWS ALBs require at least two public subnets across AZs, per the AWS documentation. For this reason, only multiple AZ ROSA clusters can be used with ALBs.
-
You have access to the OpenShift CLI (
oc
). -
You have access to the AWS CLI (
aws
).
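A quick way to confirm that your cluster nodes span multiple availability zones is to list the zone label on each node:
$ oc get nodes -L topology.kubernetes.io/zone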
7.1.1. Environment setup
Prepare the environment variables:
$ export AWS_PAGER=""
$ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER}/alb-waf"
$ mkdir -p ${SCRATCH}
$ echo "Cluster: $(echo ${CLUSTER} | sed 's/-[a-z0-9]\{5\}$//'), Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
7.1.2. AWS VPC and subnets
This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below.
Set the below variables to the proper values for your ROSA deployment:
$ export VPC_ID=<vpc-id> 1
$ export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>) 2
$ export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>) 3
- 1 Replace with the VPC ID of the cluster, for example: export VPC_ID=vpc-04c429b7dbc4680ba.
- 2 Replace with a space-separated list of the public subnet IDs of the cluster, making sure to preserve the (). For example: export PUBLIC_SUBNET_IDS=(subnet-056fd6861ad332ba2 subnet-08ce3b4ec753fe74c subnet-071aa28228664972f).
- 3 Replace with a space-separated list of the private subnet IDs of the cluster, making sure to preserve the (). For example: export PRIVATE_SUBNET_IDS=(subnet-0b933d72a8d72c36a subnet-0817eb72070f1d3c2 subnet-0806e64159b66665a).
Add a tag to your cluster’s VPC with the cluster identifier:
$ aws ec2 create-tags --resources ${VPC_ID} \ --tags Key=kubernetes.io/cluster/${CLUSTER},Value=shared --region ${REGION}
Add a tag to your public subnets:
$ aws ec2 create-tags \ --resources ${PUBLIC_SUBNET_IDS} \ --tags Key=kubernetes.io/role/elb,Value='1' \ Key=kubernetes.io/cluster/${CLUSTER},Value=shared \ --region ${REGION}
Add a tag to your private subnets:
$ aws ec2 create-tags \ --resources ${PRIVATE_SUBNET_IDS} \ --tags Key=kubernetes.io/role/internal-elb,Value='1' \ Key=kubernetes.io/cluster/${CLUSTER},Value=shared \ --region ${REGION}
7.2. Deploy the AWS Load Balancer Operator
The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a ROSA cluster. To deploy ALBs in ROSA, we need to first deploy the AWS Load Balancer Operator.
Create a new project to deploy the AWS Load Balancer Operator into by running the following command:
$ oc new-project aws-load-balancer-operator
Create an AWS IAM policy for the AWS Load Balancer Controller if one does not already exist by running the following command:
Note: The policy is sourced from the upstream AWS Load Balancer Controller policy. This is required by the operator to function.
$ POLICY_ARN=$(aws iam list-policies --query \ "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \ --output text)
$ if [[ -z "${POLICY_ARN}" ]]; then wget -O "${SCRATCH}/load-balancer-operator-policy.json" \ https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \ --output text iam create-policy \ --policy-name aws-load-balancer-operator-policy \ --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json") fi
Create an AWS IAM trust policy for AWS Load Balancer Operator:
$ cat <<EOF > "${SCRATCH}/trust-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"] } }, "Principal": { "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF
Create an AWS IAM role for the AWS Load Balancer Operator:
$ ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER}-alb-operator" \ --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \ --query Role.Arn --output text)
Attach the AWS Load Balancer Operator policy to the IAM role we created previously by running the following command:
$ aws iam attach-role-policy --role-name "${CLUSTER}-alb-operator" \ --policy-arn ${POLICY_ARN}
Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
stringData:
  credentials: |
    [default]
    role_arn = ${ROLE_ARN}
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
Install the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1.0
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: aws-load-balancer-operator.v1.0.0
EOF
Deploy an instance of the AWS Load Balancer Controller using the operator:
Note: If you get an error here, wait a minute and try again; it means the Operator has not completed installing yet.
$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  credentials:
    name: aws-load-balancer-operator
  enabledAddons:
    - AWSWAFv2
EOF
Check that the operator and controller pods are both running:
$ oc -n aws-load-balancer-operator get pods
You should see the following; if not, wait a moment and retry:
NAME                                                              READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
7.3. Deploy a sample application
Create a new project for our sample application:
$ oc new-project hello-world
Deploy a hello world application:
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Convert the pre-created service resource to a NodePort service type:
$ oc -n hello-world patch service hello-openshift -p '{"spec":{"type":"NodePort"}}'
Deploy an AWS ALB using the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-alb
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: hello-openshift
                port:
                  number: 8080
EOF
Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
Note: AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
7.3.1. Configure the AWS WAF
The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like ROSA.
Create an AWS WAF rules file to apply to our web ACL:
$ cat << EOF > ${SCRATCH}/waf-rules.json [ { "Name": "AWS-AWSManagedRulesCommonRuleSet", "Priority": 0, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesCommonRuleSet" } }, { "Name": "AWS-AWSManagedRulesSQLiRuleSet", "Priority": 1, "Statement": { "ManagedRuleGroupStatement": { "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet" } }, "OverrideAction": { "None": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "AWS-AWSManagedRulesSQLiRuleSet" } } ] EOF
This will enable the Core (Common) and SQL AWS Managed Rule Sets.
Create an AWS WAF Web ACL using the rules we specified above:
$ WAF_ARN=$(aws wafv2 create-web-acl \ --name ${CLUSTER}-waf \ --region ${REGION} \ --default-action Allow={} \ --scope REGIONAL \ --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=${CLUSTER}-waf-metrics \ --rules file://${SCRATCH}/waf-rules.json \ --query 'Summary.ARN' \ --output text)
Annotate the Ingress resource with the AWS WAF Web ACL ARN:
$ oc annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb \ alb.ingress.kubernetes.io/wafv2-acl-arn=${WAF_ARN}
Wait for 10 seconds for the rules to propagate and test that the app still works:
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
Test that the WAF denies a bad request:
$ curl -X POST "http://${INGRESS}" \ -F "user='<script><alert>Hello></alert></script>'"
Example output
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
</body>
</html>
Note: Activation of the AWS WAF integration can sometimes take several minutes. If you do not receive a 403 Forbidden error, please wait a few seconds and try again.
The expected result is a 403 Forbidden error, which means the AWS WAF is protecting your application.
7.4. Additional resources
Chapter 8. Tutorial: Deploying OpenShift API for Data Protection on a ROSA cluster
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Prerequisites
Environment
Prepare the environment variables:
Note: Change the cluster name to match your ROSA cluster and ensure you are logged in to the cluster as an Administrator. Ensure all fields are output correctly before moving on.
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`
$ export CLUSTER_VERSION=`rosa describe cluster -c ${CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.'`
$ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
$ export AWS_PAGER=""
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
$ mkdir -p ${SCRATCH}
$ echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
8.1. Prepare AWS Account
Create an IAM Policy to allow for S3 Access:
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) if [[ -z "${POLICY_ARN}" ]]; then $ cat << EOF > ${SCRATCH}/policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF $ POLICY_ARN=$(aws iam create-policy --policy-name "RosaOadpVer1" \ --policy-document file:///${SCRATCH}/policy.json --query Policy.Arn \ --tags Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \ --output text) fi $ echo ${POLICY_ARN}
Create an IAM Role trust policy for the cluster:
$ cat <<EOF > ${SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF $ ROLE_ARN=$(aws iam create-role --role-name \ "${ROLE_NAME}" \ --assume-role-policy-document file://${SCRATCH}/trust-policy.json \ --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \ --query Role.Arn --output text) $ echo ${ROLE_ARN}
Attach the IAM Policy to the IAM Role:
$ aws iam attach-role-policy --role-name "${ROLE_NAME}" \ --policy-arn ${POLICY_ARN}
8.2. Deploy OADP on the cluster
Create a namespace for OADP:
$ oc create namespace openshift-adp
Create a credentials secret:
$ cat <<EOF > ${SCRATCH}/credentials
[default]
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
$ oc -n openshift-adp create secret generic cloud-credentials \
  --from-file=${SCRATCH}/credentials
Deploy the OADP Operator:
Note: There is currently an issue with version 1.1 of the Operator with backups that have a PartiallyFailed status. This does not seem to affect the backup and restore process, but it should be noted as there are issues with it.
$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  generateName: openshift-adp-
  namespace: openshift-adp
  name: oadp
spec:
  targetNamespaces:
    - openshift-adp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.2
  installPlanApproval: Automatic
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Wait for the Operator to be ready:
$ watch oc -n openshift-adp get pods
Example output
NAME READY STATUS RESTARTS AGE openshift-adp-controller-manager-546684844f-qqjhn 1/1 Running 0 22s
Create Cloud Storage:
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: CloudStorage
metadata:
  name: ${CLUSTER_NAME}-oadp
  namespace: openshift-adp
spec:
  creationSecret:
    key: credentials
    name: cloud-credentials
  enableSharedConfig: true
  name: ${CLUSTER_NAME}-oadp
  provider: aws
  region: $REGION
EOF
Check your application’s storage default storage class:
$ oc get pvc -n <namespace> 1
- 1
- Enter your application’s namespace.
Example output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h
$ oc get storageclass
Example output
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h
Using either gp3-csi, gp2-csi, gp3, or gp2 will work. If the applications being backed up all use PVs provisioned with CSI, include the CSI plugin in the OADP DPA configuration.
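If you are not sure whether a storage class is CSI-backed, you can optionally inspect its provisioner. This is a generic oc check rather than part of the original procedure; gp3-csi is used here only as an example name taken from the output above:
$ oc get storageclass gp3-csi -o jsonpath='{.provisioner}{"\n"}'
A provisioner such as ebs.csi.aws.com indicates a CSI-backed storage class.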
CSI only: Deploy a Data Protection Application:
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ${CLUSTER_NAME}-dpa
  namespace: openshift-adp
spec:
  backupImages: true
  features:
    dataMover:
      enable: false
  backupLocations:
  - bucket:
      cloudStorageRef:
        name: ${CLUSTER_NAME}-oadp
      credential:
        key: credentials
        name: cloud-credentials
      prefix: velero
      default: true
      config:
        region: ${REGION}
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
    restic:
      enable: false
EOF
Note: If you run this command for CSI volumes, you can skip the next step.
Non-CSI volumes: Deploy a Data Protection Application:
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ${CLUSTER_NAME}-dpa
  namespace: openshift-adp
spec:
  backupImages: true
  features:
    dataMover:
      enable: false
  backupLocations:
  - bucket:
      cloudStorageRef:
        name: ${CLUSTER_NAME}-oadp
      credential:
        key: credentials
        name: cloud-credentials
      prefix: velero
      default: true
      config:
        region: ${REGION}
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
    restic:
      enable: false
  snapshotLocations:
  - velero:
      config:
        credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials
        enableSharedConfig: 'true'
        profile: default
        region: ${REGION}
      provider: aws
EOF
- In OADP 1.1.x ROSA STS environments, the container image backup and restore (spec.backupImages) value must be set to false as it is not supported.
- The Restic feature (restic.enable=false) is disabled and not supported in ROSA STS environments.
- The DataMover feature (dataMover.enable=false) is disabled and not supported in ROSA STS environments.
8.3. Perform a backup
The following sample hello-world application has no attached persistent volumes. Either DPA configuration will work.
Create a workload to back up:
$ oc create namespace hello-world $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Expose the route:
$ oc expose service/hello-openshift -n hello-world
Check that the application is working:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
Back up the workload:
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
  - hello-world
  storageLocation: ${CLUSTER_NAME}-dpa-1
  ttl: 720h0m0s
EOF
Wait until the backup is done:
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
Example output
{ "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 }
Delete the demo workload:
$ oc delete ns hello-world
Restore from the backup:
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF
Wait for the Restore to finish:
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
Example output
{ "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 }
Check that the workload is restored:
$ oc -n hello-world get pods
Example output
NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
- For troubleshooting tips, refer to the OADP team’s troubleshooting documentation
- Additional sample applications can be found in the OADP team’s sample applications directory
8.4. Cleanup
Delete the workload:
$ oc delete ns hello-world
Remove the backup and restore resources from the cluster if they are no longer required:
$ oc -n openshift-adp delete backup hello-world
$ oc -n openshift-adp delete restore hello-world
To delete the backup and restore objects, including the remote objects in S3, use the Velero CLI:
$ velero backup delete hello-world
$ velero restore delete hello-world
Delete the Data Protection Application:
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
Delete the Cloud Storage:
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
Warning: If this command hangs, you might need to delete the finalizer:
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
Remove the Operator if it is no longer required:
$ oc -n openshift-adp delete subscription redhat-oadp-operator
Remove the namespace for the Operator:
$ oc delete ns openshift-adp
Remove the Custom Resource Definitions from the cluster if you no longer wish to have them:
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
$ for CRD in `oc get crds | grep -i oadp | awk '{print $1}'`; do oc delete crd $CRD; done
Delete the AWS S3 Bucket:
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
Detach the Policy from the role:
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" \ --policy-arn "${POLICY_ARN}"
Delete the role:
$ aws iam delete-role --role-name "${ROLE_NAME}"
Chapter 9. Tutorial: AWS Load Balancer Operator on ROSA
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Red Hat OpenShift Service on AWS (ROSA) cluster. The controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when implementing Kubernetes Service resources with a type of LoadBalancer.
Compared with the default AWS in-tree load balancer provider, this controller supports advanced annotations for both ALBs and NLBs. Some advanced use cases are:
- Using native Kubernetes Ingress objects with ALBs
- Integrating ALBs with the AWS Web Application Firewall (WAF) service
- Specifying custom NLB source IP ranges
- Specifying custom NLB internal IP addresses
The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a ROSA cluster.
9.1. Prerequisites
AWS ALBs require a multi-AZ cluster, as well as three public subnets split across three AZs in the same VPC as the cluster. This makes ALBs unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.
- A multi-AZ ROSA classic cluster
- BYO VPC cluster
- AWS CLI
- OC CLI
9.1.1. Environment
Prepare the environment variables:
$ export AWS_PAGER="" $ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//') $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) $ export SCRATCH="/tmp/${ROSA_CLUSTER_NAME}/alb-operator" $ mkdir -p ${SCRATCH} $ echo "Cluster: ${ROSA_CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
9.1.2. AWS VPC and subnets
This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below.
Set the below variables to the proper values for your ROSA deployment:
$ export VPC_ID=<vpc-id> $ export PUBLIC_SUBNET_IDS=<public-subnets> $ export PRIVATE_SUBNET_IDS=<private-subnets> $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
Add a tag to your cluster’s VPC with the cluster name:
$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
Add a tag to your public subnets:
$ aws ec2 create-tags \ --resources ${PUBLIC_SUBNET_IDS} \ --tags Key=kubernetes.io/role/elb,Value='' \ --region ${REGION}
Add a tag to your private subnets:
$ aws ec2 create-tags \ --resources "${PRIVATE_SUBNET_IDS}" \ --tags Key=kubernetes.io/role/internal-elb,Value='' \ --region ${REGION}
9.2. Installation
Create an AWS IAM policy for the AWS Load Balancer Controller:
NoteThe policy is sourced from the upstream AWS Load Balancer Controller policy plus permission to create tags on subnets. This is required by the operator to function.
$ oc new-project aws-load-balancer-operator $ POLICY_ARN=$(aws iam list-policies --query \ "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \ --output text) $ if [[ -z "${POLICY_ARN}" ]]; then wget -O "${SCRATCH}/load-balancer-operator-policy.json" \ https://raw.githubusercontent.com/rh-mobb/documentation/main/content/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \ --output text iam create-policy \ --policy-name aws-load-balancer-operator-policy \ --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json") fi $ echo $POLICY_ARN
Create an AWS IAM trust policy for AWS Load Balancer Operator:
$ cat <<EOF > "${SCRATCH}/trust-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"] } }, "Principal": { "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF
Create an AWS IAM role for the AWS Load Balancer Operator:
$ ROLE_ARN=$(aws iam create-role --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \ --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \ --query Role.Arn --output text) $ echo $ROLE_ARN $ aws iam attach-role-policy --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \ --policy-arn $POLICY_ARN
Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role:
$ cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator stringData: credentials: | [default] role_arn = $ROLE_ARN web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF
Install the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1.0 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: aws-load-balancer-operator.v1.0.0 EOF
Deploy an instance of the AWS Load Balancer Controller using the operator:
Note: If you get an error here, the Operator has not finished installing yet. Wait a minute and try again.
$ cat << EOF | oc apply -f - apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: credentials: name: aws-load-balancer-operator EOF
Check that the operator and controller pods are both running:
$ oc -n aws-load-balancer-operator get pods
You should see the following output; if you do not, wait a moment and retry:
NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s
9.3. Validating the deployment
Create a new project:
$ oc new-project hello-world
Deploy a hello world application:
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Configure a NodePort service for the AWS ALB to connect to:
$ cat << EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: hello-openshift-nodeport namespace: hello-world spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: deployment: hello-openshift EOF
Deploy an AWS ALB using the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-openshift-alb namespace: hello-world annotations: alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: hello-openshift-nodeport port: number: 80 EOF
Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
Note: AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
Deploy an AWS NLB for your hello world application:
$ cat << EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: hello-openshift-nlb namespace: hello-world annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: LoadBalancer selector: deployment: hello-openshift EOF
Test the AWS NLB endpoint:
Note: NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.
$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
Example output
Hello OpenShift!
9.4. Cleaning up
Delete the hello world application namespace (and all the resources in the namespace):
$ oc delete project hello-world
Delete the AWS Load Balancer Operator and the AWS IAM roles:
$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator $ aws iam detach-role-policy \ --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \ --policy-arn $POLICY_ARN $ aws iam delete-role \ --role-name "${ROSA_CLUSTER_NAME}-alb-operator"
Delete the AWS IAM policy:
$ aws iam delete-policy --policy-arn $POLICY_ARN
Chapter 10. Tutorial: Configuring Microsoft Entra ID (formerly Azure Active Directory) as an identity provider
You can configure Microsoft Entra ID (formerly Azure Active Directory) as the cluster identity provider in Red Hat OpenShift Service on AWS (ROSA).
This tutorial guides you to complete the following tasks:
- Register a new application in Entra ID for authentication.
- Configure the application registration in Entra ID to include optional and group claims in tokens.
- Configure the Red Hat OpenShift Service on AWS cluster to use Entra ID as the identity provider.
- Grant additional permissions to individual groups.
10.1. Prerequisites
- You created a set of security groups and assigned users by following the Microsoft documentation.
10.2. Registering a new application in Entra ID for authentication
To register your application in Entra ID, first create the OAuth callback URL, then register your application.
Procedure
Create the cluster’s OAuth callback URL by changing the specified variables and running the following command:
Note: Remember to save this callback URL; it will be required later in the process.
$ domain=$(rosa describe cluster -c <cluster_name> | grep "DNS" | grep -oE '\S+.openshiftapps.com')
$ echo "OAuth callback URL: https://oauth-openshift.apps.$domain/oauth2callback/AAD"
The "AAD" directory at the end of the OAuth callback URL must match the OAuth identity provider name that you will set up later in this process.
Create the Entra ID application by logging in to the Azure portal, and select the App registrations blade. Then, select New registration to create a new application.
- Name the application, for example openshift-auth.
- Select Web from the Redirect URI dropdown and enter the value of the OAuth callback URL you retrieved in the previous step.
After providing the required information, click Register to create the application.
Select the Certificates & secrets sub-blade and select New client secret.
Complete the requested details and store the generated client secret value. This secret is required later in this process.
Important: After initial setup, you cannot see the client secret. If you did not record the client secret, you must generate a new one.
Select the Overview sub-blade and note the Application (client) ID and Directory (tenant) ID. You will need these values in a future step.
10.3. Configuring the application registration in Entra ID to include optional and group claims
So that Red Hat OpenShift Service on AWS has enough information to create the user's account, you must configure Entra ID to provide two optional claims: email and preferred_username. For more information about optional claims in Entra ID, see the Microsoft documentation.
In addition to individual user authentication, Red Hat OpenShift Service on AWS provides group claim functionality. This functionality allows an OpenID Connect (OIDC) identity provider, such as Entra ID, to offer a user’s group membership for use within Red Hat OpenShift Service on AWS.
Configuring optional claims
You can configure the optional claims in Entra ID.
Click the Token configuration sub-blade and select the Add optional claim button.
Select the ID radio button.
Select the email claim checkbox.
Select the preferred_username claim checkbox. Then, click Add to configure the email and preferred_username claims for your Entra ID application.
A dialog box appears at the top of the page. Follow the prompt to enable the necessary Microsoft Graph permissions.
Configuring group claims (optional)
Configure Entra ID to offer a groups claim.
Procedure
From the Token configuration sub-blade, click Add groups claim.
To configure group claims for your Entra ID application, select Security groups and then click Add.
Note: In this example, the group claim includes all of the security groups that a user is a member of. In a production environment, ensure that the group claim only includes groups that apply to Red Hat OpenShift Service on AWS.
10.4. Configuring the Red Hat OpenShift Service on AWS cluster to use Entra ID as the identity provider
You must configure Red Hat OpenShift Service on AWS to use Entra ID as its identity provider.
Although ROSA offers the ability to configure identity providers by using OpenShift Cluster Manager, use the ROSA CLI to configure the cluster’s OAuth provider to use Entra ID as its identity provider. Before configuring the identity provider, set the necessary variables for the identity provider configuration.
Procedure
Create the variables by running the following command:
$ CLUSTER_NAME=example-cluster 1
$ IDP_NAME=AAD 2
$ APP_ID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy 3
$ CLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 4
$ TENANT_ID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz 5
- 1
- Replace with the name of your cluster.
- 2
- The identity provider name; it must match the directory at the end of the OAuth callback URL (AAD in this tutorial).
- 3
- The Application (client) ID noted from the application's Overview sub-blade.
- 4
- The client secret value you stored earlier.
- 5
- The Directory (tenant) ID noted from the application's Overview sub-blade.
Configure the cluster's OAuth provider by running the following command. If you enabled group claims, ensure that you use the --groups-claims groups argument.
If you enabled group claims, run the following command:
$ rosa create idp \
  --cluster ${CLUSTER_NAME} \
  --type openid \
  --name ${IDP_NAME} \
  --client-id ${APP_ID} \
  --client-secret ${CLIENT_SECRET} \
  --issuer-url https://login.microsoftonline.com/${TENANT_ID}/v2.0 \
  --email-claims email \
  --name-claims name \
  --username-claims preferred_username \
  --extra-scopes email,profile \
  --groups-claims groups
If you did not enable group claims, run the following command:
$ rosa create idp \
  --cluster ${CLUSTER_NAME} \
  --type openid \
  --name ${IDP_NAME} \
  --client-id ${APP_ID} \
  --client-secret ${CLIENT_SECRET} \
  --issuer-url https://login.microsoftonline.com/${TENANT_ID}/v2.0 \
  --email-claims email \
  --name-claims name \
  --username-claims preferred_username \
  --extra-scopes email,profile
After a few minutes, the cluster authentication Operator reconciles your changes, and you can log in to the cluster by using Entra ID.
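If you want to watch the rollout, you can optionally check the cluster authentication Operator while it reconciles. This is a generic status check and not part of the original procedure:
$ oc get clusteroperator authentication
The Operator typically reports PROGRESSING as True while the new identity provider configuration is being rolled out.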
10.5. Granting additional permissions to individual users and groups
When you first log in, you might notice that you have very limited permissions. By default, Red Hat OpenShift Service on AWS only grants you the ability to create new projects, or namespaces, in the cluster. Other projects are restricted from view.
You must grant these additional abilities to individual users and groups.
Granting additional permissions to individual users
Red Hat OpenShift Service on AWS includes a significant number of preconfigured roles, including the cluster-admin
role that grants full access and control over the cluster.
Procedure
Grant a user access to the cluster-admin role by running the following command:
$ rosa grant user cluster-admin \
  --user=<USERNAME> \ 1
  --cluster=${CLUSTER_NAME}
- 1
- Provide the Entra ID username that you want to have cluster admin permissions.
Granting additional permissions to individual groups
If you opted to enable group claims, the cluster OAuth provider automatically creates or updates the user's group memberships by using the group ID. The cluster OAuth provider does not automatically create RoleBindings and ClusterRoleBindings for the groups that are created; you are responsible for creating those bindings by using your own processes.
To grant an automatically generated group access to the cluster-admin role, you must create a ClusterRoleBinding to the group ID.
Procedure
Create the ClusterRoleBinding by running the following command:
$ oc create clusterrolebinding cluster-admin-group \
  --clusterrole=cluster-admin \
  --group=<GROUP_ID> 1
- 1
- Provide the Entra ID group ID that you want to have cluster admin permissions.
Now, any user in the specified group automatically receives cluster-admin access.
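As an optional check, you can confirm that the binding exists and see which group it references; cluster-admin-group is the binding name from the example command above:
$ oc get clusterrolebinding cluster-admin-group -o wide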
10.6. Additional resources
For more information about how to use RBAC to define and apply permissions in Red Hat OpenShift Service on AWS, see the Red Hat OpenShift Service on AWS documentation.
Chapter 11. Tutorial: Using AWS Secrets Manager CSI on ROSA with STS
The AWS Secrets and Configuration Provider (ASCP) provides a way to expose AWS Secrets as Kubernetes storage volumes. With the ASCP, you can store and manage your secrets in Secrets Manager and then retrieve them through your workloads running on Red Hat OpenShift Service on AWS (ROSA).
11.1. Prerequisites
Ensure that you have the following resources and tools before starting this process:
- A ROSA cluster deployed with STS
- Helm 3
- aws CLI
- oc CLI
- jq CLI
Additional environment requirements
Log in to your ROSA cluster by running the following command:
$ oc login --token=<your-token> --server=<your-server-url>
You can find your login token by accessing your cluster from Red Hat OpenShift Cluster Manager.
Validate that your cluster has STS by running the following command:
$ oc get authentication.config.openshift.io cluster -o json \ | jq .spec.serviceAccountIssuer
Example output
"https://xxxxx.cloudfront.net/xxxxx"
If your output is different, do not proceed. See Red Hat documentation on creating an STS cluster before continuing this process.
Set the SecurityContextConstraints permission to allow the CSI driver to run by running the following commands:
$ oc new-project csi-secrets-store
$ oc adm policy add-scc-to-user privileged \
  system:serviceaccount:csi-secrets-store:secrets-store-csi-driver
$ oc adm policy add-scc-to-user privileged \
  system:serviceaccount:csi-secrets-store:csi-secrets-store-provider-aws
Create environment variables to use later in this process by running the following command:
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster \ -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') $ export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text` $ export AWS_PAGER=""
11.2. Deploying the AWS Secrets and Configuration Provider
Use Helm to register the secrets store CSI driver by running the following command:
$ helm repo add secrets-store-csi-driver \ https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
Update your Helm repositories by running the following command:
$ helm repo update
Install the secrets store CSI driver by running the following command:
$ helm upgrade --install -n csi-secrets-store \ csi-secrets-store-driver secrets-store-csi-driver/secrets-store-csi-driver
Deploy the AWS provider by running the following command:
$ oc -n csi-secrets-store apply -f \ https://raw.githubusercontent.com/rh-mobb/documentation/main/content/misc/secrets-store-csi/aws-provider-installer.yaml
Check that both Daemonsets are running by running the following command:
$ oc -n csi-secrets-store get ds \ csi-secrets-store-provider-aws \ csi-secrets-store-driver-secrets-store-csi-driver
Label the Secrets Store CSI Driver to allow use with the restricted pod security profile by running the following command:
$ oc label csidriver.storage.k8s.io/secrets-store.csi.k8s.io security.openshift.io/csi-ephemeral-volume-profile=restricted
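You can optionally confirm that the label was applied; this is a generic verification step, not part of the original procedure:
$ oc get csidriver secrets-store.csi.k8s.io --show-labels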
11.3. Creating a Secret and IAM Access Policies
Create a secret in Secrets Manager by running the following command:
$ SECRET_ARN=$(aws --region "$REGION" secretsmanager create-secret \ --name MySecret --secret-string \ '{"username":"shadowman", "password":"hunter2"}' \ --query ARN --output text); echo $SECRET_ARN
Create an IAM Access Policy document by running the following command:
$ cat << EOF > policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": ["$SECRET_ARN"] }] } EOF
Create an IAM Access Policy by running the following command:
$ POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \ --output text iam create-policy \ --policy-name openshift-access-to-mysecret-policy \ --policy-document file://policy.json); echo $POLICY_ARN
Create an IAM Role trust policy document by running the following command:
Note: The trust policy is locked down to the default service account of a namespace you create later in this process.
$ cat <<EOF > trust-policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:my-application:default"] } }, "Principal": { "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF
Create an IAM role by running the following command:
$ ROLE_ARN=$(aws iam create-role --role-name openshift-access-to-mysecret \ --assume-role-policy-document file://trust-policy.json \ --query Role.Arn --output text); echo $ROLE_ARN
Attach the role to the policy by running the following command:
$ aws iam attach-role-policy --role-name openshift-access-to-mysecret \ --policy-arn $POLICY_ARN
11.4. Create an Application to use this secret
Create an OpenShift project by running the following command:
$ oc new-project my-application
Annotate the default service account to use the STS Role by running the following command:
$ oc annotate -n my-application serviceaccount default \ eks.amazonaws.com/role-arn=$ROLE_ARN
Create a secret provider class to access our secret by running the following command:
$ cat << EOF | oc apply -f - apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-application-aws-secrets spec: provider: aws parameters: objects: | - objectName: "MySecret" objectType: "secretsmanager" EOF
Create a pod that uses the secret by running the following command:
$ cat << EOF | oc apply -f - apiVersion: v1 kind: Pod metadata: name: my-application labels: app: my-application spec: volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-application-aws-secrets" containers: - name: my-application-deployment image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true EOF
Verify the pod has the secret mounted by running the following command:
$ oc exec -it my-application -- cat /mnt/secrets-store/MySecret
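Because the secret stored the JSON string created in the previous section, the mounted file should contain that same value.
Example output
{"username":"shadowman", "password":"hunter2"}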
11.5. Clean up
Delete the application by running the following command:
$ oc delete project my-application
Delete the secrets store csi driver by running the following command:
$ helm delete -n csi-secrets-store csi-secrets-store-driver
Delete the security context constraints by running the following command:
$ oc adm policy remove-scc-from-user privileged \ system:serviceaccount:csi-secrets-store:secrets-store-csi-driver; oc adm policy remove-scc-from-user privileged \ system:serviceaccount:csi-secrets-store:csi-secrets-store-provider-aws
Delete the AWS provider by running the following command:
$ oc -n csi-secrets-store delete -f \ https://raw.githubusercontent.com/rh-mobb/documentation/main/content/misc/secrets-store-csi/aws-provider-installer.yaml
Delete AWS Roles and Policies by running the following command:
$ aws iam detach-role-policy --role-name openshift-access-to-mysecret \ --policy-arn $POLICY_ARN; aws iam delete-role --role-name openshift-access-to-mysecret; aws iam delete-policy --policy-arn $POLICY_ARN
Delete the Secrets Manager secret by running the following command:
$ aws secretsmanager --region $REGION delete-secret --secret-id $SECRET_ARN
Chapter 12. Tutorial: Using AWS Controllers for Kubernetes on ROSA
AWS Controllers for Kubernetes (ACK) lets you define and use AWS service resources directly from Red Hat OpenShift Service on AWS (ROSA). With ACK, you can take advantage of AWS-managed services for your applications without needing to define resources outside of the cluster or run services that provide supporting capabilities such as databases or message queues within the cluster.
You can install various ACK Operators directly from OperatorHub. This makes it easy to get started and use the Operators with your applications. Each Operator is a component of the AWS Controllers for Kubernetes project, which is currently in developer preview.
Use this tutorial to deploy the ACK S3 Operator. You can also adapt it for any other ACK Operator in the OperatorHub of your cluster.
12.1. Prerequisites
- A ROSA cluster
- A user account with cluster-admin privileges
- The OpenShift CLI (oc)
- The Amazon Web Services (AWS) CLI (aws)
12.2. Setting up your environment
Configure the following environment variables, changing the cluster name to suit your cluster:
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`
$ export ACK_SERVICE=s3
$ export ACK_SERVICE_ACCOUNT=ack-${ACK_SERVICE}-controller
$ export POLICY_ARN=arn:aws:iam::aws:policy/AmazonS3FullAccess
$ export AWS_PAGER=""
$ export SCRATCH="/tmp/${CLUSTER_NAME}/ack"
$ mkdir -p ${SCRATCH}
Ensure all fields output correctly before moving to the next section:
$ echo "Cluster: ${ROSA_CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
12.3. Preparing your AWS Account
Create an AWS Identity Access Management (IAM) trust policy for the ACK Operator:
$ cat <<EOF > "${SCRATCH}/trust-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "${OIDC_ENDPOINT}:sub": "system:serviceaccount:ack-system:${ACK_SERVICE_ACCOUNT}" } }, "Principal": { "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF
Create an AWS IAM role for the ACK Operator to assume with the AmazonS3FullAccess policy attached:
Note: You can find the recommended policy in each project's GitHub repository, for example https://github.com/aws-controllers-k8s/s3-controller/blob/main/config/iam/recommended-policy-arn.
$ ROLE_ARN=$(aws iam create-role --role-name "ack-${ACK_SERVICE}-controller" \ --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \ --query Role.Arn --output text) $ echo $ROLE_ARN $ aws iam attach-role-policy --role-name "ack-${ACK_SERVICE}-controller" \ --policy-arn ${POLICY_ARN}
12.4. Installing the ACK S3 Controller
Create a project to install the ACK S3 Operator into:
$ oc new-project ack-system
Create a file with the ACK S3 Operator configuration:
Note: ACK_WATCH_NAMESPACE is purposefully left blank so the controller can properly watch all namespaces in the cluster.
$ cat <<EOF > "${SCRATCH}/config.txt"
ACK_ENABLE_DEVELOPMENT_LOGGING=true
ACK_LOG_LEVEL=debug
ACK_WATCH_NAMESPACE=
AWS_REGION=${REGION}
AWS_ENDPOINT_URL=
ACK_RESOURCE_TAGS=${CLUSTER_NAME}
ENABLE_LEADER_ELECTION=true
LEADER_ELECTION_NAMESPACE=
EOF
Use the file from the previous step to create a ConfigMap:
$ oc -n ack-system create configmap \ --from-env-file=${SCRATCH}/config.txt ack-${ACK_SERVICE}-user-config
Install the ACK S3 Operator from OperatorHub:
$ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ack-${ACK_SERVICE}-controller namespace: ack-system spec: upgradeStrategy: Default --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ack-${ACK_SERVICE}-controller namespace: ack-system spec: channel: alpha installPlanApproval: Automatic name: ack-${ACK_SERVICE}-controller source: community-operators sourceNamespace: openshift-marketplace EOF
Annotate the ACK S3 Operator service account with the AWS IAM role to assume and restart the deployment:
$ oc -n ack-system annotate serviceaccount ${ACK_SERVICE_ACCOUNT} \ eks.amazonaws.com/role-arn=${ROLE_ARN} && \ oc -n ack-system rollout restart deployment ack-${ACK_SERVICE}-controller
Verify that the ACK S3 Operator is running:
$ oc -n ack-system get pods
Example output
NAME READY STATUS RESTARTS AGE ack-s3-controller-585f6775db-s4lfz 1/1 Running 0 51s
12.5. Validating the deployment
Deploy an S3 bucket resource:
$ cat << EOF | oc apply -f -
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: ${CLUSTER_NAME}-bucket
  namespace: ack-system
spec:
  name: ${CLUSTER_NAME}-bucket
EOF
Verify the S3 bucket was created in AWS:
$ aws s3 ls | grep ${CLUSTER_NAME}-bucket
Example output
2023-10-04 14:51:45 mrmc-test-maz-bucket
12.6. Cleaning up
Delete the S3 bucket resource:
$ oc -n ack-system delete bucket.s3.services.k8s.aws/${CLUSTER_NAME}-bucket
Delete the ACK S3 Operator and the AWS IAM roles:
$ oc -n ack-system delete subscription ack-${ACK_SERVICE}-controller $ aws iam detach-role-policy \ --role-name "ack-${ACK_SERVICE}-controller" \ --policy-arn ${POLICY_ARN} $ aws iam delete-role \ --role-name "ack-${ACK_SERVICE}-controller"
Delete the ack-system project:
$ oc delete project ack-system
Chapter 13. Tutorial: Deploying the External DNS Operator on ROSA
The External DNS Operator deploys and manages ExternalDNS to provide name resolution for services and routes from an external DNS provider, such as Amazon Route 53, to Red Hat OpenShift Service on AWS (ROSA) clusters. In this tutorial, we will deploy and configure the External DNS Operator with a secondary ingress controller to manage DNS records in Amazon Route 53.
The External DNS Operator does not support STS using IAM Roles for Service Accounts (IRSA) and uses long-lived Identity Access Management (IAM) credentials instead. This tutorial will be updated when the Operator supports STS.
13.1. Prerequisites
- A ROSA Classic cluster
Note: ROSA with HCP is not supported at this time.
- A user account with cluster-admin privileges
- The OpenShift CLI (oc)
- The Amazon Web Services (AWS) CLI (aws)
- A unique domain, such as apps.example.com
- An Amazon Route 53 public hosted zone for the above domain
13.2. Setting up your environment
Configure the following environment variables:
$ export DOMAIN=<apps.example.com> 1 $ export AWS_PAGER="" $ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//') $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) $ export SCRATCH="/tmp/${CLUSTER}/external-dns" $ mkdir -p ${SCRATCH}
- 1
- Replace with the custom domain you want to use for the IngressController.
Ensure all fields output correctly before moving to the next section:
$ echo "Cluster: ${CLUSTER}, Region: ${REGION}, AWS Account ID: ${AWS_ACCOUNT_ID}"
Note: The "Cluster" output from the previous command may be the name of your cluster, the internal ID of your cluster, or the cluster's domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:
$ export CLUSTER=my-custom-value
13.3. Secondary ingress controller setup
Use the following procedure to deploy a secondary ingress controller using a custom domain.
Prerequisites
- A unique domain, such as apps.example.com
- A wildcard or SAN TLS certificate configured with the custom domain selected above (CN=*.apps.example.com)
Procedure
Create a new TLS secret from a private key and a public certificate, where fullchain.pem is your full wildcard certificate chain (including any intermediaries) and privkey.pem is your wildcard certificate's private key:
$ oc -n openshift-ingress create secret tls external-dns-tls --cert=fullchain.pem --key=privkey.pem
Create a new IngressController resource:
$ cat << EOF | oc apply -f -
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external-dns-ingress
  namespace: openshift-ingress-operator
spec:
  domain: ${DOMAIN}
  defaultCertificate:
    name: external-dns-tls
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Unmanaged
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
EOF
Warning: This IngressController example will create an internet accessible Network Load Balancer (NLB) in your AWS account. To provision an internal NLB instead, set the .spec.endpointPublishingStrategy.loadBalancer.scope parameter to Internal before creating the IngressController resource.
Verify that your custom domain IngressController has successfully created an external load balancer:
$ oc -n openshift-ingress get service/router-external-dns-ingress
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-dns-ingress LoadBalancer 172.30.71.250 a4838bb991c6748439134ab89f132a43-aeae124077b50c01.elb.us-east-1.amazonaws.com 80:32227/TCP,443:30310/TCP 43s
13.4. Preparing your AWS account
Retrieve the Amazon Route 53 public hosted zone ID:
$ export ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \ --dns-name "${DOMAIN}." --query 'HostedZones[0]'.Id --out text | sed 's/\/hostedzone\///')
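Optionally confirm that a hosted zone ID was found before continuing:
$ echo ${ZONE_ID}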
Prepare a document with the necessary DNS changes to enable DNS resolution for the canonical domain of the Ingress Controller:
$ NLB_HOST=$(oc -n openshift-ingress get service/router-external-dns-ingress -ojsonpath="{.status.loadBalancer.ingress[0].hostname}") $ cat << EOF > "${SCRATCH}/create-cname.json" { "Comment":"Add CNAME to ingress controller canonical domain", "Changes":[{ "Action":"CREATE", "ResourceRecordSet":{ "Name": "router-external-dns-ingress.${DOMAIN}", "Type":"CNAME", "TTL":30, "ResourceRecords":[{ "Value": "${NLB_HOST}" }] } }] } EOF
The External DNS Operator uses this canonical domain as the target for CNAME records.
Submit your changes to Amazon Route 53 for propagation:
$ aws route53 change-resource-record-sets \
  --hosted-zone-id ${ZONE_ID} \
  --change-batch file://${SCRATCH}/create-cname.json
Create an AWS IAM Policy document that allows the External DNS Operator to update only the custom domain public hosted zone:
$ cat << EOF > "${SCRATCH}/external-dns-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/${ZONE_ID}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
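The attach command below references $POLICY_ARN, so create the IAM policy from this document first. The policy name ${CLUSTER}-external-dns-policy is only an assumed example name, not mandated by the Operator:
$ POLICY_ARN=$(aws iam create-policy --policy-name "${CLUSTER}-external-dns-policy" \
  --policy-document file://${SCRATCH}/external-dns-policy.json \
  --query 'Policy.Arn' --output text)
$ echo $POLICY_ARN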
Create an AWS IAM user:
$ aws iam create-user --user-name "${CLUSTER}-external-dns-operator"
Attach the policy:
$ aws iam attach-user-policy --user-name "${CLUSTER}-external-dns-operator" --policy-arn $POLICY_ARN
Note: This will be changed to STS using IRSA in the future.
Create AWS keys for the IAM user:
$ SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "${CLUSTER}-external-dns-operator")
Create static credentials:
$ cat << EOF > "${SCRATCH}/credentials" [default] aws_access_key_id = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId') aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey') EOF
13.5. Installing the External DNS Operator
Create a new project:
$ oc new-project external-dns-operator
Install the External DNS Operator from OperatorHub:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: external-dns-group
  namespace: external-dns-operator
spec:
  targetNamespaces:
  - external-dns-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: external-dns-operator
  namespace: external-dns-operator
spec:
  channel: stable-v1.1
  installPlanApproval: Automatic
  name: external-dns-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Wait until the External DNS Operator is running:
$ oc rollout status deploy external-dns-operator --timeout=300s
Create a secret from the AWS IAM user credentials:
$ oc -n external-dns-operator create secret generic external-dns \ --from-file "${SCRATCH}/credentials"
Deploy the ExternalDNS controller:
$ cat << EOF | oc apply -f -
apiVersion: externaldns.olm.openshift.io/v1beta1
kind: ExternalDNS
metadata:
  name: ${DOMAIN}
spec:
  domains:
  - filterType: Include
    matchType: Exact
    name: ${DOMAIN}
  provider:
    aws:
      credentials:
        name: external-dns
    type: AWS
  source:
    openshiftRouteOptions:
      routerName: external-dns-ingress
    type: OpenShiftRoute
  zones:
  - ${ZONE_ID}
EOF
Wait until the controller is running:
$ oc rollout status deploy external-dns-${DOMAIN} --timeout=300s
13.6. Deploying a sample application
Now that the ExternalDNS controller is running, you can deploy a sample application to confirm that the custom domain is configured and trusted when you expose a new route.
Create a new project for your sample application:
$ oc new-project hello-world
Deploy a hello world application:
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Create a route for the application specifying your custom domain name:
$ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls \ --hostname hello-openshift.${DOMAIN}
Check if the DNS record was created automatically by ExternalDNS:
Note: It can take a few minutes for the record to appear in Amazon Route 53.
$ aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} \ --query "ResourceRecordSets[?Type == 'CNAME']" | grep hello-openshift
Optional: You can also view the TXT records that indicate they were created by ExternalDNS:
$ aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} \ --query "ResourceRecordSets[?Type == 'TXT']" | grep ${DOMAIN}
Curl the newly created DNS record to your sample application to verify the hello world application is accessible:
$ curl https://hello-openshift.${DOMAIN}
Example output
Hello OpenShift!
Chapter 14. Tutorial: Dynamically issuing certificates using the cert-manager Operator on ROSA
While wildcard certificates provide simplicity by securing all first-level subdomains of a given domain with a single certificate, other use cases can require the use of individual certificates per domain.
Learn how to use the cert-manager Operator for Red Hat OpenShift and Let’s Encrypt to dynamically issue certificates for routes created using a custom domain.
14.1. Prerequisites
- A ROSA cluster (HCP or Classic)
- A user account with cluster-admin privileges
- The OpenShift CLI (oc)
- The Amazon Web Services (AWS) CLI (aws)
- A unique domain, such as *.apps.example.com
- An Amazon Route 53 public hosted zone for the above domain
14.2. Setting up your environment
Configure the following environment variables:
$ export DOMAIN=apps.example.com 1 $ export EMAIL=email@example.com 2 $ export AWS_PAGER="" $ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//') $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed 's|^https://||') $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) $ export SCRATCH="/tmp/${CLUSTER}/dynamic-certs" $ mkdir -p ${SCRATCH}
Ensure all fields output correctly before moving to the next section:
$ echo "Cluster: ${CLUSTER}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
Note: The "Cluster" output from the previous command may be the name of your cluster, the internal ID of your cluster, or the cluster's domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:
$ export CLUSTER=my-custom-value
14.3. Preparing your AWS account
When cert-manager requests a certificate from Let’s Encrypt (or another ACME certificate issuer), Let’s Encrypt servers validate that you control the domain name in that certificate using challenges. For this tutorial, you are using a DNS-01 challenge that proves that you control the DNS for your domain name by putting a specific value in a TXT record under that domain name. This is all done automatically by cert-manager. To allow cert-manager permission to modify the Amazon Route 53 public hosted zone for your domain, you need to create an Identity Access Management (IAM) role with specific policy permissions and a trust relationship to allow access to the pod.
The public hosted zone that is used in this tutorial is in the same AWS account as the ROSA cluster. If your public hosted zone is in a different account, a few additional steps for Cross Account Access are required.
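If you later want to observe a DNS-01 challenge while it is in progress, you can query the TXT record that cert-manager publishes. ACME DNS-01 challenges use a record named _acme-challenge under the hostname being validated; hello.${DOMAIN} is the hostname used by the sample application later in this tutorial, and the dig utility is assumed to be installed:
$ dig +short TXT _acme-challenge.hello.${DOMAIN}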
Retrieve the Amazon Route 53 public hosted zone ID:
Note: This command looks for a public hosted zone that matches the custom domain you specified earlier as the DOMAIN environment variable. You can manually specify the Amazon Route 53 public hosted zone by running export ZONE_ID=<zone_ID>, replacing <zone_ID> with your specific Amazon Route 53 public hosted zone ID.
$ export ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \
  --dns-name "${DOMAIN}." --query 'HostedZones[0]'.Id --out text | sed 's/\/hostedzone\///')
Create an AWS IAM policy document for the cert-manager Operator that provides the ability to update only the specified public hosted zone:
$ cat <<EOF > "${SCRATCH}/cert-manager-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "route53:GetChange", "Resource": "arn:aws:route53:::change/*" }, { "Effect": "Allow", "Action": [ "route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets" ], "Resource": "arn:aws:route53:::hostedzone/${ZONE_ID}" }, { "Effect": "Allow", "Action": "route53:ListHostedZonesByName", "Resource": "*" } ] } EOF
Create the IAM policy using the file you created in the previous step:
$ POLICY_ARN=$(aws iam create-policy --policy-name "${CLUSTER}-cert-manager-policy" \ --policy-document file://${SCRATCH}/cert-manager-policy.json \ --query 'Policy.Arn' --output text)
Create an AWS IAM trust policy for the cert-manager Operator:
$ cat <<EOF > "${SCRATCH}/trust-policy.json" { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Condition": { "StringEquals" : { "${OIDC_ENDPOINT}:sub": "system:serviceaccount:cert-manager:cert-manager" } }, "Principal": { "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity" } ] } EOF
Create an IAM role for the cert-manager Operator using the trust policy you created in the previous step:
$ ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER}-cert-manager-operator" \ --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \ --query Role.Arn --output text)
Attach the permissions policy to the role:
$ aws iam attach-role-policy --role-name "${CLUSTER}-cert-manager-operator" \ --policy-arn ${POLICY_ARN}
14.4. Installing the cert-manager Operator
Create a project to install the cert-manager Operator into:
$ oc new-project cert-manager-operator
Important: Do not attempt to use more than one cert-manager Operator in your cluster. If you have a community cert-manager Operator installed in your cluster, you must uninstall it before installing the cert-manager Operator for Red Hat OpenShift.
Install the cert-manager Operator for Red Hat OpenShift:
$ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator-group namespace: cert-manager-operator spec: targetNamespaces: - cert-manager-operator --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF
Note: It takes a few minutes for this Operator to install and complete its setup.
Verify that the cert-manager Operator is running:
$ oc -n cert-manager-operator get pods
Example output
NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-84b8799db5-gv8mx 2/2 Running 0 12s
Annotate the service account used by the cert-manager pods with the AWS IAM role you created earlier:
$ oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=${ROLE_ARN}
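You can optionally confirm that the annotation is present on the service account before restarting the pod; this is a generic verification step:
$ oc -n cert-manager get serviceaccount cert-manager -o yaml | grep role-arn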
Restart the existing cert-manager controller pod by running the following command:
$ oc -n cert-manager delete pods -l app.kubernetes.io/name=cert-manager
Patch the Operator’s configuration to use external nameservers to prevent DNS-01 challenge resolution issues:
$ oc patch certmanager.operator.openshift.io/cluster --type merge \ -p '{"spec":{"controllerConfig":{"overrideArgs":["--dns01-recursive-nameservers-only","--dns01-recursive-nameservers=1.1.1.1:53"]}}}'
Create a ClusterIssuer resource to use Let's Encrypt by running the following command:
$ cat << EOF | oc apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ${EMAIL}
    # This key doesn't exist, cert-manager creates it
    privateKeySecretRef:
      name: prod-letsencrypt-issuer-account-key
    solvers:
    - dns01:
        route53:
          hostedZoneID: ${ZONE_ID}
          region: ${REGION}
          secretAccessKeySecretRef:
            name: ''
EOF
Verify the ClusterIssuer resource is ready:
$ oc get clusterissuer.cert-manager.io/letsencrypt-production
Example output
NAME READY AGE letsencrypt-production True 47s
14.5. Creating a custom domain Ingress Controller
Create and configure a certificate resource to provision a certificate for the custom domain Ingress Controller:
Note: The following example uses a single domain certificate. SAN and wildcard certificates are also supported.
$ cat << EOF | oc apply -f - apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: custom-domain-ingress-cert namespace: openshift-ingress spec: secretName: custom-domain-ingress-cert-tls issuerRef: name: letsencrypt-production kind: ClusterIssuer commonName: "${DOMAIN}" dnsNames: - "${DOMAIN}" EOF
Verify the certificate has been issued:
Note: It takes a few minutes for this certificate to be issued by Let's Encrypt. If it takes longer than 5 minutes, run oc -n openshift-ingress describe certificate.cert-manager.io/custom-domain-ingress-cert to see any issues reported by cert-manager.
$ oc -n openshift-ingress get certificate.cert-manager.io/custom-domain-ingress-cert
Example output
NAME READY SECRET AGE custom-domain-ingress-cert True custom-domain-ingress-cert-tls 9m53s
Create a new IngressController resource:
$ cat << EOF | oc apply -f -
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: custom-domain-ingress
  namespace: openshift-ingress-operator
spec:
  domain: ${DOMAIN}
  defaultCertificate:
    name: custom-domain-ingress-cert-tls
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Unmanaged
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
EOF
Warning: This IngressController example will create an internet accessible Network Load Balancer (NLB) in your AWS account. To provision an internal NLB instead, set the .spec.endpointPublishingStrategy.loadBalancer.scope parameter to Internal before creating the IngressController resource.
Verify that your custom domain IngressController has successfully created an external load balancer:
$ oc -n openshift-ingress get service/router-custom-domain-ingress
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-custom-domain-ingress LoadBalancer 172.30.174.34 a309962c3bd6e42c08cadb9202eca683-1f5bbb64a1f1ec65.elb.us-east-1.amazonaws.com 80:31342/TCP,443:31821/TCP 7m28s
Prepare a document with the necessary DNS changes to enable DNS resolution for your custom domain Ingress Controller:
$ INGRESS=$(oc -n openshift-ingress get service/router-custom-domain-ingress -ojsonpath="{.status.loadBalancer.ingress[0].hostname}") $ cat << EOF > "${SCRATCH}/create-cname.json" { "Comment":"Add CNAME to custom domain endpoint", "Changes":[{ "Action":"CREATE", "ResourceRecordSet":{ "Name": "*.${DOMAIN}", "Type":"CNAME", "TTL":30, "ResourceRecords":[{ "Value": "${INGRESS}" }] } }] } EOF
Submit your changes to Amazon Route 53 for propagation:
$ aws route53 change-resource-record-sets \ --hosted-zone-id ${ZONE_ID} \ --change-batch file://${SCRATCH}/create-cname.json
Note: While the wildcard CNAME record avoids the need to create a new record for every new application you deploy using the custom domain Ingress Controller, the certificate that each of these applications uses is not a wildcard certificate.
14.6. Configuring dynamic certificates for custom domain routes
Now you can expose cluster applications on any first-level subdomains of the specified domain, but the connection will not be secured with a TLS certificate that matches the domain of the application. To ensure these cluster applications have valid certificates for each domain name, configure cert-manager to dynamically issue a certificate to every new route created under this domain.
Create the necessary OpenShift resources cert-manager requires to manage certificates for OpenShift routes.
This step creates a new deployment (and therefore a pod) that monitors annotated routes in the cluster. If the issuer-kind and issuer-name annotations are found on a new route, it requests a certificate from the Issuer (a ClusterIssuer in this case) that is unique to this route and that honors the hostname specified when the route was created.
NoteIf the cluster does not have access to GitHub, you can save the raw contents locally and run oc apply -f localfilename.yaml -n cert-manager.
$ oc -n cert-manager apply -f https://github.com/cert-manager/openshift-routes/releases/latest/download/cert-manager-openshift-routes.yaml
The following additional OpenShift resources are also created in this step:
- ClusterRole - grants permissions to watch and update the routes across the cluster
- ServiceAccount - uses permissions to run the newly created pod
- ClusterRoleBinding - binds these two resources
Ensure that the new cert-manager-openshift-routes pod is running successfully:
$ oc -n cert-manager get pods
Example result
NAME                                             READY   STATUS    RESTARTS   AGE
cert-manager-866d8f788c-9kspc                    1/1     Running   0          4h21m
cert-manager-cainjector-6885c585bd-znws8         1/1     Running   0          4h41m
cert-manager-openshift-routes-75b6bb44cd-f8kd5   1/1     Running   0          6s
cert-manager-webhook-8498785dd9-bvfdf            1/1     Running   0          4h41m
14.7. Deploying a sample application
Now that dynamic certificates are configured, you can deploy a sample application to confirm that certificates are provisioned and trusted when you expose a new route.
Create a new project for your sample application:
$ oc new-project hello-world
Deploy a hello world application:
$ oc -n hello-world new-app --image=docker.io/openshift/hello-openshift
Create a route to expose the application from outside the cluster:
$ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls --hostname hello.${DOMAIN}
Verify the certificate for the route is untrusted:
$ curl -I https://hello.${DOMAIN}
Example output
curl: (60) SSL: no alternative certificate subject name matches target host name 'hello.example.com'
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Annotate the route to trigger cert-manager to provision a certificate for the custom domain:
$ oc -n hello-world annotate route hello-openshift-tls cert-manager.io/issuer-kind=ClusterIssuer cert-manager.io/issuer-name=letsencrypt-production
NoteIt takes 2-3 minutes for the certificate to be created. The renewal of the certificate will automatically be managed by the cert-manager Operator as it approaches expiration.
Verify the certificate for the route is now trusted:
$ curl -I https://hello.${DOMAIN}
Example output
HTTP/2 200
date: Thu, 05 Oct 2023 23:45:33 GMT
content-length: 17
content-type: text/plain; charset=utf-8
set-cookie: 52e4465485b6fb4f8a1b1bed128d0f3b=68676068bb32d24f0f558f094ed8e4d7; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
14.8. Troubleshooting dynamic certificate provisioning
The validation process usually takes 2-3 minutes to complete while creating certificates.
If annotating your route does not trigger certificate creation, run oc describe against each of the certificate, certificaterequest, order, and challenge resources to view the events or reasons that can help identify the cause of the issue.
$ oc get certificate,certificaterequest,order,challenge
For troubleshooting, you can refer to this helpful guide on debugging certificates.
You can also use the cmctl CLI tool for various certificate management activities, such as checking the status of certificates and testing renewals.
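For example, assuming cmctl is installed on your workstation, the following commands check the status of the certificate created earlier in this tutorial and trigger a test renewal; the certificate name and namespace match the earlier steps:
$ cmctl status certificate custom-domain-ingress-cert -n openshift-ingress
$ cmctl renew custom-domain-ingress-cert -n openshift-ingress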
Chapter 15. Tutorial: Assigning a consistent egress IP for external traffic
You can assign a consistent IP address for traffic that leaves your cluster, which is useful when external systems, such as security groups, require an IP-based configuration to meet security standards.
By default, Red Hat OpenShift Service on AWS (ROSA) uses the OVN-Kubernetes container network interface (CNI) to assign random IP addresses from a pool. This can make configuring IP-based security lockdowns unpredictable or overly permissive.
See Configuring an egress IP address for more information.
Objectives
- Learn how to configure a set of predictable IP addresses for egress cluster traffic.
Prerequisites
- A ROSA cluster deployed with OVN-Kubernetes
- The OpenShift CLI (oc)
- The ROSA CLI (rosa)
- jq
15.1. Setting your environment variables
Set your environment variables by running the following command:
NoteReplace the value of the ROSA_MACHINE_POOL_NAME variable to target a different machine pool.
$ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export ROSA_MACHINE_POOL_NAME=worker
15.2. Ensuring capacity
The number of IP addresses assigned to each node is limited for each public cloud provider.
Verify sufficient capacity by running the following command:
$ oc get node -o json | \
    jq '.items[] |
        {
            "name": .metadata.name,
            "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
            "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4)
        }'
Example output
---
{
  "name": "ip-10-10-145-88.ec2.internal",
  "ips": [
    "10.10.145.88"
  ],
  "capacity": 14
}
{
  "name": "ip-10-10-154-175.ec2.internal",
  "ips": [
    "10.10.154.175"
  ],
  "capacity": 14
}
---
15.3. Creating the egress IP rules
Before creating the egress IP rules, identify which egress IPs you will use.
NoteThe egress IPs that you select should exist as a part of the subnets in which the worker nodes are provisioned.
Optional: Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service.
For more information about requesting explicit IP reservations, see the AWS documentation on subnet CIDR reservations.
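For example, the following AWS CLI command sketches an explicit subnet CIDR reservation; the subnet ID and CIDR block are placeholders that you would replace with values from your own VPC:
$ aws ec2 create-subnet-cidr-reservation \
  --subnet-id subnet-0123456789abcdef0 \
  --cidr 10.10.100.248/29 \
  --reservation-type explicit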
15.4. Assigning an egress IP to a namespace
Create a new project by running the following command:
$ oc new-project demo-egress-ns
Create the egress rule for all pods within the namespace by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-ns
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.253
    - 10.10.150.253
    - 10.10.200.253
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-ns
EOF
15.5. Assigning an egress IP to a pod
Create a new project by running the following command:
$ oc new-project demo-egress-pod
Create the egress rule for the pod by running the following command:
Notespec.namespaceSelector is a mandatory field.
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-pod
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.254
    - 10.10.150.254
    - 10.10.200.254
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-pod
  podSelector:
    matchLabels:
      run: demo-egress-pod
EOF
15.5.1. Labeling the nodes
Obtain your pending egress IP assignments by running the following command:
$ oc get egressips
Example output
NAME              EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253
demo-egress-pod   10.10.100.254
The egress IP rule that you created only applies to nodes with the k8s.ovn.org/egress-assignable label. Make sure that the label is only on a specific machine pool.
Assign the label to your machine pool by running the following command:
WarningIf you rely on node labels for your machine pool, this command will replace those labels. Be sure to input your desired labels into the --labels field to ensure your node labels remain.
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels "k8s.ovn.org/egress-assignable="
15.5.2. Reviewing the egress IPs
Review the egress IP assignments by running the following command:
$ oc get egressips
Example output
NAME              EGRESSIPS       ASSIGNED NODE                   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254
15.6. Verification
15.6.1. Deploying a sample application
To test the egress IP rule, create a service that is restricted to the egress IP addresses which we have specified. This simulates an external service that is expecting a small subset of IP addresses.
Run the echoserver application to replicate a request:
$ oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4
Expose the pod as a service and limit the ingress to the egress IP addresses you specified by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  selector:
    run: demo-service
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local
  # NOTE: this limits the source IPs that are allowed to connect to our service. It
  #       is being used as part of this demo, restricting connectivity to our egress
  #       IP addresses only.
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  #       are deployed.
  loadBalancerSourceRanges:
    - 10.10.100.254/32
    - 10.10.150.254/32
    - 10.10.200.254/32
    - 10.10.100.253/32
    - 10.10.150.253/32
    - 10.10.200.253/32
EOF
Retrieve the load balancer hostname and save it as an environment variable by running the following command:
$ export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')
15.6.2. Testing the namespace egress
Start an interactive shell to test the namespace egress rule:
$ oc run \
  demo-egress-ns \
  -it \
  --namespace=demo-egress-ns \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
Send a request to the load balancer and ensure that you can successfully connect:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
Check the output for a successful connection:
NoteThe client_address is the internal IP address of the load balancer, not your egress IP. You can verify that you have configured the egress IP correctly by connecting to your service, which is limited to .spec.loadBalancerSourceRanges.
Example output
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
Exit the pod by running the following command:
$ exit
15.6.3. Testing the pod egress
Start an interactive shell to test the pod egress rule:
$ oc run \
  demo-egress-pod \
  -it \
  --namespace=demo-egress-pod \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
Send a request to the load balancer by running the following command:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
Check the output for a successful connection:
NoteThe client_address is the internal IP address of the load balancer, not your egress IP. You can verify that you have configured the egress IP correctly by connecting to your service, which is limited to .spec.loadBalancerSourceRanges.
Example output
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
Exit the pod by running the following command:
$ exit
15.6.4. Optional: Testing blocked egress
Optional: Test that the traffic is successfully blocked when the egress rules do not apply by running the following command:
$ oc run \
  demo-egress-pod-fail \
  -it \
  --namespace=demo-egress-pod \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
Send a request to the load balancer by running the following command:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
- If the command is unsuccessful, traffic from workloads that do not match the egress rules is successfully blocked.
Exit the pod by running the following command:
$ exit
15.7. Cleaning up your cluster
Clean up your cluster by running the following commands:
$ oc delete svc demo-service -n default; \
$ oc delete pod demo-service -n default; \
$ oc delete project demo-egress-ns; \
$ oc delete project demo-egress-pod; \
$ oc delete egressip demo-egress-ns; \
$ oc delete egressip demo-egress-pod
Clean up the assigned node labels by running the following command:
WarningIf you rely on node labels for your machine pool, this command replaces those labels. Input your desired labels into the --labels field to ensure your node labels remain.
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels ""
Chapter 16. Tutorial: Updating component routes with custom domains and TLS certificates
This guide demonstrates how to modify the hostname and TLS certificate of the Web console, OAuth server, and Downloads component routes in Red Hat OpenShift Service on AWS (ROSA) version 4.14 and above.[1]
The changes that we make to the component routes[2] in this guide are described in greater detail in the customizing the internal OAuth server URL, console route, and download route OpenShift Container Platform documentation.
16.1. Prerequisites
- ROSA CLI (rosa) version 1.2.37 or higher
- AWS CLI (aws)
- A ROSA Classic cluster version 4.14 or higher
NoteROSA with HCP is not supported at this time.
- OpenShift CLI (oc)
- jq CLI
- Access to the cluster as a user with the cluster-admin role
- OpenSSL (for generating the demonstration SSL/TLS certificates)
16.2. Setting up your environment
- Log in to your cluster using an account with cluster-admin privileges.
Configure an environment variable for your cluster name:
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
Ensure all fields output correctly before moving to the next section:
$ echo "Cluster: ${CLUSTER_NAME}"
Example output
Cluster: my-rosa-cluster
16.3. Find the current routes
Verify that you can reach the component routes on their default hostnames.
You can find the hostnames by querying the lists of routes in the openshift-console and openshift-authentication projects.
$ oc get routes -n openshift-console
$ oc get routes -n openshift-authentication
Example output
NAME        HOST/PORT                                                                                      PATH   SERVICES    PORT    TERMINATION          WILDCARD
console     console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more            console     https   reencrypt/Redirect   None
downloads   downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more          downloads   http    edge/Redirect        None

NAME              HOST/PORT                                                                                PATH   SERVICES          PORT   TERMINATION            WILDCARD
oauth-openshift   oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more                oauth-openshift   6443   passthrough/Redirect   None
From this output you can see that our base hostname is z9a9.p1.openshiftapps.com.
Get the ID of the default ingress by running the following command:
$ export INGRESS_ID=$(rosa list ingress -c ${CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id')
Ensure all fields output correctly before moving to the next section:
$ echo "Ingress ID: ${INGRESS_ID}"
Example output
Ingress ID: r3l6
By running these commands you can see that the default component routes for our cluster are:
- console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for Console
- downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for Downloads
- oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for OAuth
We can use the rosa edit ingress command to change the hostname of each service and add a TLS certificate for all of our component routes. The relevant parameters are shown in this excerpt of the command line help for the rosa edit ingress command:
$ rosa edit ingress -h
Edit a cluster ingress for a cluster.

Usage:
  rosa edit ingress ID [flags]

[...]

--component-routes string    Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...'
For this example, we’ll use the following custom component routes:
- console.my-new-domain.dev for Console
- downloads.my-new-domain.dev for Downloads
- oauth.my-new-domain.dev for OAuth
16.4. Create a valid TLS certificate for each component route
In this section, we create three separate self-signed certificate key pairs and then trust them to verify that we can access our new component routes using a real web browser.
This is for demonstration purposes only, and is not recommended as a solution for production workloads. Consult your certificate authority to understand how to create certificates with similar attributes for your production workloads.
To prevent issues with HTTP/2 connection coalescing, you must use a separate individual certificate for each endpoint. Using a wildcard or SAN certificate is not supported.
Generate a certificate for each component route, taking care to set the certificate’s subject (-subj) to the custom domain of the component route we want to use:
Example
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj "/CN=console.my-new-domain.dev"
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj "/CN=downloads.my-new-domain.dev"
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj "/CN=oauth.my-new-domain.dev"
This generates three pairs of .pem files, key-<component>.pem and cert-<component>.pem.
16.5. Add the certificates to the cluster as secrets
Create three TLS secrets in the openshift-config namespace. These become your secret reference when you update the component routes later in this guide.
$ oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config
$ oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config
$ oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config
16.6. Find the hostname of the load balancer in your cluster
When you create a cluster, the service creates a load balancer and generates a hostname for that load balancer. We need to know the load balancer hostname in order to create DNS records for our cluster.
You can find the hostname by running the oc get svc command against the openshift-ingress namespace. The hostname of the load balancer is the EXTERNAL-IP associated with the router-default service in the openshift-ingress namespace.
$ oc get svc -n openshift-ingress
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)                      AGE
router-default   LoadBalancer   172.30.237.88   a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com   80:31175/TCP,443:31554/TCP   76d
In our case, the hostname is a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com.
Save this value for later, as we will need it to configure DNS records for our new component route hostnames.
16.7. Add component route DNS records to your hosting provider
In your hosting provider, add DNS records that map the CNAME of your new component route hostnames to the load balancer hostname found in the previous step.
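If your custom domain happens to be hosted in Amazon Route 53, you can create the records with the AWS CLI in the same way as earlier tutorials in this guide; this is only a sketch, and the hosted zone ID, domain names, and load balancer hostname are placeholders for your own values:
$ cat << EOF > component-routes-cname.json
{
  "Comment": "CNAME records for the custom component routes",
  "Changes": [
    { "Action": "CREATE", "ResourceRecordSet": { "Name": "console.my-new-domain.dev", "Type": "CNAME", "TTL": 300, "ResourceRecords": [{ "Value": "a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com" }] } },
    { "Action": "CREATE", "ResourceRecordSet": { "Name": "downloads.my-new-domain.dev", "Type": "CNAME", "TTL": 300, "ResourceRecords": [{ "Value": "a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com" }] } },
    { "Action": "CREATE", "ResourceRecordSet": { "Name": "oauth.my-new-domain.dev", "Type": "CNAME", "TTL": 300, "ResourceRecords": [{ "Value": "a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com" }] } }
  ]
}
EOF
$ aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch file://component-routes-cname.json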
16.8. Update the component routes and TLS secret using the ROSA CLI
When your DNS records have been updated, you can use the ROSA CLI to change the component routes.
Use the rosa edit ingress command to update your default ingress route with the new base domain and the secret reference associated with it, taking care to update the hostnames for each component route.
$ rosa edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'
NoteYou can also edit only a subset of the component routes by leaving the component routes you do not want to change set to an empty string. For example, if you only want to change the Console and OAuth server hostnames and TLS certificates, you would run the following command:
$ rosa edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname="";tlsSecretRef="", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'
Run the rosa list ingress command to verify that your changes were successfully made:
$ rosa list ingress -c ${CLUSTER_NAME} -ojson | jq ".[] | select(.id == \"${INGRESS_ID}\") | .component_routes"
Example output
{
  "console": {
    "kind": "ComponentRoute",
    "hostname": "console.my-new-domain.dev",
    "tls_secret_ref": "console-tls"
  },
  "downloads": {
    "kind": "ComponentRoute",
    "hostname": "downloads.my-new-domain.dev",
    "tls_secret_ref": "downloads-tls"
  },
  "oauth": {
    "kind": "ComponentRoute",
    "hostname": "oauth.my-new-domain.dev",
    "tls_secret_ref": "oauth-tls"
  }
}
- Add your certificate to the truststore on your local system, then confirm that you can access your components at their new routes using your local web browser.
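For example, on a RHEL or Fedora workstation you could add the self-signed console certificate generated earlier to the system truststore and then test the new route; other operating systems and browsers have their own trust mechanisms:
$ sudo cp cert-console.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
$ curl -I https://console.my-new-domain.dev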
16.9. Reset the component routes to the default using the ROSA CLI
If you want to reset the component routes to the default configuration, run the following rosa edit ingress command:
$ rosa edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname="";tlsSecretRef="",downloads: hostname="";tlsSecretRef="", oauth: hostname="";tlsSecretRef=""'
Chapter 17. Getting started with ROSA
17.1. Tutorial: What is ROSA
Red Hat OpenShift Service on AWS (ROSA) is a fully-managed turnkey application platform that allows you to focus on what matters most, delivering value to your customers by building and deploying applications. Red Hat and AWS SRE experts manage the underlying platform so you do not have to worry about infrastructure management. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to further accelerate the building and delivering of differentiating experiences to your customers.
ROSA makes use of AWS Security Token Service (STS) to obtain credentials to manage infrastructure in your AWS account. AWS STS is a global web service that creates temporary credentials for IAM users or federated users. ROSA uses this to assign short-term, limited-privilege, security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. This method aligns with the principles of least privilege and secure practices in cloud service resource management. The ROSA command line interface (CLI) tool manages the STS credentials that are assigned for unique tasks and takes action on AWS resources as part of OpenShift functionality.
17.1.1. Key features of ROSA
- Native AWS service: Access and use Red Hat OpenShift on-demand with a self-service onboarding experience through the AWS management console.
- Flexible, consumption-based pricing: Scale to your business needs and pay as you go with flexible pricing and an on-demand hourly or annual billing model.
- Single bill for Red Hat OpenShift and AWS usage: Customers will receive a single bill from AWS for both Red Hat OpenShift and AWS consumption.
- Fully integrated support experience: Installation, management, maintenance, and upgrades are performed by Red Hat site reliability engineers (SREs) with joint Red Hat and Amazon support and a 99.95% service-level agreement (SLA).
- AWS service integration: AWS has a robust portfolio of cloud services, such as compute, storage, networking, database, analytics, and machine learning. All of these services are directly accessible through ROSA. This makes it easier to build, operate, and scale globally and on-demand through a familiar management interface.
- Maximum Availability: Deploy clusters across multiple availability zones in supported regions to maximize availability and maintain high availability for your most demanding mission-critical applications and data.
- Cluster node scaling: Easily add or remove compute nodes to match resource demand.
- Optimized clusters: Choose from memory-optimized, compute-optimized, or general purpose EC2 instance types with clusters sized to meet your needs.
- Global availability: Refer to the product regional availability page to see where ROSA is available globally.
17.1.2. ROSA and Kubernetes
In ROSA, everything you need to deploy and manage containers is bundled, including container management, Operators, networking, load balancing, service mesh, CI/CD, firewall, monitoring, registry, authentication, and authorization capabilities. These components are tested together for unified operations as a complete platform. Automated cluster operations, including over-the-air platform upgrades, further enhance your Kubernetes experience.
17.1.3. Basic responsibilities
In general, cluster deployment and upkeep are Red Hat’s or AWS’s responsibility, while applications, users, and data are the customer’s responsibility. For a more detailed breakdown of responsibilities, see the responsibility matrix.
17.1.4. Roadmap and feature requests
Visit the ROSA roadmap to stay up-to-date with the status of features currently in development. Open a new issue if you have any suggestions for the product team.
17.1.5. AWS region availability
Refer to the product regional availability page for an up-to-date view of where ROSA is available.
17.1.6. Compliance certifications
ROSA is currently compliant with SOC-2 type 2, SOC 3, ISO-27001, ISO 27017, ISO 27018, HIPAA, GDPR, and PCI-DSS. We are also currently working towards FedRAMP High.
17.1.7. Nodes
17.1.7.1. Worker nodes across multiple AWS regions
All nodes in a ROSA cluster must be located in the same AWS region. For clusters configured for multiple availability zones, control plane nodes and worker nodes will be distributed across the availability zones.
17.1.7.2. Minimum number of worker nodes
For a ROSA cluster, the minimum is 2 worker nodes for single availability zone and 3 worker nodes for multiple availability zones.
17.1.7.3. Underlying node operating system
As with all OpenShift v4.x offerings, the control plane, infra and worker nodes run Red Hat Enterprise Linux CoreOS (RHCOS).
17.1.7.4. Node hibernation or shut-down
At this time, ROSA does not have a hibernation or shut-down feature for nodes. The shutdown and hibernation feature is an OpenShift platform feature that is not yet mature enough for widespread cloud services use.
17.1.7.5. Supported instances for worker nodes
For a complete list of supported instances for worker nodes see AWS instance types. Spot instances are also supported.
17.1.7.6. Node autoscaling
Autoscaling allows you to automatically adjust the size of the cluster based on the current workload. See About autoscaling nodes on a cluster for more details.
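For example, the following command sketches enabling autoscaling on an existing machine pool with the ROSA CLI; the machine pool name, cluster name, and replica counts are placeholders:
$ rosa edit machinepool worker \
  --cluster=<cluster-name> \
  --enable-autoscaling \
  --min-replicas=2 \
  --max-replicas=6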
17.1.7.7. Maximum number of worker nodes
The maximum number of worker nodes is 180 worker nodes for each ROSA cluster. See limits and scalability for more details on node counts.
A list of the account-wide and per-cluster roles is provided in the ROSA documentation.
17.1.8. Administrators
A ROSA customer’s administrator can manage users and quotas in addition to accessing all user-created projects.
17.1.9. OpenShift versions and upgrades
ROSA is a managed service which is based on OpenShift Container Platform. You can view the current version and life cycle dates in the ROSA documentation.
Customers can upgrade to the newest version of OpenShift and use the features from that version of OpenShift. For more information, see life cycle dates. Not all OpenShift features are available on ROSA. Review the Service Definition for more information.
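For example, assuming the ROSA CLI is installed and you are logged in, the following command starts an interactive upgrade workflow for a cluster; the cluster name is a placeholder:
$ rosa upgrade cluster --cluster=<cluster-name>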
17.1.10. Support
You can open a ticket directly from the OpenShift Cluster Manager. See the ROSA support documentation for more details about obtaining support.
You can also visit the Red Hat Customer Portal to search or browse through the Red Hat knowledge base of articles and solutions relating to Red Hat products or submit a support case to Red Hat Support.
17.1.10.1. Limited support
If a ROSA cluster is not upgraded before the "end of life" date, the cluster continues to operate in a limited support status. The SLA for that cluster will no longer be applicable, but you can still get support for that cluster. See the limited support status documentation for more details.
Additional support resources
- Red Hat Support
- AWS Support (customers must have a valid AWS support contract)
17.1.11. Service-level agreement (SLA)
Refer to the ROSA SLA page for details.
17.1.12. Notifications and communication
Red Hat will provide notifications regarding new Red Hat and AWS features, updates, and scheduled maintenance through email and the Hybrid Cloud Console service log.
17.1.13. Open Service Broker for AWS (OSBA)
You can use OSBA with ROSA. However, the preferred method is the more recent AWS Controllers for Kubernetes. See Open Service Broker for AWS for more information on OSBA.
17.1.14. Offboarding
Customers can stop using ROSA at any time and move their applications to on-premises infrastructure, a private cloud, or another cloud provider. Standard reserved instances (RI) policy applies for unused RI.
17.1.15. Authentication
ROSA supports the following authentication mechanisms: OpenID Connect (a profile of OAuth2), Google OAuth, GitHub OAuth, GitLab, and LDAP.
17.1.16. SRE cluster access
All SRE cluster access is secured by MFA. See SRE access for more details.
17.1.17. Encryption
17.1.17.1. Encryption keys
ROSA uses a key stored in KMS to encrypt EBS volumes. Customers also have the option to provide their own KMS keys at cluster creation.
17.1.17.2. KMS keys
If you specify a KMS key, the control plane, infrastructure and worker node root volumes and the persistent volumes are encrypted with the key.
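For example, a customer-managed KMS key can be supplied when the cluster is created with the ROSA CLI; this is only a sketch, and the cluster name and key ARN are placeholders:
$ rosa create cluster --cluster-name <cluster-name> --sts --mode auto \
  --kms-key-arn arn:aws:kms:us-east-1:000000000000:key/11111111-2222-3333-4444-555555555555 \
  --yes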
17.1.17.3. Data encryption
By default, there is encryption at rest. The AWS Storage platform automatically encrypts your data before persisting it and decrypts the data before retrieval. See AWS EBS Encryption for more details.
You can also encrypt etcd in the cluster, combining it with AWS storage encryption. This results in double encryption, which adds up to a 20% performance hit. For more details, see the etcd encryption documentation.
17.1.17.4. etcd encryption
etcd encryption can only be enabled at cluster creation.
etcd encryption incurs additional overhead with negligible security risk mitigation.
17.1.17.5. etcd encryption configuration
etcd encryption is configured the same as in OpenShift Container Platform. The aescbc cipher is used, and the setting is patched during cluster deployment. For more details, see the Kubernetes documentation.
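For example, because etcd encryption can only be enabled at cluster creation, you would pass the corresponding flag when creating the cluster; this is a sketch with a placeholder cluster name:
$ rosa create cluster --cluster-name <cluster-name> --sts --mode auto --etcd-encryption --yes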
17.1.17.6. Multi-region KMS keys for EBS encryption
Currently, the ROSA CLI does not accept multi-region KMS keys for EBS encryption. This feature is in our backlog for product updates. The ROSA CLI accepts single region KMS keys for EBS encryption if it is defined at cluster creation.
17.1.18. Infrastructure
ROSA uses several different cloud services such as virtual machines, storage, and load balancers. You can see a defined list in the AWS prerequisites.
17.1.19. Credential methods
There are two credential methods to grant Red Hat the permissions needed to perform the required actions in your AWS account: AWS with STS or an IAM user with admin permissions. AWS with STS is the preferred method, and the IAM user method will eventually be deprecated. AWS with STS better aligns with the principles of least privilege and secure practices in cloud service resource management.
17.1.20. Prerequisite permission or failure errors
Check for a newer version of the ROSA CLI. Every release of the ROSA CLI is located in two places: GitHub and the Red Hat signed binary releases.
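For example, you can print the installed CLI version and compare it with the latest release:
$ rosa version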
17.1.21. Storage
Refer to the storage section of the service definition.
OpenShift includes the CSI driver for AWS EFS. For more information, see Setting up AWS EFS for Red Hat OpenShift Service on AWS.
17.1.22. Using a VPC
At installation, you can select to deploy into an existing VPC that you bring. You can then select the required subnets and provide a valid CIDR range that encompasses the subnets for the installation program when using those subnets.
ROSA allows multiple clusters to share the same VPC. The number of clusters on one VPC is limited by the remaining AWS resource quota and CIDR ranges that cannot overlap. See CIDR Range Definitions for more information.
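For example, deploying into an existing VPC is a matter of supplying the subnet IDs and a machine CIDR that encompasses them at cluster creation; this is a sketch with placeholder values:
$ rosa create cluster --cluster-name <cluster-name> --sts --mode auto \
  --subnet-ids subnet-0123456789abcdef0,subnet-0fedcba9876543210 \
  --machine-cidr 10.0.0.0/16 \
  --yes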
17.1.23. Network plugin
ROSA uses the OpenShift OVN-Kubernetes default CNI network provider.
17.1.24. Cross-namespace networking
Cluster administrators can customize, and deny, cross-namespace networking on a per-project basis using NetworkPolicy objects. Refer to Configuring multitenant isolation with network policy for more information.
17.1.25. Using Prometheus and Grafana
You can use Prometheus and Grafana to monitor containers and manage capacity using OpenShift User Workload Monitoring. This is a check-box option in the OpenShift Cluster Manager.
17.1.26. Audit logs output from the cluster control-plane
If the Cluster Logging Operator add-on has been added to the cluster, audit logs are available through CloudWatch. If it has not, you can request audit logs by opening a support case. Small, targeted, and time-boxed logs can be requested for export and sent to a customer. The selection of audit logs available is at the discretion of SRE in the category of platform security and compliance. Requests to export a cluster’s logs in their entirety are rejected.
17.1.27. AWS Permissions Boundary
You can use an AWS Permissions Boundary around the policies for your cluster.
17.1.28. AMI
ROSA worker nodes use a different AMI from OSD and OpenShift Container Platform. Control Plane and Infra node AMIs are common across products in the same version.
17.1.29. Cluster backups
ROSA STS clusters do not have backups. Users must have their own backup policies for applications and data. See our backup policy for more information.
17.1.30. Custom domain
You can define a custom domain for your applications. See Configuring custom domains for applications for more information.
17.1.31. ROSA domain certificates
Red Hat infrastructure (Hive) manages certificate rotation for default application ingress.
17.1.32. Disconnected environments
ROSA does not support an air-gapped, disconnected environment. The ROSA cluster must have egress to the internet to access our registry, S3, and send metrics. The service requires a number of egress endpoints. Ingress can be limited to a PrivateLink for Red Hat SREs and a VPN for customer access.
Additional resources
ROSA product pages:
ROSA specific resources
- Learn about OpenShift
- OpenShift Cluster Manager
- Red Hat Support
17.2. Tutorial: ROSA with AWS STS explained
This tutorial outlines the two options for allowing Red Hat OpenShift Service on AWS (ROSA) to interact with resources in a user’s Amazon Web Service (AWS) account. It details the components and processes that ROSA with Security Token Service (STS) uses to obtain the necessary credentials. It also reviews why ROSA with STS is the more secure, preferred method.
This content currently covers ROSA Classic with AWS STS. For ROSA with hosted control planes (HCP) with AWS STS, see AWS STS and ROSA with HCP explained.
This tutorial will:
- Enumerate two of the deployment options:
- ROSA with IAM Users
- ROSA with STS
- Explain the differences between the two options
- Explain why ROSA with STS is more secure and the preferred option
- Explain how ROSA with STS works
17.2.1. Different credential methods to deploy ROSA
As part of ROSA, Red Hat manages infrastructure resources in your AWS account and must be granted the necessary permissions. There are currently two supported methods for granting those permissions:
Using static IAM user credentials with an AdministratorAccess policy
This is referred to as "ROSA with IAM Users" in this tutorial. It is not the preferred credential method.
Using AWS STS with short-lived, dynamic tokens
This is referred to as “ROSA with STS” in this tutorial. It is the preferred credential method.
17.2.1.1. ROSA with IAM Users
When ROSA was first released, the only credential method was ROSA with IAM Users. This method grants IAM users with an AdministratorAccess policy full access to create the necessary resources in the AWS account that uses ROSA. The cluster can then create and expand its credentials as needed.
17.2.1.2. ROSA with STS
ROSA with STS grants users limited, short-term access to resources in your AWS account. The STS method uses predefined roles and policies to grant temporary, least-privilege permissions to IAM users or authenticated federated users. The credentials typically expire an hour after being requested. Once expired, they are no longer recognized by AWS and no longer have account access from API requests made with them. For more information, see the AWS documentation. While both ROSA with IAM Users and ROSA with STS are currently enabled, ROSA with STS is the preferred and recommended option.
17.2.2. ROSA with STS security
Several crucial components make ROSA with STS more secure than ROSA with IAM Users:
- An explicit and limited set of roles and policies that the user creates ahead of time. The user knows every requested permission and every role used.
- The service cannot do anything outside of those permissions.
- Whenever the service needs to perform an action, it obtains credentials that expire in one hour or less. This means that there is no need to rotate or revoke credentials. Additionally, credential expiration reduces the risks of credentials leaking and being reused.
17.2.3. AWS STS explained
ROSA uses AWS STS to grant least-privilege permissions with short-term security credentials to specific and segregated IAM roles. The credentials are associated with IAM roles specific to each component and cluster that makes AWS API calls. This method aligns with principles of least-privilege and secure practices in cloud service resource management. The ROSA command line interface (CLI) tool manages the STS roles and policies that are assigned for unique tasks and takes action upon AWS resources as part of OpenShift functionality.
STS roles and policies must be created for each ROSA cluster. To make this easier, the installation tools provide all the commands and files needed to create the roles and policies, as well as an option to allow the CLI to automatically create the roles and policies. See Creating a ROSA cluster with STS using customizations for more information about the different --mode options.
17.2.4. Components specific to ROSA with STS
- AWS infrastructure - This provides the infrastructure required for the cluster. It contains the actual EC2 instances, storage, and networking components. See AWS compute types to see supported instance types for compute nodes and provisioned AWS infrastructure for control plane and infrastructure node configuration.
- AWS STS - See the credential method section above.
- OpenID Connect (OIDC) - This provides a mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from STS to make the required API calls.
Roles and policies - The roles and policies are one of the main differences between ROSA with STS and ROSA with IAM Users. For ROSA with STS, the roles and policies used by ROSA are broken into account-wide roles and policies and Operator roles and policies.
The policies determine the allowed actions for each of the roles. See About IAM resources for ROSA clusters that use STS for more details about the individual roles and policies.
The account-wide roles are:
- ManagedOpenShift-Installer-Role
- ManagedOpenShift-ControlPlane-Role
- ManagedOpenShift-Worker-Role
- ManagedOpenShift-Support-Role
The account-wide policies are:
- ManagedOpenShift-Installer-Role-Policy
- ManagedOpenShift-ControlPlane-Role-Policy
- ManagedOpenShift-Worker-Role-Policy
- ManagedOpenShift-Support-Role-Policy
- ManagedOpenShift-openshift-ingress-operator-cloud-credentials [1]
- ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent [1]
- ManagedOpenShift-openshift-cloud-network-config-controller-cloud [1]
- ManagedOpenShift-openshift-machine-api-aws-cloud-credentials [1]
- ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede [1]
- ManagedOpenShift-openshift-image-registry-installer-cloud-creden [1]
- This policy is used by the cluster Operator roles, listed below. The Operator roles are created in a second step because they are dependent on an existing cluster name and cannot be created at the same time as the account-wide roles.
The Operator roles are:
- <cluster-name>-xxxx-openshift-cluster-csi-drivers-ebs-cloud-credent
- <cluster-name>-xxxx-openshift-cloud-network-config-controller-cloud
- <cluster-name>-xxxx-openshift-machine-api-aws-cloud-credentials
- <cluster-name>-xxxx-openshift-cloud-credential-operator-cloud-crede
- <cluster-name>-xxxx-openshift-image-registry-installer-cloud-creden
- <cluster-name>-xxxx-openshift-ingress-operator-cloud-credentials
- Trust policies are created for each account-wide and Operator role.
17.2.5. Deploying a ROSA STS cluster
You are not expected to create the resources listed in the below steps from scratch. The ROSA CLI creates the required JSON files for you and outputs the commands you need. The ROSA CLI can also take this a step further and run the commands for you, if desired.
Steps to deploy a ROSA with STS cluster
- Create the account-wide roles and policies.
- Assign the permissions policy to the corresponding account-wide role.
- Create the cluster.
- Create the Operator roles and policies.
- Assign the permission policy to the corresponding Operator role.
- Create the OIDC provider.
The roles and policies can be created automatically by the ROSA CLI by using the --mode auto flag, or they can be created manually by using the --mode manual flag. For further details about deployment, see Creating a cluster with customizations or the Deploying the cluster tutorial.
17.2.6. ROSA with STS workflow
The user creates the required account-wide roles and account-wide policies. For more information, see the components section in this tutorial. During role creation, a trust policy, known as a cross-account trust policy, is created which allows a Red Hat-owned role to assume the roles. Trust policies are also created for the EC2 service, which allows workloads on EC2 instances to assume roles and obtain credentials. The user can then assign a corresponding permissions policy to each role.
After the account-wide roles and policies are created, the user can create a cluster. Once cluster creation is initiated, the Operator roles are created so that cluster Operators can make AWS API calls. These roles are then assigned to the corresponding permission policies that were created earlier and a trust policy with an OIDC provider. The Operator roles differ from the account-wide roles in that they ultimately represent the pods that need access to AWS resources. Because a user cannot attach IAM roles to pods, they must create a trust policy with an OIDC provider so that the Operator, and therefore the pods, can access the roles they need.
Once the user assigns the roles to the corresponding policy permissions, the final step is creating the OIDC provider.
When a new role is needed, the workload currently using the Red Hat role will assume the role in the AWS account, obtain temporary credentials from AWS STS, and begin performing the actions using API calls within the customer’s AWS account as permitted by the assumed role’s permissions policy. The credentials are temporary and have a maximum duration of one hour.
The entire workflow is depicted in the following graphic:
Operators use the following process to obtain the requisite credentials to perform their tasks. Each Operator is assigned an Operator role, a permissions policy, and a trust policy with an OIDC provider. The Operator assumes the role by passing a JSON web token that contains the role and a token file (web_identity_token_file) to the OIDC provider, which then authenticates the signed key with a public key. The public key is created during cluster creation and stored in an S3 bucket. The Operator then confirms that the subject in the signed token file matches the role in the role trust policy, which ensures that the OIDC provider can only obtain the allowed role. The OIDC provider then returns the temporary credentials to the Operator so that the Operator can make AWS API calls. For a visual representation, see below:
17.2.7. ROSA with STS use cases
Creating nodes at cluster install
The Red Hat installation program uses the RH-Managed-OpenShift-Installer role and a trust policy to assume the Managed-OpenShift-Installer-Role role in the customer’s account. This process returns temporary credentials from AWS STS. The installation program begins making the required API calls with the temporary credentials just received from STS. The installation program creates the required infrastructure in AWS. The credentials expire within an hour and the installation program no longer has access to the customer’s account.
The same process also applies for support cases. In support cases, a Red Hat site reliability engineer (SRE) replaces the installation program.
Scaling the cluster
The machine-api-operator uses AssumeRoleWithWebIdentity to assume the machine-api-aws-cloud-credentials role. This launches the sequence for the cluster Operators to receive the credentials. The machine-api-operator role can now make the relevant API calls to add more EC2 instances to the cluster.
17.3. Tutorial: OpenShift concepts
17.3.1. Source-to-Image (S2I)
Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by inserting source code into a container image and letting the container prepare the source code. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.
Additional resources
17.3.1.1. How it works
For a dynamic language such as Ruby, the build time and run time environments are typically the same. Assuming that Ruby, Bundler, Rake, Apache, GCC, and all other packages needed to set up and run a Ruby application are already installed, a builder image performs the following steps:
- The builder image starts a container with the application source injected into a known directory.
- The container process transforms that source code into the appropriate runnable setup. For example, it installs dependencies with Bundler and moves the source code into a directory where Apache has been preconfigured to look for the Ruby configuration file.
- It then commits the new container and sets the image entrypoint to be a script that will start Apache to host the Ruby application.
For compiled languages such as C, C++, Go, or Java, the necessary dependencies for compilation might outweigh the size of the runtime artifacts. To keep runtime images small, S2I enables a multiple-step build process, where a binary artifact such as an executable file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable program in the correct location.
For example, to create a reproducible build pipeline for Tomcat and Maven:
- Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected.
- Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected.
- Start S2I using the Java application source and the Maven image to create the desired application WAR.
- Start S2I a second time using the WAR file from the earlier step and the initial Tomcat image to create the runtime image.
By placing build logic inside of images and combining the images into multiple steps, the runtime environment is close to the build environment without requiring the deployment of build tools to production.
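For example, on OpenShift you can invoke an S2I build directly from the command line by combining a builder image with a source repository; the builder image and repository below are illustrative only:
$ oc new-app registry.access.redhat.com/ubi8/nodejs-18~https://github.com/sclorg/nodejs-ex.git --name my-s2i-app
$ oc logs -f buildconfig/my-s2i-app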
17.3.1.2. S2I benefits
- Reproducibility
- Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface of injected source code for callers. Reproducible builds are a key requirement for enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability and the ability to swap run times.
- Flexibility
- Any existing build system that can run on Linux can run inside of a container, and each individual builder can also be part of a larger pipeline. The scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.
- Speed
- Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment and allows for better control over the output of the final image.
- Security
- Dockerfiles are run without many of the normal operational controls of containers. They usually run as root and have access to the container network. S2I can control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, S2I allows administrators to control what privileges developers have at build time.
17.3.2. Routes
An OpenShift route exposes a service at a hostname so that external clients can reach it by name. When a Route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer to expose the requested service and make it externally available with the given configuration.
Similar to the Kubernetes Ingress object, Red Hat created the concept of route to fill a need and then contributed the design principles behind it to the community, which heavily influenced the Ingress design. A route does have some additional features, as can be seen in the following chart:
Feature | Ingress on OpenShift | Route on OpenShift |
---|---|---|
Standard Kubernetes object | X | |
External access to services | X | X |
Persistent (sticky) sessions | X | X |
Load-balancing strategies (e.g. round robin) | X | X |
Rate-limit and throttling | X | X |
IP whitelisting | X | X |
TLS edge termination for improved security | X | X |
TLS re-encryption for improved security | X | |
TLS passthrough for improved security | X | |
Multiple weighted backends (split traffic) | X | |
Generated pattern-based hostnames | X | |
Wildcard domains | X |
DNS resolution for a hostname is handled separately from routing. Your administrator might have configured a cloud domain that will always correctly resolve to the router, or if you are using an unrelated hostname, you might need to modify its DNS records independently to resolve to the router.
An individual route can override some defaults by providing specific configurations in its annotations.
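As a sketch, the following Route manifest exposes a service with edge TLS termination and uses an annotation to override a default; the service name, hostname, and annotation values are illustrative:
$ cat << EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    # Allow connections only from these source IPs (illustrative values).
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 10.0.0.0/8
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
EOF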
Additional resources
17.3.3. Image streams
An image stream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry.
17.3.3.1. Image stream benefits
Using an image stream makes it easier to change a tag for a container image. Otherwise, to manually change a tag, you must download the image, change it locally, then push it all back. Promoting applications by manually changing a tag and then updating the deployment object entails many steps.
With image streams, you upload a container image once and then you manage its virtual tags internally in OpenShift. In one project you might use the developer tag and only change a reference to it internally, while in production you might use a production tag and also manage it internally. You do not have to deal with the registry.
You can also use image streams in conjunction with deployment configs to set a trigger that will start a deployment as soon as a new image appears or a tag changes its reference.
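As a sketch, the following commands show the kind of tag management and trigger setup described above; the image stream, deployment, and container names are illustrative:
$ oc tag my-app:developer my-app:production                                      # point the production tag at the image currently tagged developer
$ oc set triggers deployment/my-app --from-image=my-app:production -c my-app     # roll out automatically when the production tag changes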
17.3.4. Builds
A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.
OpenShift Container Platform leverages Kubernetes by creating Docker-formatted containers from build images and pushing them to a container image registry.
Build objects share common characteristics:
- Inputs for a build
- Requirements to complete a build process
- Logging the build process
- Publishing resources from successful builds
- Publishing the final status of the build
Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.
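As a sketch, a minimal BuildConfig that builds from a Git repository with the Source (S2I) strategy and sets resource limits might look like this; the names, repository, and builder image are illustrative:
$ cat << EOF | oc apply -f -
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-build
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
  resources:
    limits:
      cpu: "500m"
      memory: 1Gi
EOF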
Additional resources
17.4. Deploying a cluster
17.4.1. Tutorial: Choosing a deployment method
This tutorial outlines the different ways to deploy a cluster. Choose the deployment method that best fits your preferences and needs.
17.4.1.1. Deployment options
If you want:
- Only the necessary CLI commands - Simple CLI guide
- A user interface - Simple UI guide
- The CLI commands with details - Detailed CLI guide
- A user interface with details - Detailed UI guide
All of the above deployment options work well for this tutorial. If you are doing this tutorial for the first time, the Simple CLI guide is the simplest and recommended method.
17.4.2. Tutorial: Simple CLI guide
This page outlines the minimum list of commands to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster using the command line interface (CLI).
While this simple deployment works well for a tutorial setting, clusters used in production should be deployed with a more detailed method.
17.4.2.1. Prerequisites
- You have completed the prerequisites in the Setup tutorial.
17.4.2.2. Creating account roles
Run the following command once for each AWS account and y-stream OpenShift version:
rosa create account-roles --mode auto --yes
17.4.2.3. Deploying the cluster
Create the cluster with the default configuration by running the following command, substituting your own cluster name:
rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes
Check the status of your cluster by running the following command:
rosa list clusters
17.4.3. Tutorial: Detailed CLI guide
This tutorial outlines the detailed steps to deploy a ROSA cluster using the ROSA CLI.
17.4.3.1. CLI deployment modes
There are two modes with which to deploy a ROSA cluster. One is automatic, which is quicker and performs the manual work for you. The other is manual, which requires you to run extra commands but allows you to inspect the roles and policies being created. This tutorial documents both options.
If you want to create a cluster quickly, use the automatic option. If you prefer exploring the roles and policies being created, use the manual option.
Choose the deployment mode by using the --mode
flag in the relevant commands.
Valid options for --mode
are:
- manual: Roles and policies are created and saved in the current directory. You must manually run the provided commands as the next step. This option allows you to review the policies and roles before creating them.
- auto: Roles and policies are created and applied automatically using the current AWS account.
You can use either deployment method for this tutorial. The auto mode is faster and has fewer steps.
17.4.3.2. Deployment workflow
The overall deployment workflow follows these steps:
- rosa create account-roles - This is executed only once for each account. Once created, the account roles do not need to be created again for more clusters of the same y-stream version.
- rosa create cluster
- rosa create operator-roles - For manual mode only.
- rosa create oidc-provider - For manual mode only.
For each additional cluster in the same account for the same y-stream version, only step 2 is needed for automatic mode. Steps 2 through 4 are needed for manual mode.
17.4.3.3. Automatic mode
Use this method if you want the ROSA CLI to automate the creation of the roles and policies to create your cluster quickly.
17.4.3.3.1. Creating account roles
If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, then create the account-wide roles and policies, including Operator policies.
Run the following command to create the account-wide roles:
rosa create account-roles --mode auto --yes
Example output
I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts
17.4.3.3.2. Creating a cluster
Run the following command to create a cluster with all the default options:
rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes
This will also create the required Operator roles and OIDC provider. If you want to see all available options for your cluster, use the --help flag or --interactive for interactive mode.
Example input
$ rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto --yes
Example output
I: Creating cluster 'my-rosa-cluster' I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'. Name: my-rosa-cluster ID: 1mlhulb3bo0l54ojd0ji000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.ibhp.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Master: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Master: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credential-oper State: waiting (Waiting for OIDC configuration) Private: No Created: Oct 28 2021 20:28:09 UTC Details Page: https://console.redhat.com/openshift/details/s/1wupmiQy45xr1nN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1mlhulb3bo0l54ojd0ji000000000000
17.4.3.3.2.1. Default configuration
The default settings are as follows:
Nodes:
- 3 control plane nodes
- 2 infrastructure nodes
- 2 worker nodes
- No autoscaling
- See the documentation on ec2 instances for more details.
- Region: As configured for the aws CLI
Networking IP ranges:
- Machine CIDR: 10.0.0.0/16
- Service CIDR: 172.30.0.0/16
- Pod CIDR: 10.128.0.0/14
- New VPC
- Default AWS KMS key for encryption
- The most recent version of OpenShift available to rosa
- A single availability zone
- Public cluster
17.4.3.3.3. Checking the installation status
Run one of the following commands to check the status of your cluster:
For a detailed view of the status, run:
rosa describe cluster --cluster <cluster-name>
For an abridged view of the status, run:
rosa list clusters
- The cluster state will change from "waiting" to "installing" to "ready". This will take about 40 minutes.
- Once the state changes to "ready", your cluster is installed.
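Optional: While you wait, you can follow the installation logs by using the command shown in the example output above:
$ rosa logs install -c <cluster-name> --watch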
17.4.3.4. Manual mode
If you want to review the roles and policies before applying them to a cluster, use the manual method. This method requires running a few extra commands to create the roles and policies.
This section uses the --interactive mode. See the documentation on interactive mode for a description of the fields in this section.
17.4.3.4.1. Creating account roles
If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, create the account-wide roles and policies, including the Operator policies. The command creates the needed JSON files for the required roles and policies for your account in the current directory. It also outputs the aws CLI commands that you need to run to create these objects.
Run the following command to create the needed files and output the additional commands:
rosa create account-roles --mode manual
Example output
I: All policy files saved to the current directory I: Run the following commands to create the account roles and policies: aws iam create-role \ --role-name ManagedOpenShift-Worker-Role \ --assume-role-policy-document file://sts_instance_worker_trust_policy.json \ --tags Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy \ --role-name ManagedOpenShift-Worker-Role \ --policy-name ManagedOpenShift-Worker-Role-Policy \ --policy-document file://sts_instance_worker_permission_policy.json
Check the contents of your current directory to see the new files. Use the aws CLI to create each of these objects.
Example output
$ ls openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json sts_instance_controlplane_permission_policy.json openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json sts_instance_controlplane_trust_policy.json openshift_image_registry_installer_cloud_credentials_policy.json sts_instance_worker_permission_policy.json openshift_ingress_operator_cloud_credentials_policy.json sts_instance_worker_trust_policy.json openshift_machine_api_aws_cloud_credentials_policy.json sts_support_permission_policy.json sts_installer_permission_policy.json sts_support_trust_policy.json sts_installer_trust_policy.json
Optional: Open the files to review what you will create. For example, opening the sts_installer_permission_policy.json shows:
Example output
$ cat sts_installer_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", [...]
You can also see the contents in the About IAM resources for ROSA clusters documentation.
- Run the aws commands listed in step 1. You can copy and paste if you are in the same directory as the JSON files you created.
17.4.3.4.2. Creating a cluster
After the aws commands are executed successfully, run the following command to begin ROSA cluster creation in interactive mode:
rosa create cluster --interactive --sts
See the ROSA documentation for a description of the fields.
For the purpose of this tutorial, copy and then input the following values:
Cluster name: my-rosa-cluster
OpenShift version: <choose version>
External ID (optional): <leave blank>
Operator roles prefix: <accept default>
Multiple availability zones: No
AWS region: <choose region>
PrivateLink cluster: No
Install into an existing VPC: No
Enable Customer Managed key: No
Compute nodes instance type: m5.xlarge
Enable autoscaling: No
Compute nodes: 2
Machine CIDR: <accept default>
Service CIDR: <accept default>
Pod CIDR: <accept default>
Host prefix: <accept default>
Encrypt etcd data (optional): No
Disable Workload monitoring: No
Example output
I: Creating cluster 'my-rosa-cluster' I: To create this cluster again in the future, you can run: rosa create cluster --cluster-name my-rosa-cluster --role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role --operator-roles-prefix my-rosa-cluster --region us-west-2 --version 4.8.13 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. Name: my-rosa-cluster ID: 1t6i760dbum4mqltqh6o000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.abcd.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Control plane: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cloud-network-config-controller-cloud-cre - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credentia - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credential State: waiting (Waiting for OIDC configuration) Private: No Created: Jul 1 2022 22:13:50 UTC Details Page: https://console.redhat.com/openshift/details/s/2BMQm8xz8Hq5yEN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1t6i760dbum4mqltqh6o000000000000 I: Run the following commands to continue the cluster creation: rosa create operator-roles --cluster my-rosa-cluster rosa create oidc-provider --cluster my-rosa-cluster I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'.
Note: The cluster state will remain as "waiting" until the next two steps are completed.
17.4.3.4.3. Creating Operator roles
The previous step outputs the next commands to run. These roles must be created once for each cluster. To create the roles, run the following command:
rosa create operator-roles --mode manual --cluster <cluster-name>
Example output
I: Run the following commands to create the operator roles: aws iam create-role \ --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials \ --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=1mkesci269png3tck000000000000000 Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy \ --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials \ --policy-arn arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden [...]
- Run each of the aws commands.
17.4.3.4.4. Creating the OIDC provider
Run the following command to create the OIDC provider:
rosa create oidc-provider --mode manual --cluster <cluster-name>
This displays the aws commands that you need to run.
Example output
I: Run the following commands to create the OIDC provider: $ aws iam create-open-id-connect-provider \ --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 \ --client-id-list openshift sts.amazonaws.com \ --thumbprint-list a9d53002e97e00e043244f3d170d000000000000 $ aws iam create-open-id-connect-provider \ --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 \ --client-id-list openshift sts.amazonaws.com \ --thumbprint-list a9d53002e97e00e043244f3d170d000000000000
- Your cluster will now continue the installation process.
17.4.3.4.5. Checking the installation status
Run one of the following commands to check the status of your cluster:
For a detailed view of the status, run:
rosa describe cluster --cluster <cluster-name>
For an abridged view of the status, run:
rosa list clusters
- The cluster state will change from "waiting" to "installing" to "ready". This will take about 40 minutes.
- Once the state changes to "ready", your cluster is installed.
17.4.3.5. Obtaining the Red Hat Hybrid Cloud Console URL
To obtain the Hybrid Cloud Console URL, run the following command:
rosa describe cluster -c <cluster-name> | grep Console
The cluster has now been successfully deployed. The next tutorial shows how to create an admin user to be able to use the cluster immediately.
17.4.4. Tutorial: Simple UI guide
This page outlines the minimum list of commands to deploy a ROSA cluster using the user interface (UI).
While this simple deployment works well for a tutorial setting, clusters used in production should be deployed with a more detailed method.
17.4.4.1. Prerequisites
- You have completed the prerequisites in the Setup tutorial.
17.4.4.2. Creating account roles
Run the following command once for each AWS account and y-stream OpenShift version:
rosa create account-roles --mode auto --yes
17.4.4.3. Creating Red Hat OpenShift Cluster Manager roles
Create one OpenShift Cluster Manager role for each AWS account by running the following command:
rosa create ocm-role --mode auto --admin --yes
Create one OpenShift Cluster Manager user role for each AWS account by running the following command:
rosa create user-role --mode auto --yes
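Optional: Before switching to the web console, you can confirm that the roles were created and linked to your Red Hat account by listing them:
rosa list ocm-role
rosa list user-role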
- Use the OpenShift Cluster Manager to select your AWS account, cluster options, and begin deployment.
OpenShift Cluster Manager UI displays cluster status.
17.4.5. Tutorial: Detailed UI guide
This tutorial outlines the detailed steps to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster using the Red Hat OpenShift Cluster Manager user interface (UI).
17.4.5.1. Deployment workflow
The overall deployment workflow follows these steps:
1. Create the account-wide roles and policies.
2. Associate your AWS account with your Red Hat account.
3. Create and link the Red Hat OpenShift Cluster Manager role.
4. Create and link the user role.
5. Create the cluster.
Step 1 only needs to be performed the first time you are deploying into an AWS account. Step 2 only needs to be performed the first time you are using the UI. For successive clusters of the same y-stream version, you only need to create the cluster.
17.4.5.2. Creating account wide roles
If you already have account roles from an earlier deployment, skip this step. The UI will detect your existing roles after you select an associated AWS account.
If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, create the account-wide roles and policies, including the Operator policies.
In your terminal, run the following command to create the account-wide roles:
$ rosa create account-roles --mode auto --yes
Example output
I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts
17.4.5.3. Associating your AWS account with your Red Hat account
This step tells the OpenShift Cluster Manager what AWS account you want to use when deploying ROSA.
If you have already associated your AWS accounts, skip this step.
- Open the Red Hat Hybrid Cloud Console by visiting the OpenShift Cluster Manager and logging in to your Red Hat account.
- Click Create Cluster.
Scroll down to the Red Hat OpenShift Service on AWS (ROSA) row and click Create Cluster.
A dropdown menu appears. Click With web interface.
Under "Select an AWS control plane type," choose Classic. Then click Next.
- Click the drop-down menu under Associated AWS infrastructure account. If you have not yet associated any AWS accounts, the menu may be empty.
Click How to associate a new AWS account.
A sidebar appears with instructions for associating a new AWS account.
17.4.5.4. Creating and associating an OpenShift Cluster Manager role
Run the following command to see if an OpenShift Cluster Manager role exists:
$ rosa list ocm-role
The UI displays the commands to create an OpenShift Cluster Manager role with two different levels of permissions:
- Basic OpenShift Cluster Manager role: Allows the OpenShift Cluster Manager to have read-only access to the account to check if the roles and policies that are required by ROSA are present before creating a cluster. You will need to manually create the required roles, policies, and OIDC provider using the CLI.
- Admin OpenShift Cluster Manager role: Grants the OpenShift Cluster Manager additional permissions to create the required roles, policies, and OIDC provider for ROSA. Using this makes the deployment of a ROSA cluster quicker since the OpenShift Cluster Manager will be able to create the required resources for you.
To read more about these roles, see the OpenShift Cluster Manager roles and permissions section of the documentation.
For the purposes of this tutorial, use the Admin OpenShift Cluster Manager role for the simplest and quickest approach.
Copy the command to create the Admin OpenShift Cluster Manager role from the sidebar or switch to your terminal and enter the following command:
$ rosa create ocm-role --mode auto --admin --yes
This command creates the OpenShift Cluster Manager role and associates it with your Red Hat account.
Example output
I: Creating ocm role
I: Creating role using 'arn:aws:iam::000000000000:user/rosa-user'
I: Created role 'ManagedOpenShift-OCM-Role-12561000' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000'
I: Linking OCM role
I: Successfully linked role-arn 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' with organization account '1MpZfntsZeUdjWHg7XRgP000000'
- Click Step 2: User role.
17.4.5.4.1. Other OpenShift Cluster Manager role creation options
Manual mode: If you prefer to run the AWS CLI commands yourself, you can define the mode as manual rather than auto. The CLI will output the AWS commands and the relevant JSON files are created in the current directory.
Use the following command to create the OpenShift Cluster Manager role in manual mode:
$ rosa create ocm-role --mode manual --admin --yes
Basic OpenShift Cluster Manager role: If you prefer that the OpenShift Cluster Manager has read only access to the account, create a basic OpenShift Cluster Manager role. You will then need to manually create the required roles, policies, and OIDC provider using the CLI.
Use the following command to create a Basic OpenShift Cluster Manager role:
$ rosa create ocm-role --mode auto --yes
17.4.5.5. Creating an OpenShift Cluster Manager user role
As defined in the user role documentation, the user role needs to be created so that ROSA can verify your AWS identity. This role has no permissions, and it is only used to create a trust relationship between the installation program account and your OpenShift Cluster Manager role resources.
Check if a user role already exists by running the following command:
$ rosa list user-role
Run the following command to create the user role and to link it to your Red Hat account:
$ rosa create user-role --mode auto --yes
Example output
I: Creating User role
I: Creating ocm user role using 'arn:aws:iam::000000000000:user/rosa-user'
I: Created role 'ManagedOpenShift-User-rosa-user-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role'
I: Linking User role
I: Successfully linked role ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' with account '1rbOQez0z5j1YolInhcXY000000'
Note: As before, you can define --mode manual if you'd prefer to run the AWS CLI commands yourself. The CLI outputs the AWS commands and the relevant JSON files are created in the current directory. Make sure to link the role.
- Click Step 3: Account roles.
17.4.5.6. Creating account roles
Create your account roles by running the following command:
$ rosa create account-roles --mode auto
- Click OK to close the sidebar.
17.4.5.7. Confirming successful account association
- You should now see your AWS account in the Associated AWS infrastructure account dropdown menu. If you see your account, account association was successful.
- Select the account.
You will see the account role ARNs populated below.
- Click Next.
17.4.5.8. Creating the cluster
For the purposes of this tutorial, make the following selections:
Cluster settings
- Cluster name: <pick a name>
- Version: <select latest version>
- Region: <select region>
- Availability: Single zone
- Enable user workload monitoring: leave checked
- Enable additional etcd encryption: leave unchecked
- Encrypt persistent volumes with customer keys: leave unchecked
- Click Next.
Leave the default settings on for the machine pool:
Default machine pool settings
- Compute node instance type: m5.xlarge - 4 vCPU 16 GiB RAM
- Enable autoscaling: unchecked
- Compute node count: 2
- Leave node labels blank
- Click Next.
17.4.5.8.1. Networking
- Leave all the default values for configuration.
- Click Next.
- Leave all the default values for CIDR ranges.
- Click Next.
17.4.5.8.2. Cluster roles and policies
For this tutorial, leave Auto selected. It will make the cluster deployment process simpler and quicker.
If you selected a Basic OpenShift Cluster Manager role earlier, you can only use manual mode. You must manually create the operator roles and OIDC provider. See the "Basic OpenShift Cluster Manager role" section below after you have completed the "Cluster updates" section and started cluster creation.
17.4.5.8.3. Cluster updates
- Leave all the options at default in this section.
17.4.5.8.4. Reviewing and creating your cluster
- Review the content for the cluster configuration.
- Click Create cluster.
17.4.5.8.5. Monitoring the installation progress
Stay on the current page to monitor the installation progress. It should take about 40 minutes.
17.4.5.9. Basic OpenShift Cluster Manager Role
If you created an Admin OpenShift Cluster Manager role as directed above, ignore this entire section. The OpenShift Cluster Manager will create the resources for you.
If you created a Basic OpenShift Cluster Manager role earlier, you will need to manually create two more elements before cluster installation can continue:
- Operator roles
- OIDC provider
17.4.5.9.1. Creating Operator roles
A pop-up window will show you the commands to run.
Run the commands from the window in your terminal to launch interactive mode. Alternatively, for simplicity, run the following command to create the Operator roles:
$ rosa create operator-roles --mode auto --cluster <cluster-name> --yes
Example output
I: Creating roles using 'arn:aws:iam::000000000000:user/rosauser' I: Created role 'rosacluster-b736-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-ingress-operator-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created role 'rosacluster-b736-openshift-cloud-network-config-controller-cloud' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-network-config-controller-cloud' I: Created role 'rosacluster-b736-openshift-machine-api-aws-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-machine-api-aws-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' I: Created role 'rosacluster-b736-openshift-image-registry-installer-cloud-creden' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-image-registry-installer-cloud-creden'
17.4.5.9.2. Creating the OIDC provider
In your terminal, run the following command to create the OIDC provider:
$ rosa create oidc-provider --mode auto --cluster <cluster-name> --yes
Example output
I: Creating OIDC provider using 'arn:aws:iam::000000000000:user/rosauser'
I: Created OIDC provider with ARN 'arn:aws:iam::000000000000:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/1tt4kvrr2kha2rgs8gjfvf0000000000'
17.4.6. Tutorial: Hosted control plane (HCP) guide
Follow this workshop to deploy a sample Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster. You can then use your cluster in the next tutorials.
Tutorial objectives
Learn to create your cluster prerequisites:
- Create a sample virtual private cloud (VPC)
- Create sample OpenID Connect (OIDC) resources
- Create sample environment variables
- Deploy a sample ROSA cluster
Prerequisites
- ROSA version 1.2.31 or later
- Amazon Web Services (AWS) command line interface (CLI)
- ROSA CLI (rosa)
17.4.6.1. Creating your cluster prerequisites
Before deploying a ROSA with HCP cluster, you must have both a VPC and OIDC resources. We will create these resources first. ROSA uses the bring your own VPC (BYO-VPC) model.
17.4.6.1.1. Creating a VPC
Make sure your AWS CLI (aws) is configured to use a region where ROSA is available. To see the regions that support hosted control planes, run the following command:
$ rosa list regions --hosted-cp
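Optional: If you are unsure which region your aws CLI is currently configured for, you can check it first. This check is not part of the original steps:
$ aws configure get region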
Create the VPC. For this tutorial, the following script creates the VPC and its required components. It uses the region configured in your aws CLI.
#!/bin/bash

set -e

##########
# This script will create the network requirements for a ROSA cluster. This will be
# a public cluster. This creates:
# - VPC
# - Public and private subnets
# - Internet Gateway
# - Relevant route tables
# - NAT Gateway
#
# This will automatically use the region configured for the aws cli
#
##########

VPC_CIDR=10.0.0.0/16
PUBLIC_CIDR_SUBNET=10.0.1.0/24
PRIVATE_CIDR_SUBNET=10.0.0.0/24

# Create VPC
echo -n "Creating VPC..."
VPC_ID=$(aws ec2 create-vpc --cidr-block $VPC_CIDR --query Vpc.VpcId --output text)

# Create tag name
aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=$CLUSTER_NAME

# Enable dns hostname
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames
echo "done."

# Create Public Subnet
echo -n "Creating public subnet..."
PUBLIC_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text)
aws ec2 create-tags --resources $PUBLIC_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-public
echo "done."

# Create private subnet
echo -n "Creating private subnet..."
PRIVATE_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text)
aws ec2 create-tags --resources $PRIVATE_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-private
echo "done."

# Create an internet gateway for outbound traffic and attach it to the VPC.
echo -n "Creating internet gateway..."
IGW_ID=$(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
echo "done."
aws ec2 create-tags --resources $IGW_ID --tags Key=Name,Value=$CLUSTER_NAME
aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID > /dev/null 2>&1
echo "Attached IGW to VPC."

# Create a route table for outbound traffic and associate it to the public subnet.
echo -n "Creating route table for public subnet..."
PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
aws ec2 create-tags --resources $PUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=$CLUSTER_NAME
echo "done."
aws ec2 create-route --route-table-id $PUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID > /dev/null 2>&1
echo "Created default public route."
aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET_ID --route-table-id $PUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1
echo "Public route table associated"

# Create a NAT gateway in the public subnet for outgoing traffic from the private network.
echo -n "Creating NAT Gateway..."
NAT_IP_ADDRESS=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET_ID --allocation-id $NAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text)
aws ec2 create-tags --resources $NAT_IP_ADDRESS --resources $NAT_GATEWAY_ID --tags Key=Name,Value=$CLUSTER_NAME
sleep 10
echo "done."

# Create a route table for the private subnet to the NAT gateway.
echo -n "Creating a route table for the private subnet to the NAT gateway..."
PRIVATE_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
aws ec2 create-tags --resources $PRIVATE_ROUTE_TABLE_ID $NAT_IP_ADDRESS --tags Key=Name,Value=$CLUSTER_NAME-private
aws ec2 create-route --route-table-id $PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $NAT_GATEWAY_ID > /dev/null 2>&1
aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET_ID --route-table-id $PRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1
echo "done."

# echo "***********VARIABLE VALUES*********"
# echo "VPC_ID="$VPC_ID
# echo "PUBLIC_SUBNET_ID="$PUBLIC_SUBNET_ID
# echo "PRIVATE_SUBNET_ID="$PRIVATE_SUBNET_ID
# echo "PUBLIC_ROUTE_TABLE_ID="$PUBLIC_ROUTE_TABLE_ID
# echo "PRIVATE_ROUTE_TABLE_ID="$PRIVATE_ROUTE_TABLE_ID
# echo "NAT_GATEWAY_ID="$NAT_GATEWAY_ID
# echo "IGW_ID="$IGW_ID
# echo "NAT_IP_ADDRESS="$NAT_IP_ADDRESS

echo "Setup complete."
echo ""
echo "To make the cluster create commands easier, please run the following commands to set the environment variables:"
echo "export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID"
echo "export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID"
Additional resources
- For more about VPC requirements, see the VPC documentation.
The script outputs commands. Set the commands as environment variables to store the subnet IDs for later use. Copy and run the commands:
$ export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID
$ export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID
Confirm your environment variables by running the following command:
$ echo "Public Subnet: $PUBLIC_SUBNET_ID"; echo "Private Subnet: $PRIVATE_SUBNET_ID"
Example output
Public Subnet: subnet-0faeeeb0000000000
Private Subnet: subnet-011fe340000000000
17.4.6.1.2. Creating your OIDC configuration
In this tutorial, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the ROSA CLI to create your cluster’s unique OIDC configuration.
Create the OIDC configuration by running the following command:
$ export OIDC_ID=$(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')
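Optional: Verify that the OIDC configuration ID was captured by echoing the variable:
$ echo $OIDC_ID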
17.4.6.2. Creating additional environment variables
Run the following command to set up environment variables. These variables make it easier to run the command to create a ROSA cluster:
$ export CLUSTER_NAME=<cluster_name>
$ export REGION=<VPC_region>
Tip: Run rosa whoami to find the VPC region.
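For example, with hypothetical values (substitute your own cluster name and the region where you created the VPC):
$ export CLUSTER_NAME=my-hcp-cluster
$ export REGION=us-east-1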
17.4.6.3. Creating a cluster
Optional: Run the following command to create the account-wide roles and policies, including the Operator policies and the AWS IAM roles and policies:
Important: Only complete this step if this is the first time you are deploying ROSA in this account and you have not yet created your account roles and policies.
$ rosa create account-roles --mode auto --yes
Run the following command to create the cluster:
$ rosa create cluster --cluster-name $CLUSTER_NAME \ --subnet-ids ${PUBLIC_SUBNET_ID},${PRIVATE_SUBNET_ID} \ --hosted-cp \ --region $REGION \ --oidc-config-id $OIDC_ID \ --sts --mode auto --yes
The cluster is ready after about 10 minutes. The cluster will have a control plane across three AWS availability zones in your selected region, and two worker nodes will be created in your AWS account.
17.4.6.4. Checking the installation status
Run one of the following commands to check the status of the cluster:
For a detailed view of the cluster status, run:
$ rosa describe cluster --cluster $CLUSTER_NAME
For an abridged view of the cluster status, run:
$ rosa list clusters
To watch the log as it progresses, run:
$ rosa logs install --cluster $CLUSTER_NAME --watch
- Once the state changes to "ready", your cluster is installed. It might take a few more minutes for the worker nodes to come online.
17.5. Tutorial: Creating an admin user
Creating an administration (admin) user allows you to access your cluster quickly. Follow these steps to create an admin user.
An admin user works well in this tutorial setting. For actual deployment, use a formal identity provider to access the cluster and grant the user admin privileges.
Run the following command to create the admin user:
rosa create admin --cluster=<cluster-name>
Example output
W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information.
I: Admin account has been added to cluster 'my-rosa-cluster'. It may take up to a minute for the account to become active.
I: To login, run the following command:
oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \
 --username cluster-admin \
 --password FWGYL-2mkJI-00000-00000
Copy the login command returned to you in the previous step and paste it into your terminal. This logs you in to the cluster using the CLI so you can start using the cluster.
$ oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \
>  --username cluster-admin \
>  --password FWGYL-2mkJI-00000-00000
Example output
Login successful.
You have access to 79 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
To check that you are logged in as the admin user, run one of the following commands:
Option 1:
$ oc whoami
Example output
cluster-admin
Option 2:
oc get all -n openshift-apiserver
Only an admin user can run this command without errors.
- You can now use the cluster as an admin user, which will suffice for this tutorial. For actual deployment, it is highly recommended to set up an identity provider, which is explained in the next tutorial.
17.6. Tutorial: Setting up an identity provider
To log in to your cluster, set up an identity provider (IDP). This tutorial uses GitHub as an example IDP. See the full list of IDPs supported by ROSA.
To view all IDP options, run the following command:
rosa create idp --help
17.6.1. Setting up an IDP with GitHub
- Log in to your GitHub account.
Create a new GitHub organization where you are an administrator.
Tip: If you are already an administrator in an existing organization and you want to use that organization, skip to step 9.
Click the + icon, then click New Organization.
- Choose the most applicable plan for your situation or click Join for free.
Enter an organization account name, an email, and whether it is a personal or business account. Then, click Next.
- Optional: Add the GitHub IDs of other users to grant additional access to your ROSA cluster. You can also add them later.
- Click Complete Setup.
- Optional: Enter the requested information on the following page.
- Click Submit.
Go back to the terminal and enter the following command to set up the GitHub IDP:
rosa create idp --cluster=<cluster name> --interactive
Enter the following values:
Type of identity provider: github
Identity Provider Name: <IDP-name>
Restrict to members of: organizations
GitHub organizations: <organization-account-name>
The CLI will provide you with a link. Copy and paste the link into a browser and press Enter. This fills in the required information to register this application for OAuth. You do not need to modify any of the information.
Click Register application.
The next page displays a Client ID. Copy the ID and paste it in the terminal where it asks for Client ID.
Note: Do not close the tab.
The CLI will ask for a Client Secret. Go back to your browser and click Generate a new client secret.
- A secret is generated for you. Copy your secret because it will never be visible again.
- Paste your secret into the terminal and press Enter.
- Leave GitHub Enterprise Hostname blank.
- Select claim.
Wait approximately 1 minute for the IDP to be created and the configuration to land on your cluster.
Copy the returned link and paste it into your browser. The new IDP should be available under your chosen name. Click your IDP and use your GitHub credentials to access the cluster.
17.6.2. Granting other users access to the cluster
To grant access to other cluster users, you will need to add their GitHub user IDs to the GitHub organization used for this cluster.
- In GitHub, go to the Your organizations page.
Click your profile icon, then Your organizations. Then click <your-organization-name>. In our example, it is my-rosa-cluster.
Click Invite someone.
- Enter the GitHub ID of the new user, select the correct user, and click Invite.
- Once the new user accepts the invitation, they will be able to log in to the ROSA cluster using the Hybrid Cloud Console link and their GitHub credentials.
17.7. Tutorial: Granting admin privileges
Administration (admin) privileges are not automatically granted to users that you add to your cluster. If you want to grant admin-level privileges to certain users, you will need to manually grant them to each user. You can grant admin privileges from either the ROSA command line interface (CLI) or the Red Hat OpenShift Cluster Manager web user interface (UI).
Red Hat offers two types of admin privileges:
- cluster-admin: cluster-admin privileges give the admin user full privileges within the cluster.
- dedicated-admin: dedicated-admin privileges allow the admin user to complete most administrative tasks with certain limitations to prevent cluster damage. It is best practice to use dedicated-admin when elevated privileges are needed.
For more information on admin privileges, see the administering a cluster documentation.
17.7.1. Using the ROSA CLI
Assuming you are the user who created the cluster, run one of the following commands to grant admin privileges:
For cluster-admin:
$ rosa grant user cluster-admin --user <idp_user_name> --cluster=<cluster-name>
For dedicated-admin:
$ rosa grant user dedicated-admin --user <idp_user_name> --cluster=<cluster-name>
Verify that the admin privileges were added by running the following command:
$ rosa list users --cluster=<cluster-name>
Example output
$ rosa list users --cluster=my-rosa-cluster
ID                 GROUPS
<idp_user_name>    cluster-admins
If you are currently logged into the Red Hat Hybrid Cloud Console, log out of the console and log back in to the cluster to see a new perspective with the "Administrator Panel". You might need an incognito or private window.
You can also test that admin privileges were added to your account by running the following command. Only a cluster-admin user can run this command without errors.
$ oc get all -n openshift-apiserver
17.7.2. Using the Red Hat OpenShift Cluster Manager UI
- Log in to the OpenShift Cluster Manager.
- Select your cluster.
- Click the Access Control tab.
- Click the Cluster roles and Access tab in the sidebar.
Click Add user.
- On the pop-up screen, enter the user ID.
Select whether you want to grant the user cluster-admins or dedicated-admins privileges.
17.8. Tutorial: Accessing your cluster
You can connect to your cluster using the command line interface (CLI) or the Red Hat Hybrid Cloud Console user interface (UI).
17.8.1. Accessing your cluster using the CLI
To access the cluster using the CLI, you must have the oc CLI installed. If you are following the tutorials, you already installed the oc CLI.
- Log in to the OpenShift Cluster Manager.
- Click your username in the top right corner.
Click Copy Login Command.
This opens a new tab with a choice of identity providers (IDPs). Click the IDP you want to use. For example, "rosa-github".
- A new tab opens. Click Display token.
Run the following command in your terminal:
$ oc login --token=sha256~GBAfS4JQ0t1UTKYHbWAK6OUWGUkdMGz000000000000 --server=https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443
Example output
Logged into "https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You have access to 79 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Confirm that you are logged in by running the following command:
$ oc whoami
Example output
rosa-user
- You can now access your cluster.
17.8.2. Accessing the cluster via the Hybrid Cloud Console
Log in to the OpenShift Cluster Manager.
To retrieve the Hybrid Cloud Console URL run:
rosa describe cluster -c <cluster-name> | grep Console
Click your IDP. For example, "rosa-github".
- Enter your user credentials.
You should be logged in. If you are following the tutorials, you will be a cluster-admin and should see the Hybrid Cloud Console webpage with the Administrator panel visible.
17.9. Tutorial: Managing worker nodes
In Red Hat OpenShift Service on AWS (ROSA), changing aspects of your worker nodes is performed through the use of machine pools. A machine pool allows users to manage many machines as a single entity. Every ROSA cluster has a default machine pool that is created when the cluster is created. For more information, see the machine pool documentation.
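For example, you can view the default machine pool of your cluster with the following command, which is also used later in this tutorial:
rosa list machinepools --cluster=<cluster-name>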
17.9.1. Creating a machine pool
You can create a machine pool with either the command line interface (CLI) or the user interface (UI).
17.9.1.1. Creating a machine pool with the CLI
Run the following command:
rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes>
Example input
$ rosa create machinepool --cluster=my-rosa-cluster --name=new-mp --replicas=2
Example output
I: Machine pool 'new-mp' created successfully on cluster 'my-rosa-cluster' I: To view all machine pools, run 'rosa list machinepools -c my-rosa-cluster'
Optional: Add node labels or taints to specific nodes in a new machine pool by running the following command:
rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> --labels=`<key=pair>`
Example input
$ rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-mp --replicas=2 --labels='app=db','tier=backend'
Example output
I: Machine pool 'db-nodes-mp' created successfully on cluster 'my-rosa-cluster'
This creates an additional 2 nodes that can be managed as a unit and also assigns them the labels shown.
Run the following command to confirm machine pool creation and the assigned labels:
rosa list machinepools --cluster=<cluster-name>
Example output
ID        AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES
Default   No           2         m5.xlarge                      us-east-1a
17.9.1.2. Creating a machine pool with the UI
Log in to the OpenShift Cluster Manager and click your cluster.
Click Machine pools.
- Click Add machine pool.
Enter the desired configuration.
Tip: You can also expand the Edit node labels and taints section to add node labels and taints to the nodes in the machine pool.
You will see the new machine pool you created.
17.9.2. Scaling worker nodes
Edit a machine pool to scale the number of worker nodes in that specific machine pool. You can use either the CLI or the UI to scale worker nodes.
17.9.2.1. Scaling worker nodes using the CLI
Run the following command to see the default machine pool that is created with each cluster:
rosa list machinepools --cluster=<cluster-name>
Example output
ID        AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES
Default   No           2         m5.xlarge                      us-east-1a
To scale the default machine pool out to a different number of nodes, run the following command:
rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> <machinepool-name>
Example input
rosa edit machinepool --cluster=my-rosa-cluster --replicas 3 Default
Run the following command to confirm that the machine pool has scaled:
rosa describe cluster --cluster=<cluster-name> | grep Compute
Example input
$ rosa describe cluster --cluster=my-rosa-cluster | grep Compute
Example output
- Compute: 3 (m5.xlarge)
17.9.2.2. Scaling worker nodes using the UI
- Click the three dots to the right of the machine pool you want to edit.
- Click Edit.
- Enter the desired number of nodes, and click Save.
Confirm that the cluster has scaled by selecting the cluster, clicking the Overview tab, and scrolling to the Compute listing. The Compute listing should equal the scaled nodes. For example, 3/3.
17.9.2.3. Adding node labels
Use the following command to add node labels:
rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> --labels='key=value' <machinepool-name>
Example input
rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one' new-mp
This adds 2 labels to the new machine pool.
This command replaces all machine pool configurations with the newly defined configuration. If you want to add another label and keep the old label, you must state both the new and the preexisting labels. Otherwise, the command will replace all preexisting labels with the one you wanted to add. Similarly, if you want to delete a label, run the command and state the labels you want to keep, excluding the one you want to delete.
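For example, to keep the foo=bar and baz=one labels from the earlier example while adding a hypothetical tier=frontend label, all three labels must be stated together:
rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one','tier=frontend' new-mp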
17.9.3. Mixing node types
You can also mix different worker node machine types in the same cluster by using new machine pools. You cannot change the node type of a machine pool once it is created, but you can create a new machine pool with different nodes by adding the --instance-type flag.
For example, to change the database nodes to a different node type, run the following command:
rosa create machinepool --cluster=<cluster-name> --name=<mp-name> --replicas=<number-nodes> --labels='<key=pair>' --instance-type=<type>
Example input
rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-large-mp --replicas=2 --labels='app=db','tier=backend' --instance-type=m5.2xlarge
To see all the instance types available, run the following command:
rosa list instance-types
To make step-by-step changes, use the --interactive flag:
rosa create machinepool -c <cluster-name> --interactive
Run the following command to list the machine pools and see the new, larger instance type:
rosa list machinepools -c <cluster-name>
17.10. Tutorial: Autoscaling
The cluster autoscaler adds or removes worker nodes from a cluster based on pod resources.
The cluster autoscaler increases the size of the cluster when:
- Pods fail to schedule on the current nodes due to insufficient resources.
- Another node is necessary to meet deployment needs.
The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
The cluster autoscaler decreases the size of the cluster when:
- Some nodes are consistently not needed for a significant period. For example, when a node has low resource use and all of its important pods can fit on other nodes.
17.10.1. Enabling autoscaling for an existing machine pool using the CLI
Cluster autoscaling can be enabled at cluster creation and when creating a new machine pool by using the --enable-autoscaling option.
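For example, a new machine pool could be created with autoscaling already enabled. The machine pool name and replica limits below are illustrative only:
$ rosa create machinepool -c <cluster-name> --name=<machinepool-name> --enable-autoscaling --min-replicas=2 --max-replicas=4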
Autoscaling is set based on machine pool availability. To find out which machine pools are available for autoscaling, run the following command:
$ rosa list machinepools -c <cluster-name>
Example output
ID        AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES
Default   No           2         m5.xlarge                      us-east-1a
Run the following command to add autoscaling to an available machine pool:
$ rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num>
Example input
$ rosa edit machinepool -c my-rosa-cluster --enable-autoscaling Default --min-replicas=2 --max-replicas=4
The above command creates an autoscaler for the worker nodes that scales between 2 and 4 nodes depending on the resources.
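You can confirm the new limits from the CLI by listing the machine pools again:
$ rosa list machinepools -c <cluster-name>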
17.10.2. Enabling autoscaling for an existing machine pool using the UI
Cluster autoscaling can be enabled at cluster creation by checking the Enable autoscaling checkbox when creating machine pools.
- Go to the Machine pools tab and click the three dots on the right of the machine pool.
- Click Scale, then Enable autoscaling.
Run the following command to confirm that autoscaling was added:
$ rosa list machinepools -c <cluster-name>
Example output
ID        AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES
Default   Yes          2-4       m5.xlarge                      us-east-1a
17.11. Tutorial: Upgrading your cluster
Red Hat OpenShift Service on AWS (ROSA) executes all cluster upgrades as part of the managed service. You do not need to run any commands or make changes to the cluster. You can schedule the upgrades at a convenient time.
Ways to schedule a cluster upgrade include:
- Manually using the command line interface (CLI): Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time.
- Manually using the Red Hat OpenShift Cluster Manager user interface (UI): Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time.
- Automated upgrades: Set an upgrade window for recurring z-stream upgrades whenever a new version is available without needing to manually schedule it. Minor versions have to be manually scheduled.
For more details about cluster upgrades, run the following command:
$ rosa upgrade cluster --help
17.11.1. Manually upgrading your cluster using the CLI
Check if there is an upgrade available by running the following command:
$ rosa list upgrade -c <cluster-name>
Example output
$ rosa list upgrade -c <cluster-name>
VERSION  NOTES
4.14.7   recommended
4.14.6
...
In the above example, versions 4.14.7 and 4.14.6 are both available.
Schedule the cluster to upgrade within the hour by running the following command:
$ rosa upgrade cluster -c <cluster-name> --version <desired-version>
Optional: Schedule the cluster to upgrade at a later date and time by running the following command:
$ rosa upgrade cluster -c <cluster-name> --version <desired-version> --schedule-date <future-date-for-update> --schedule-time <future-time-for-update>
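For example, assuming a hypothetical cluster and target version, and the date and time formats accepted by the CLI (yyyy-mm-dd and HH:mm in UTC; check rosa upgrade cluster --help to confirm):
$ rosa upgrade cluster -c my-rosa-cluster --version 4.14.7 --schedule-date 2025-01-15 --schedule-time 12:30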
17.11.2. Manually upgrading your cluster using the UI
- Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade.
- Click Settings.
If an upgrade is available, click Update.
- Select the version to which you want to upgrade in the new window.
- Schedule a time for the upgrade or begin it immediately.
17.11.3. Setting up automatic recurring upgrades
- Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade.
Click Settings.
- Under Update Strategy, click Recurring updates.
- Set the day and time for the upgrade to occur.
- Under Node draining, select a grace period to allow the nodes to drain before pod eviction.
- Click Save.
17.12. Tutorial: Deleting your cluster
You can delete your Red Hat OpenShift Service on AWS (ROSA) cluster using either the command line interface (CLI) or the user interface (UI).
17.12.1. Deleting a ROSA cluster using the CLI
Optional: List your clusters to make sure you are deleting the correct one by running the following command:
$ rosa list clusters
Delete a cluster by running the following command:
$ rosa delete cluster --cluster <cluster-name>
Warning: This command is non-recoverable.
The CLI prompts you to confirm that you want to delete the cluster. Press y and then Enter. The cluster and all its associated infrastructure will be deleted.
Note: All AWS STS and IAM roles and policies will remain and must be deleted manually by following the steps below once the cluster deletion is complete.
The CLI outputs the commands to delete the OpenID Connect (OIDC) provider and Operator IAM roles resources that were created. Wait until the cluster finishes deleting before deleting these resources. Perform a quick status check by running the following command:
$ rosa list clusters
Once the cluster is deleted, delete the OIDC provider by running the following command:
$ rosa delete oidc-provider -c <clusterID> --mode auto --yes
Delete the Operator IAM roles by running the following command:
$ rosa delete operator-roles -c <clusterID> --mode auto --yes
Note: This command requires the cluster ID and not the cluster name.
Only remove the remaining account roles if they are no longer needed by other clusters in the same account. If you want to create other ROSA clusters in this account, do not perform this step.
To delete the account roles, you need to know the prefix used when creating them. The default is "ManagedOpenShift" unless you specified otherwise.
Delete the account roles by running the following command:
$ rosa delete account-roles --prefix <prefix> --mode auto --yes
17.12.2. Deleting a ROSA cluster using the UI
- Log in to the OpenShift Cluster Manager, and locate the cluster you want to delete.
Click the three dots to the right of the cluster.
In the dropdown menu, click Delete cluster.
- Enter the name of the cluster to confirm deletion, and click Delete.
17.13. Tutorial: Obtaining support
Finding the right help when you need it is important. These are some of the resources at your disposal when you need assistance.
17.13.1. Adding support contacts
You can add additional email addresses for communications about your cluster.
- On the Red Hat OpenShift Cluster Manager user interface (UI), select your cluster.
- Click the Support tab.
- Click Add notification contact, and enter the additional email addresses.
17.13.2. Contacting Red Hat for support using the UI
- On the OpenShift Cluster Manager UI, click the Support tab.
- Click Open support case.
17.13.3. Contacting Red Hat for support using the support page
- Go to the Red Hat support page.
- Click Open a new Case.
- Log in to your Red Hat account.
- Select the reason for contacting support.
- Select Red Hat OpenShift Service on AWS.
- Click Continue.
- Enter a summary of the issue and the details of your request. Upload any files, logs, and screenshots. The more details you provide, the better Red Hat support can help your case.
NoteRelevant suggestions that might help with your issue will appear at the bottom of this page.
- Click Continue.
- Answer the questions in the new fields.
- Click Continue.
- Enter the following information about your case:
- Support level: Premium
- Severity: Review the Red Hat Support Severity Level Definitions to choose the correct one.
- Group: If this is related to a few other cases you can select the corresponding group.
- Language
- Send notifications: Add any additional email addresses to keep notified of activity.
- Red Hat associates: If you are working with anyone from Red Hat and want to keep them in the loop you can enter their email address here.
- Alternate Case ID: If you want to attach your own ID to it you can enter it here.
- Click Continue.
- On the review screen, make sure you select the correct cluster ID that you are contacting support about.
- Click Submit.
- You will be contacted based on the response time commitment for the indicated severity level.
Chapter 18. Deploying an application
18.1. Tutorial: Deploying an application
18.1.1. Introduction
After successfully provisioning your cluster, you can deploy an application on it. This application allows you to become more familiar with some of the features of Red Hat OpenShift Service on AWS (ROSA) and Kubernetes.
18.1.1.1. Lab overview
In this lab, you will complete the following set of tasks designed to help you understand the concepts of deploying and operating container-based applications:
- Deploy a Node.js based app by using S2I and Kubernetes Deployment objects.
- Set up a continuous delivery (CD) pipeline to automatically deploy source code changes.
- Explore logging.
- Experience self healing of applications.
- Explore configuration management through configmaps, secrets, and environment variables.
- Use persistent storage to share data across pod restarts.
- Explore networking within Kubernetes and applications.
- Familiarize yourself with ROSA and Kubernetes functionality.
- Automatically scale pods based on load by using the Horizontal Pod Autoscaler.
- Use AWS Controllers for Kubernetes (ACK) to deploy and use an S3 bucket.
This lab uses either the ROSA CLI or ROSA web user interface (UI).
18.2. Tutorial: Deploying an application
18.2.1. Prerequisites
A Provisioned ROSA cluster
This lab assumes you have access to a successfully provisioned ROSA cluster. If you have not yet created a ROSA cluster, see the ROSA quick start guide for more information.
The OpenShift Command Line Interface (CLI)
For more information, see Getting started with the OpenShift CLI.
A GitHub Account
Use your existing GitHub account or register at https://github.com/signup.
18.2.1.1. Understanding AWS account association
Before you can use Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), you must associate your AWS account with your Red Hat organization. You can associate your account by creating and linking the following IAM roles.
- OpenShift Cluster Manager role
Create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization.
You can apply basic or administrative permissions to the OpenShift Cluster Manager role. The basic permissions enable cluster maintenance using OpenShift Cluster Manager. The administrative permissions enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using OpenShift Cluster Manager.
- User role
Create a user IAM role and link it to your Red Hat user account. The Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role.
The user role is used by Red Hat to verify your AWS identity when you use the OpenShift Cluster Manager Hybrid Cloud Console to install a cluster and the required STS resources.
Associating your AWS account with your Red Hat organization
Before using Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization. Then, create a user IAM role and link it to your Red Hat user account in the same Red Hat organization.
Procedure
Create an OpenShift Cluster Manager role and link it to your Red Hat organization:
NoteTo enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using the OpenShift Cluster Manager Hybrid Cloud Console, you must apply the administrative privileges to the role by choosing the Admin OCM role command in the Accounts and roles step of creating a ROSA cluster. For more information about the basic and administrative privileges for the OpenShift Cluster Manager role, see Understanding AWS account association.
NoteIf you choose the Basic OCM role command in the Accounts and roles step of creating a ROSA cluster in the OpenShift Cluster Manager Hybrid Cloud Console, you must deploy a ROSA cluster using manual mode. You will be prompted to configure the cluster-specific Operator roles and the OpenID Connect (OIDC) provider in a later step.
$ rosa create ocm-role
Select the default values at the prompts to quickly create and link the role.
Create a user role and link it to your Red Hat user account:
$ rosa create user-role
Select the default values at the prompts to quickly create and link the role.
NoteThe Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role.
18.3. Tutorial: Deploying an application
18.3.1. Lab overview
18.3.1.1. Lab resources
- Source code for the OSToy application
- OSToy front-end container image
- OSToy microservice container image
Deployment Definition YAML files:
ostoy-frontend-deployment.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ostoy-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ostoy-frontend
  labels:
    app: ostoy
spec:
  selector:
    matchLabels:
      app: ostoy-frontend
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: ostoy-frontend
    spec:
      # Uncomment to use with ACK portion of the workshop
      # If you chose a different service account name please replace it.
      # serviceAccount: ostoy-sa
      containers:
      - name: ostoy-frontend
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop:
            - ALL
        image: quay.io/ostoylab/ostoy-frontend:1.6.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: ostoy-port
          containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "200m"
        volumeMounts:
        - name: configvol
          mountPath: /var/config
        - name: secretvol
          mountPath: /var/secret
        - name: datavol
          mountPath: /var/demo_files
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        env:
        - name: ENV_TOY_SECRET
          valueFrom:
            secretKeyRef:
              name: ostoy-secret-env
              key: ENV_TOY_SECRET
        - name: MICROSERVICE_NAME
          value: OSTOY_MICROSERVICE_SVC
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: configvol
        configMap:
          name: ostoy-configmap-files
      - name: secretvol
        secret:
          defaultMode: 420
          secretName: ostoy-secret
      - name: datavol
        persistentVolumeClaim:
          claimName: ostoy-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ostoy-frontend-svc
  labels:
    app: ostoy-frontend
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: ostoy-port
    protocol: TCP
    name: ostoy
  selector:
    app: ostoy-frontend
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ostoy-route
spec:
  to:
    kind: Service
    name: ostoy-frontend-svc
---
apiVersion: v1
kind: Secret
metadata:
  name: ostoy-secret-env
type: Opaque
data:
  ENV_TOY_SECRET: VGhpcyBpcyBhIHRlc3Q=
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ostoy-configmap-files
data:
  config.json: '{ "default": "123" }'
---
apiVersion: v1
kind: Secret
metadata:
  name: ostoy-secret
data:
  secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
type: Opaque
ostoy-microservice-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ostoy-microservice
  labels:
    app: ostoy
spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 1
  template:
    metadata:
      labels:
        app: ostoy-microservice
    spec:
      containers:
      - name: ostoy-microservice
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop:
            - ALL
        image: quay.io/ostoylab/ostoy-microservice:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: "128Mi"
            cpu: "50m"
          limits:
            memory: "256Mi"
            cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: ostoy-microservice-svc
  labels:
    app: ostoy-microservice
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: ostoy-microservice
S3 bucket manifest for ACK S3
s3-bucket.yaml
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: ostoy-bucket
  namespace: ostoy
spec:
  name: ostoy-bucket
To simplify deployment of the OSToy application, all of the objects required in the above deployment manifests are grouped together. For a typical enterprise deployment, a separate manifest file for each Kubernetes object is recommended.
18.3.1.2. About the OSToy application
OSToy is a simple Node.js application that you will deploy to a ROSA cluster to help explore the functionality of Kubernetes. This application has a user interface where you can:
- Write messages to the log (stdout / stderr).
- Intentionally crash the application to view self-healing.
- Toggle a liveness probe and monitor OpenShift behavior.
- Read config maps, secrets, and env variables.
- If connected to shared storage, read and write files.
- Check network connectivity, intra-cluster DNS, and intra-cluster communication with the included microservice.
- Increase the load to view automatic scaling of the pods to handle the load using the Horizontal Pod Autoscaler.
- Optional: Connect to an AWS S3 bucket to read and write objects.
18.3.1.3. OSToy Application Diagram
18.3.1.4. Understanding the OSToy UI
- Shows the pod name that served your browser the page.
- Home: The main page of the application where you can perform some of the functions listed which we will explore.
- Persistent Storage: Allows you to write data to the persistent volume bound to this application.
- Config Maps: Shows the contents of configmaps available to the application and the key:value pairs.
- Secrets: Shows the contents of secrets available to the application and the key:value pairs.
- ENV Variables: Shows the environment variables available to the application.
- Networking: Tools to illustrate networking within the application.
- Pod Auto Scaling: Tool to increase the load of the pods and test the HPA.
- ACK S3: Optional: Integrate with AWS S3 to read and write objects to a bucket.
NoteIn order to see the "ACK S3" section of OSToy, you must complete the ACK section of this workshop. If you decide not to complete that section, the OSToy application will still function.
- About: Displays more information about the application.
18.4. Tutorial: Deploying an application
18.4.1. Deploying the OSToy application with Kubernetes
You can deploy the OSToy application by creating and storing the images for the front-end and back-end microservice containers in an image repository. You can then create Kubernetes deployments to deploy the application.
18.4.1.1. Retrieving the login command
- If you are not logged in to the CLI, access your cluster with the web console.
- Click the dropdown arrow next to your login name in the upper right, and select Copy Login Command. A new tab opens.
- Select your authentication method.
- Click Display Token.
- Copy the command under Log in with this token.
- From your terminal, paste and run the copied command. If the login is successful, you will see the following confirmation message:
$ oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You don't have any projects. You can try to create a new project, by running
    oc new-project <project name>
18.4.1.2. Creating a new project
18.4.1.2.1. Using the CLI
Create a new project named ostoy in your cluster by running the following command:
$ oc new-project ostoy
Example output
Now using project "ostoy" on server "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443".
Optional: Alternatively, create a unique project name by running the following command:
$ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
18.4.1.2.2. Using the web console
- From the web console, click Home → Projects.
- On the Projects page, click Create Project.
18.4.1.3. Deploying the back-end microservice
The microservice serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.
Deploy the microservice by running the following command from your terminal:
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
Example output
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created
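If you want to see the microservice's JSON response before the front end exists, one option is to port-forward the service locally and query it. This is only an illustrative check; the service name and port come from the manifest above, and the exact JSON fields are whatever the microservice returns:
$ oc port-forward svc/ostoy-microservice-svc 8080:8080 &
$ curl -s http://localhost:8080/
$ kill %1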
18.4.1.4. Deploying the front-end service
The front-end deployment uses the Node.js front-end for the application and additional Kubernetes objects.
The ostoy-frontend-deployment.yaml file shows that the front-end deployment defines the following features:
- Persistent volume claim
- Deployment object
- Service
- Route
- Configmaps
- Secrets
Deploy the application front-end and create all of the objects by entering the following command:
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml
Example output
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
route.route.openshift.io/ostoy-route created
configmap/ostoy-configmap-env created
secret/ostoy-secret-env created
configmap/ostoy-configmap-files created
secret/ostoy-secret created
You should see all objects created successfully.
18.4.1.5. Getting the route
You must get the route to access the application.
Get the route to your application by running the following command:
$ oc get route
Example output
NAME          HOST/PORT                                                               PATH   SERVICES             PORT    TERMINATION   WILDCARD
ostoy-route   ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com           ostoy-frontend-svc   <all>                 None
18.4.1.6. Viewing the application
- Copy the ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com URL output from the previous step. Paste the copied URL into your web browser and press enter. You should see the homepage of your application. If the page does not load, make sure you use http and not https.
18.5. Tutorial: Health Check
You can see how Kubernetes responds to pod failure by intentionally crashing your pod and making it unresponsive to the Kubernetes liveness probes.
18.5.1. Preparing your desktop
Split your desktop screen between the OpenShift web console and the OSToy application web console so that you can see the results of your actions immediately.
If you cannot split your screen, open the OSToy application web console in another tab so you can quickly switch to the OpenShift web console after activating the features in the application.
From the OpenShift web console, select Workloads > Deployments > ostoy-frontend to view the OSToy deployment.
18.5.2. Crashing the pod
- From the OSToy application web console, click Home in the left menu, and enter a message in the Crash Pod box, for example, This is goodbye!. Then click Crash Pod.
The pod crashes and Kubernetes should restart the pod.
18.5.3. Viewing the revived pod
From the OpenShift web console, quickly switch to the Deployments screen. You will see that the pod turns yellow, meaning it is down. It should quickly revive and turn blue. The revival process happens quickly so you might miss it.
Verification
From the web console, click Pods > ostoy-frontend-xxxxxxx-xxxx to change to the pods screen.
Click the Events sub-tab and verify that the container crashed and restarted.
18.5.4. Making the application malfunction
Keep the pod events page open from the previous procedure.
From the OSToy application, click Toggle Health in the Toggle Health Status tile. Watch Current Health switch to I’m not feeling all that well.
Verification
After the previous step, the application stops responding with a 200 HTTP code. After 3 consecutive failures, Kubernetes will stop the pod and restart it. From the web console, switch back to the pod events page and you will see that the liveness probe failed and the pod restarted.
The following image shows an example of what you should see on your pod events page.
A. The pod has three consecutive failures.
B. Kubernetes stops the pod.
C. Kubernetes restarts the pod.
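The three consecutive failures come from the liveness probe that the front-end deployment above already defines; the Kubernetes default failureThreshold is 3. For reference, this is the relevant excerpt from ostoy-frontend-deployment.yaml:
livenessProbe:
  httpGet:
    path: /health      # the endpoint that Toggle Health makes unhealthy
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5     # probed every 5 seconds; failureThreshold defaults to 3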
18.6. Tutorial: Persistent volumes for cluster storage
Red Hat OpenShift Service on AWS (ROSA) (classic architecture) and Red Hat OpenShift Service on AWS (ROSA) support storing persistent volumes with either Amazon Web Services (AWS) Elastic Block Store (EBS) or AWS Elastic File System (EFS).
18.6.1. Using persistent volumes
Use the following procedures to create a file, store it on a persistent volume in your cluster, and confirm that it still exists after pod failure and re-creation.
18.6.1.1. Viewing a persistent volume claim
- Navigate to the cluster’s OpenShift web console.
- Click Storage in the left menu, then click PersistentVolumeClaims to see a list of all the persistent volume claims.
- Click a persistent volume claim to see the size, access mode, storage class, and other additional claim details.
NoteThe access mode is ReadWriteOnce (RWO). This means that the volume can only be mounted to one node and the pod or pods can read and write to the volume.
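You can confirm the same detail from the CLI. The command below simply reads the access mode from the claim defined in the deployment manifest, assuming the ostoy project is currently selected:
$ oc get pvc ostoy-pvc -o jsonpath='{.spec.accessModes}{"\n"}'
["ReadWriteOnce"]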
18.6.1.2. Storing your file
- In the OSToy app console, click Persistent Storage in the left menu.
- In the Filename box, enter a file name with a .txt extension, for example test-pv.txt.
- In the File contents box, enter a sentence of text, for example OpenShift is the greatest thing since sliced bread!.
- Click Create file.
- Scroll to Existing files on the OSToy app console.
- Click the file you created to see the file name and contents.
18.6.1.3. Crashing the pod
- On the OSToy app console, click Home in the left menu.
- Click Crash pod.
18.6.1.4. Confirming persistent storage
- Wait for the pod to re-create.
- On the OSToy app console, click Persistent Storage in the left menu.
- Find the file you created, and open it to view and confirm the contents.
Verification
The deployment YAML file shows that we mounted the directory /var/demo_files to our persistent volume claim.
Retrieve the name of your front-end pod by running the following command:
$ oc get pods
Start a secure shell (SSH) session in your container by running the following command:
$ oc rsh <pod_name>
Go to the directory by running the following command:
$ cd /var/demo_files
Optional: See all the files you created by running the following command:
$ ls
Open the file to view the contents by running the following command:
$ cat test-pv.txt
Verify that the output is the text you entered in the OSToy app console.
Example terminal
$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fc8d486dc-wsw24       1/1     Running   0          18m
ostoy-microservice-6cf764974f-hx4qm   1/1     Running   0          18m

$ oc rsh ostoy-frontend-5fc8d486dc-wsw24

$ cd /var/demo_files/

$ ls
lost+found  test-pv.txt

$ cat test-pv.txt
OpenShift is the greatest thing since sliced bread!
18.6.1.5. Ending the session
- Type exit in your terminal to quit the session and return to the CLI.
18.6.2. Additional resources
- For more information about persistent volume storage, see Understanding persistent storage.
- For more information about ROSA storage options, see Storage overview.
18.7. Tutorial: ConfigMaps, secrets, and environment variables
This tutorial shows how to configure the OSToy application by using config maps, secrets, and environment variables. For more information, see these linked topics.
18.7.1. Configuration using ConfigMaps
Config maps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.
Procedure
In the OSToy app, in the left menu, click Config Maps, displaying the contents of the config map available to the OSToy application. The code snippet shows an example of a config map configuration:
Example output
kind: ConfigMap
apiVersion: v1
metadata:
  name: ostoy-configmap-files
data:
  config.json: '{ "default": "123" }'
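For reference, the front-end deployment above consumes this config map by mounting it as a volume at /var/config, which is where the application reads it. The relevant excerpt from ostoy-frontend-deployment.yaml is:
volumeMounts:
- name: configvol
  mountPath: /var/config
volumes:
- name: configvol
  configMap:
    name: ostoy-configmap-files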
18.7.2. Configuration using secrets
Kubernetes Secret objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a secret is safer and more flexible than putting it in plain text into a pod definition or a container image.
Procedure
In the OSToy app, in the left menu, click Secrets, displaying the contents of the secrets available to the OSToy application. The code snippet shows an example of a secret configuration:
Example output
USERNAME=my_user
PASSWORD=VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
SMTP=localhost
SMTP_PORT=25
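For reference, the front-end deployment above consumes secrets in both ways shown in this lab: ostoy-secret-env is injected as the ENV_TOY_SECRET environment variable, and ostoy-secret is mounted as files under /var/secret. The relevant excerpt from ostoy-frontend-deployment.yaml is:
env:
- name: ENV_TOY_SECRET
  valueFrom:
    secretKeyRef:
      name: ostoy-secret-env
      key: ENV_TOY_SECRET
volumeMounts:
- name: secretvol
  mountPath: /var/secret
volumes:
- name: secretvol
  secret:
    defaultMode: 420
    secretName: ostoy-secret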
18.7.3. Configuration using environment variables
Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to potentially behave differently based on the environment variables. Red Hat OpenShift Service on AWS makes it simple to set, view, and update environment variables for pods or deployments.
Procedure
In the OSToy app, in the left menu, click ENV Variables, displaying the environment variables available to the OSToy application. The code snippet shows an example of an environmental variable configuration:
Example output
{
  "npm_config_local_prefix": "/opt/app-root/src",
  "STI_SCRIPTS_PATH": "/usr/libexec/s2i",
  "npm_package_version": "1.7.0",
  "APP_ROOT": "/opt/app-root",
  "NPM_CONFIG_PREFIX": "/opt/app-root/src/.npm-global",
  "OSTOY_MICROSERVICE_PORT_8080_TCP_PORT": "8080",
  "NODE": "/usr/bin/node",
  "LD_PRELOAD": "libnss_wrapper.so",
  "KUBERNETES_SERVICE_HOST": "172.30.0.1",
  "OSTOY_MICROSERVICE_PORT": "tcp://172.30.60.255:8080",
  "OSTOY_PORT": "tcp://172.30.152.25:8080",
  "npm_package_name": "ostoy",
  "OSTOY_SERVICE_PORT_8080_TCP": "8080",
  "_": "/usr/bin/node",
  "ENV_TOY_CONFIGMAP": "ostoy-configmap-env"
}
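If you want to try changing an environment variable yourself, one option is oc set env on the deployment. The variable name and value below are examples only, and setting a variable triggers a new rollout of the pods:
$ oc set env deployment/ostoy-frontend EXAMPLE_FLAG=demo-value
$ oc set env deployment/ostoy-frontend --list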
18.8. Tutorial: Networking
This tutorial shows how the OSToy app uses intra-cluster networking to separate functions by using microservices and visualize the scaling of pods.
The diagram shows there are at least two separate pods, each with its own service.
One pod functions as the front-end web application with a service and a publicly accessible route. The other pod functions as the back-end microservice with a service object so that the front-end pod can communicate with the microservice. This communication occurs across pods if there is more than one. Because of these communication limits, this microservice is not accessible from outside this cluster or from other namespaces or projects, if these are configured. The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname, which is the pod's name, and a randomly generated color string. This color string is used to display a box with that color in the tile titled "Intra-cluster Communication".
For more information about the networking limitations, see About network policy.
18.8.1. Intra-cluster networking
You can view your networking configurations in your OSToy application.
Procedure
- In the OSToy application, click Networking in the left menu.
- Review the networking configuration. The right tile titled "Hostname Lookup" illustrates how the service name created for a pod can be used to translate into an internal ClusterIP address.
- Enter the name of the microservice in the "Hostname Lookup" tile following the format of <service_name>.<namespace>.svc.cluster.local. You can find this service name in the service definition of ostoy-microservice.yaml by running the following command:
$ oc get service <name_of_service> -o yaml
Example output
apiVersion: v1
kind: Service
metadata:
  name: ostoy-microservice-svc
  labels:
    app: ostoy-microservice
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: ostoy-microservice
In this example, the full hostname is ostoy-microservice-svc.ostoy.svc.cluster.local. You see an IP address returned. In this example it is 172.30.165.246. This is the intra-cluster IP address, which is only accessible from within the cluster.
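You can confirm from the CLI that this is the Service's ClusterIP; the address will differ in your cluster:
$ oc get service ostoy-microservice-svc -o jsonpath='{.spec.clusterIP}{"\n"}'
172.30.165.246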
18.9. Tutorial: Scaling an application
18.9.1. Scaling
You can manually or automatically scale your pods by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.
18.9.1.1. Manual pod scaling
You can manually scale your application’s pods by using one of the following methods:
- Changing your ReplicaSet or deployment definition
- Using the command line
- Using the web console
This workshop starts by using only one pod for the microservice. By defining a replica count of 1 in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the Horizontal Pod Autoscaler (HPA), which scales out more pods, beyond your initial definition, when high load is experienced.
Prerequisites
- An active ROSA cluster
- A deployed OSToy application
Procedure
- In the OSToy app, click the Networking tab in the navigational menu.
- In the "Intra-cluster Communication" section, locate the box beneath "Remote Pods" that randomly changes colors. Inside the box, you see the microservice's pod name. There is only one box in this example because there is only one microservice pod.
Confirm that there is only one pod running for the microservice by running the following command:
$ oc get pods
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-679cb85695-5cn7x       1/1     Running   0          1h
ostoy-microservice-86b4c6f559-p594d   1/1     Running   0          1h
- Download the ostoy-microservice-deployment.yaml and save it to your local machine.
Change the deployment definition to three pods instead of one by using the following example:
spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 3
Apply the replica changes by running the following command:
$ oc apply -f ostoy-microservice-deployment.yaml
NoteYou can also edit the ostoy-microservice-deployment.yaml file in the OpenShift Web Console by going to the Workloads > Deployments > ostoy-microservice > YAML tab.
Confirm that there are now 3 pods by running the following command:
$ oc get pods
The output shows that there are now 3 pods for the microservice instead of only one.
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          26m
ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          81s
ostoy-microservice-6666dcf455-5z56w   1/1     Running   0          81s
ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m
Scale the application by using the CLI or by using the web UI:
In the CLI, decrease the number of pods from 3 to 2 by running the following command:
$ oc scale deployment ostoy-microservice --replicas=2
- From the navigational menu of the OpenShift web console UI, click Workloads > Deployments > ostoy-microservice.
- On the left side of the page, locate the blue circle with a "3 Pod" label in the middle.
- Selecting the arrows next to the circle scales the number of pods. Select the down arrow to 2.
Verification
Check your pod counts by using the CLI, the web UI, or the OSToy app:
From the CLI, confirm that you are using two pods for the microservice by running the following command:
$ oc get pods
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          75m
ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          50m
ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          75m
In the web UI, select Workloads > Deployments > ostoy-microservice.
You can also confirm that there are two pods in use by selecting Networking in the navigational menu of the OSToy app. There should be two colored boxes for the two pods.
18.9.1.2. Pod Autoscaling
Red Hat OpenShift Service on AWS offers a Horizontal Pod Autoscaler (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.
Procedure
From the navigational menu of the web UI, select Pod Auto Scaling.
Create the HPA by running the following command:
$ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. Throughout the deployment, the HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% (and 40 millicores).
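The oc autoscale command is shorthand for creating a HorizontalPodAutoscaler resource. A roughly equivalent manifest, shown here only for illustration, looks like this:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ostoy-microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ostoy-microservice
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80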
On the Pod Auto Scaling > Horizontal Pod Autoscaling page, select Increase the load.
ImportantBecause increasing the load generates CPU intensive calculations, the page can become unresponsive. This is an expected response. Click Increase the Load only once. For more information about the process, see the microservice’s GitHub repository.
After a few minutes, the new pods display on the page represented by colored boxes.
NoteThe page can experience lag.
Verification
Check your pod counts with one of the following methods:
In the OSToy application’s web UI, see the remote pods box:
Because there is only one pod, increasing the workload should trigger an increase of pods.
In the CLI, run the following command:
$ oc get pods --field-selector=status.phase=Running | grep microservice
Example output
ostoy-microservice-79894f6945-cdmbd   1/1   Running   0   3m14s
ostoy-microservice-79894f6945-mgwk7   1/1   Running   0   4h24m
ostoy-microservice-79894f6945-q925d   1/1   Running   0   3m14s
You can also verify autoscaling from the OpenShift Cluster Manager
- In the OpenShift web console navigational menu, click Observe > Dashboards.
- In the dashboard, select Kubernetes / Compute Resources / Namespace (Pods) and your namespace ostoy.
A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph:
- The load increased (A).
- Two new pods were created (B and C).
- The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
- The load decreased (D), and the pods were deleted.
18.9.1.3. Node Autoscaling
Red Hat OpenShift Service on AWS allows you to use node autoscaling. In this scenario, you will create a new project with a job that has a large workload that the cluster cannot handle. With autoscaling enabled, when the load is larger than your current capacity, the cluster will automatically create new nodes to handle the load.
Prerequisites
- Autoscaling is enabled on your machine pools.
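If autoscaling is not yet enabled, one way to turn it on for an existing machine pool is with the rosa CLI. The machine pool ID and replica limits below are placeholders; adjust them for your cluster:
$ rosa list machinepools -c <cluster-name>
$ rosa edit machinepool <machinepool-id> -c <cluster-name> --enable-autoscaling --min-replicas=2 --max-replicas=4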
Procedure
Create a new project called autoscale-ex by running the following command:
$ oc new-project autoscale-ex
Create the job by running the following command:
$ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
After a few minutes, run the following command to see the pods:
$ oc get pods
Example output
NAME                     READY   STATUS    RESTARTS   AGE
work-queue-5x2nq-24xxn   0/1     Pending   0          10s
work-queue-5x2nq-57zpt   0/1     Pending   0          10s
work-queue-5x2nq-58bvs   0/1     Pending   0          10s
work-queue-5x2nq-6c5tl   1/1     Running   0          10s
work-queue-5x2nq-7b84p   0/1     Pending   0          10s
work-queue-5x2nq-7hktm   0/1     Pending   0          10s
work-queue-5x2nq-7md52   0/1     Pending   0          10s
work-queue-5x2nq-7qgmp   0/1     Pending   0          10s
work-queue-5x2nq-8279r   0/1     Pending   0          10s
work-queue-5x2nq-8rkj2   0/1     Pending   0          10s
work-queue-5x2nq-96cdl   0/1     Pending   0          10s
work-queue-5x2nq-96tfr   0/1     Pending   0          10s
- Because there are many pods in a Pending state, this status should trigger the autoscaler to create more nodes in your machine pool. Allow time to create these worker nodes.
After a few minutes, use the following command to see how many worker nodes you now have:
$ oc get nodes
Example output
NAME                                         STATUS   ROLES          AGE     VERSION
ip-10-0-138-106.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
ip-10-0-153-68.us-west-2.compute.internal    Ready    worker         2m12s   v1.23.5+3afdacb
ip-10-0-165-183.us-west-2.compute.internal   Ready    worker         2m8s    v1.23.5+3afdacb
ip-10-0-176-123.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
ip-10-0-195-210.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
ip-10-0-196-84.us-west-2.compute.internal    Ready    master         23h     v1.23.5+3afdacb
ip-10-0-203-104.us-west-2.compute.internal   Ready    worker         2m6s    v1.23.5+3afdacb
ip-10-0-217-202.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
ip-10-0-225-141.us-west-2.compute.internal   Ready    worker         23h     v1.23.5+3afdacb
ip-10-0-231-245.us-west-2.compute.internal   Ready    worker         2m11s   v1.23.5+3afdacb
ip-10-0-245-27.us-west-2.compute.internal    Ready    worker         2m8s    v1.23.5+3afdacb
ip-10-0-245-7.us-west-2.compute.internal     Ready    worker         23h     v1.23.5+3afdacb
You can see the worker nodes were automatically created to handle the workload.
Return to the OSToy app by entering the following command:
$ oc project ostoy
18.10. Tutorial: Logging
There are various methods to view your logs in Red Hat OpenShift Service on AWS (ROSA). Use the following procedures to forward the logs to AWS CloudWatch and view the logs directly through the pod by using oc logs.
ROSA is not preconfigured with a logging solution.
18.10.1. Forwarding logs to CloudWatch
Install the logging add-on service to forward the logs to AWS CloudWatch.
Run the following script to configure your ROSA cluster to forward logs to CloudWatch:
$ curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/configure-cloudwatch.sh | bash
NoteConfiguring ROSA to send logs to CloudWatch goes beyond the scope of this tutorial. Integrating with AWS and enabling CloudWatch logging are important aspects of ROSA, so a script is included to simplify the configuration process. The script will automatically set up AWS CloudWatch. You can examine the script to understand the steps involved.
Example output
Varaibles are set...ok.
Policy already exists...ok.
Created RosaCloudWatch-mycluster role.
Attached role policy.
Deploying the Red Hat OpenShift Logging Operator
namespace/openshift-logging configured
operatorgroup.operators.coreos.com/cluster-logging created
subscription.operators.coreos.com/cluster-logging created
Waiting for Red Hat OpenShift Logging Operator deployment to complete...
Red Hat OpenShift Logging Operator deployed.
secret/cloudwatch-credentials created
clusterlogforwarder.logging.openshift.io/instance created
clusterlogging.logging.openshift.io/instance created
Complete.
After a few minutes, you should begin to see log groups inside of AWS CloudWatch. Run the following command to see the log groups:
$ aws logs describe-log-groups --log-group-name-prefix rosa-mycluster
Example output
{
    "logGroups": [
        {
            "logGroupName": "rosa-mycluster.application",
            "creationTime": 1724104537717,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application:*",
            "storedBytes": 0,
            "logGroupClass": "STANDARD",
            "logGroupArn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application"
        },
        {
            "logGroupName": "rosa-mycluster.audit",
            "creationTime": 1724104152968,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit:*",
            "storedBytes": 0,
            "logGroupClass": "STANDARD",
            "logGroupArn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit"
        },
18.10.2. Outputting the data to the streams and logs
Output a message to stdout:
- In the OSToy application, click Home and then click the message box for Log Message (stdout).
- Write a message to output to the stdout stream, for example "All is well!". Click Send Message.
Output a message to stderr:
- Click the message box for Log Message (stderr).
- Write a message to output to the stderr stream, for example "Oh no! Error!". Click Send Message.
18.10.3. Viewing the application logs by using the oc command
Enter the following command in the command line interface (CLI) to retrieve the name of your frontend pod:
$ oc get pods -o name
Example output
pod/ostoy-frontend-679cb85695-5cn7x
pod/ostoy-microservice-86b4c6f559-p594d
The pod name is ostoy-frontend-679cb85695-5cn7x.
Run the following command to see both the stdout and stderr messages:
$ oc logs <pod-name>
Example output
$ oc logs ostoy-frontend-679cb85695-5cn7x
[...]
ostoy-frontend-679cb85695-5cn7x: server starting on port 8080
Redirecting to /home
stdout: All is well!
stderr: Oh no! Error!
18.10.4. Viewing the logs with CloudWatch
- Navigate to CloudWatch on the AWS web console.
- In the left menu, click Logs and then Log groups to see the different groups of logs. You should see 3 groups:
- rosa-<cluster-name>.application
- rosa-<cluster-name>.audit
- rosa-<cluster-name>.infrastructure
- Click rosa-<cluster-name>.application. Click the log stream for the frontend pod.
- Filter for stdout and stderr. Expand the row to show the messages you entered earlier and other pertinent information.
- Return to the log streams and select the microservice.
- Enter "microservice" in the search bar to see other messages in your logs.
- Expand one of the entries to see the color the frontend pod received from the microservice and which pod sent that color to the frontend pod.
Additional resources
18.11. Tutorial: S2I deployments
There are several methods to deploy applications in OpenShift. This tutorial explores using the integrated Source-to-Image (S2I) builder. As mentioned in the OpenShift concepts section, S2I is a tool for building reproducible, Docker-formatted container images.
18.11.1. Prerequisites
The following requirements must be completed before you can use this tutorial.
- You have created a ROSA cluster.
Retrieve your login command
- If you are not logged in via the CLI, in OpenShift Cluster Manager, click the dropdown arrow next to your name in the upper-right and select Copy Login Command.
- A new tab opens. Enter your username and password, and select the authentication method.
- Click Display Token.
- Copy the command under "Log in with this token".
Log in to the command line interface (CLI) by running the copied command in your terminal. You should see something similar to the following:
$ oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
Example output
Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You don't have any projects. You can try to create a new project, by running
    oc new-project <project name>
Create a new project from the CLI by running the following command:
$ oc new-project ostoy-s2i
18.11.2. Fork the OSToy repository
The next section focuses on triggering automated builds based on changes to the source code. You must set up a GitHub webhook to trigger S2I builds when you push code into your GitHub repo. To set up the webhook, you must first fork the repo.
Replace <UserName> with your own GitHub username for the following URLs in this guide.
18.11.3. Using S2I to deploy OSToy on your cluster
Add secret to OpenShift
The example emulates a .env file and shows how easy it is to move these directly into an OpenShift environment. Files can even be renamed in the Secret. In your CLI, enter the following command, replacing <UserName> with your GitHub username:
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml
Add ConfigMap to OpenShift
The example emulates an HAProxy config file, and is typically used for overriding default configurations in an OpenShift application. Files can even be renamed in the ConfigMap.
In your CLI, enter the following command, replacing <UserName> with your GitHub username:
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml
Deploy the microservice
You must deploy the microservice first to ensure that the SERVICE environment variables are available from the UI application.
--context-dir is used here to only build the application defined in the microservice directory in the git repository. Using the app label allows us to ensure the UI application and microservice are both grouped in the OpenShift UI. Run the following command in the CLI to create the microservice, replacing <UserName> with your GitHub username:
$ oc new-app https://github.com/<UserName>/ostoy \
    --context-dir=microservice \
    --name=ostoy-microservice \
    --labels=app=ostoy
Example output
--> Creating resources with label app=ostoy ...
    imagestream.image.openshift.io "ostoy-microservice" created
    buildconfig.build.openshift.io "ostoy-microservice" created
    deployment.apps "ostoy-microservice" created
    service "ostoy-microservice" created
--> Success
    Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/ostoy-microservice'
    Run 'oc status' to view your app.
Check the status of the microservice
Before moving onto the next step we should be sure that the microservice was created and is running correctly by running the following command:
$ oc status
Example output
In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443

svc/ostoy-microservice - 172.30.47.74:8080
  dc/ostoy-microservice deploys istag/ostoy-microservice:latest <-
    bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8
    deployment #1 deployed 34 seconds ago - 1 pod
Wait until you see that it was successfully deployed. You can also check this through the web UI.
Deploy the front end UI
The application has been designed to rely on several environment variables to define external settings. Attach the previously created Secret and ConfigMap afterward, along with creating a PersistentVolume. Enter the following into the CLI:
$ oc new-app https://github.com/<UserName>/ostoy \
    --env=MICROSERVICE_NAME=OSTOY_MICROSERVICE
Example output
--> Creating resources ...
    imagestream.image.openshift.io "ostoy" created
    buildconfig.build.openshift.io "ostoy" created
    deployment.apps "ostoy" created
    service "ostoy" created
--> Success
    Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/ostoy'
    Run 'oc status' to view your app.
Update the Deployment
Update the deployment to use a "Recreate" deployment strategy (as opposed to the default of RollingUpdate) for consistent deployments with persistent volumes. The reasoning here is that the PV is backed by EBS and as such only supports the RWO access mode. If the deployment is updated without all existing pods being killed, it might not be able to schedule a new pod and create a PVC for the PV because the PV is still bound to the existing pod. If you will be using EFS, you do not have to change this.
$ oc patch deployment ostoy --type=json -p \
    '[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "remove", "path": "/spec/strategy/rollingUpdate"}]'
Set a Liveness probe
Create a Liveness Probe on the Deployment to ensure the pod is restarted if something isn’t healthy within the application. Enter the following into the CLI:
$ oc set probe deployment ostoy --liveness --get-url=http://:8080/health
Attach Secret, ConfigMap, and PersistentVolume to Deployment
Run the following commands to attach your Secret, ConfigMap, and PersistentVolume:
Attach Secret
$ oc set volume deployment ostoy --add \
    --secret-name=ostoy-secret \
    --mount-path=/var/secret
Attach ConfigMap
$ oc set volume deployment ostoy --add \
    --configmap-name=ostoy-config \
    -m /var/config
Create and attach PersistentVolume
$ oc set volume deployment ostoy --add \
    --type=pvc \
    --claim-size=1G \
    -m /var/demo_files
Expose the UI application as an OpenShift Route
Run the following command to deploy this as an HTTPS application that uses the included TLS wildcard certificates:
$ oc create route edge --service=ostoy --insecure-policy=Redirect
Browse to your application with the following methods:
Running the following command opens a web browser with your OSToy application:
$ python -m webbrowser "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"
You can get the route for the application and copy and paste the route into your browser by running the following command:
$ oc get route
18.12. Tutorial: Using Source-to-Image (S2I) webhooks for automated deployment
Automatically trigger a build and deploy anytime you change the source code by using a webhook. For more information about this process, see Triggering Builds.
Procedure
To get the GitHub webhook trigger secret, in your terminal, run the following command:
$ oc get bc/ostoy-microservice -o=jsonpath='{.spec.triggers..github.secret}'
Example output
`o_3x9M1qoI2Wj_cz1WiK`
ImportantYou need to use this secret in a later step in this process.
To get the GitHub webhook trigger URL from the OSToy’s buildconfig, run the following command:
$ oc describe bc/ostoy-microservice
Example output
[...]
Webhook GitHub:
    URL: https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy/webhooks/<secret>/github
[...]
In the GitHub webhook URL, replace the <secret> text with the secret you retrieved. Your URL will resemble the following example output:
Example output
https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/o_3x9M1qoI2Wj_czR1WiK/github
Set up the webhook URL in your GitHub repository.
- In your repository, click Settings > Webhooks > Add webhook.
- Paste the GitHub webhook URL with the Secret included into the "Payload URL" field.
- Change the "Content type" to application/json.
- Click the Add webhook button.
You should see a message from GitHub stating that your webhook was successfully configured. Now, whenever you push a change to your GitHub repository, a new build automatically starts, and upon a successful build, a new deployment starts.
Now, make a change in the source code. Any changes automatically trigger a build and deployment. In this example, the colors that denote the status of your OSToy app are selected randomly. To test the configuration, change the box to only display grayscale.
- Go to the source code in your repository https://github.com/<username>/ostoy/blob/master/microservice/app.js.
- Edit the file.
- Comment out line 8 (containing let randomColor = getRandomColor();). Uncomment line 9 (containing let randomColor = getRandomGrayScaleColor();).

7   app.get('/', function(request, response) {
8   //let randomColor = getRandomColor(); // <-- comment this
9   let randomColor = getRandomGrayScaleColor(); // <-- uncomment this
10
11  response.writeHead(200, {'Content-Type': 'application/json'});
- Enter a message for the update, such as "changed box to grayscale colors".
- Click Commit at the bottom to commit the changes to the main branch.
- In your cluster's web UI, click Builds > Builds to determine the status of the build. After this build is completed, the deployment begins. You can also check the status by running oc status in your terminal.
- After the deployment has finished, return to the OSToy application in your browser. Access the Networking menu item on the left. The box color is now limited to grayscale colors only.
18.13. Tutorial: Integrating with AWS Services
Although the OSToy application functions independently, many real-world applications require external services such as databases, object stores, or messaging services.
Objectives
- Learn how to integrate the OSToy application with other Amazon Web Services (AWS) services, specifically AWS S3 Storage. By the end of this section, the application will securely create and read objects from AWS S3 Storage.
- Use the Amazon Controller for Kubernetes (ACK) to create the necessary services for our application directly from Kubernetes.
- Use Identity and Access Management (IAM) roles for service accounts to manage access and authentication.
- Use OSToy to create a basic text file and save it in an S3 bucket.
- Confirm that the file was successfully added and can be read from the bucket.
18.13.1. Amazon Controller for Kubernetes (ACK)
Use the ACK to create and use AWS services directly from Kubernetes. You can deploy your applications directly in the Kubernetes framework by using a familiar structure to declaratively define and create AWS services such as S3 buckets or Relational Database Service (RDS) databases.
With ACK, you can create an S3 bucket, integrate it with the OSToy application, upload a file to it, and view the file in your application.
18.13.2. IAM roles for service accounts
You can use IAM roles for service accounts to assign IAM roles directly to Kubernetes service accounts. You can use it to grant the ACK controller credentials to deploy services in your AWS account. Use IAM roles for service accounts to automate the management and rotation of temporary credentials.
Pods receive a valid OpenID Connect (OIDC) JSON web token (JWT) and pass it to the AWS STS AssumeRoleWithWebIdentity API operation to receive IAM temporary role credentials. The process relies on the EKS pod identity mutating webhook which modifies pods that require AWS IAM access.
IAM roles for service accounts adheres to the following best practices:
- Principle of least privilege: You can create IAM permissions for AWS roles that only allow limited access. These permissions are limited to the service account associated with the role and only the pods that use that service account have access.
- Credential isolation: A pod can only retrieve credentials for the IAM role associated with the service account that the pod is using.
- Auditing: All AWS resource access can be viewed in CloudTrail.
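For illustration only, the webhook's effect on a pod looks roughly like the excerpt below: the AWS SDK in the pod picks up the injected role ARN and the projected service account token and exchanges them for temporary credentials. The exact values vary by cluster, but the two environment variable names match what you verify later in this tutorial:
env:
- name: AWS_ROLE_ARN                     # IAM role to assume
  value: arn:aws:iam::000000000000:role/ostoy-sa-role
- name: AWS_WEB_IDENTITY_TOKEN_FILE      # projected OIDC token used with AssumeRoleWithWebIdentity
  value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumes:
- name: aws-iam-token
  projected:
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com
        path: token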
Additional resources
18.13.3. Installing the ACK controller
Install the ACK controller to create and delete buckets in the S3 service by using a Kubernetes custom resource for the bucket. Installing the controller will also create the required namespace and service account.
We will use an Operator to make it easy. The Operator installation will also create an ack-system namespace and a service account ack-s3-controller for you.
- Log in to the cluster console.
- On the left menu, click Operators, then OperatorHub.
- In the filter box, enter "S3" and select AWS Controller for Kubernetes - Amazon S3.
- If a pop-up about community operators appears, click Continue.
- Click Install.
- Select All namespaces on the cluster under "Installation mode".
- Select ack-system under "Installed Namespace".
- Select Manual under "Update approval".
ImportantMake sure Manual Mode is selected so changes to the service account are not overwritten by an automatic operator update.
- Click Install.
The settings should look like the below image.
- Click Approve.
- The installation begins but will not complete until you have created an IAM role and policy for the ACK controller.
18.13.4. Creating an IAM role and policy for the ACK controller
Run one of the following scripts to create the AWS IAM role for the ACK controller and assign the S3 policy:
- Automatically download the setup-s3-ack-controller.sh script, which automates the process for you.
Run the following script in your command line interface (CLI):
$ curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/setup-s3-ack-controller.sh | bash
- When the script completes, it restarts the deployment which updates the service controller pods with the IAM roles for service accounts environment variables.
Confirm that the environment variables are set by running the following command:
$ oc describe pod ack-s3-controller -n ack-system | grep "^\s*AWS_"
Example output
AWS_ROLE_ARN:                 arn:aws:iam::000000000000:role/ack-s3-controller
AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Confirm successful setup of the ACK controller in the web console by clicking Operators and then Installed operators.
If you do not see a successful Operator installation and the environment variables, manually restart the deployment by running the following command:
$ oc rollout restart deployment ack-s3-controller -n ack-system
18.13.5. Setting up access for the application
You can create an AWS IAM role and service account so that OSToy can read and write objects to an S3 bucket.
Create a new unique project for OSToy by running the following command:
$ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
Save the name of the namespace and project to an environment variable by running the following command:
$ export OSTOY_NAMESPACE=$(oc config view --minify -o 'jsonpath={..namespace}')
18.13.6. Creating an AWS IAM role
Get your AWS account ID by running the following command:
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
Get the OIDC provider by running the following command, replacing <cluster-name> with the name of your cluster:
$ export OIDC_PROVIDER=$(rosa describe cluster -c <cluster-name> -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4)
Create the trust policy file by running the following command:
$ cat <<EOF > ./ostoy-sa-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${OSTOY_NAMESPACE}:ostoy-sa"
        }
      }
    }
  ]
}
EOF
Create the AWS IAM role to be used with your service account by running the following command:
$ aws iam create-role --role-name "ostoy-sa-role" --assume-role-policy-document file://ostoy-sa-trust.json
18.13.7. Attaching the S3 policy to the IAM role
Get the S3 full access policy ARN by running the following command:
$ export POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3FullAccess`].Arn' --output text)
Attach the policy to the AWS IAM role by running the following command:
$ aws iam attach-role-policy --role-name "ostoy-sa-role" --policy-arn "${POLICY_ARN}"
18.13.8. Creating the service account for your pod
Get the ARN for the AWS IAM role we created so that it will be included as an annotation when you create your service account by running the following command:
$ export APP_IAM_ROLE_ARN=$(aws iam get-role --role-name=ostoy-sa-role --query Role.Arn --output text)
Create the service account by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ostoy-sa
  namespace: ${OSTOY_NAMESPACE}
  annotations:
    eks.amazonaws.com/role-arn: "$APP_IAM_ROLE_ARN"
EOF
ImportantDo not change the name of the service account from "ostoy-sa" or you will have to change the trust relationship for the AWS IAM role.
Grant the service account the restricted role by running the following command:
$ oc adm policy add-scc-to-user restricted system:serviceaccount:${OSTOY_NAMESPACE}:ostoy-sa
Confirm that the annotation was successful by running the following command:
$ oc describe serviceaccount ostoy-sa -n ${OSTOY_NAMESPACE}
Example output
Name:                ostoy-sa
Namespace:           ostoy
Labels:              <none>
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::000000000000:role/ostoy-sa-role
Image pull secrets:  ostoy-sa-dockercfg-b2l94
Mountable secrets:   ostoy-sa-dockercfg-b2l94
Tokens:              ostoy-sa-token-jlc6d
Events:              <none>
18.13.9. Creating an S3 bucket
Create an S3 bucket using a manifest file by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: ${OSTOY_NAMESPACE}-bucket
  namespace: ${OSTOY_NAMESPACE}
spec:
  name: ${OSTOY_NAMESPACE}-bucket
EOF
Important: The OSToy application expects to find a bucket named <namespace>-bucket. If you use anything other than the namespace of your OSToy project, this feature will not work. For example, if our project is "ostoy", the value for name must be ostoy-bucket.
Confirm the bucket was created by running the following command:
$ aws s3 ls | grep ${OSTOY_NAMESPACE}-bucket
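You can also check the Bucket custom resource inside the cluster. This sketch assumes the ACK S3 controller registered the resource as buckets.s3.services.k8s.aws:
$ oc get buckets.s3.services.k8s.aws -n ${OSTOY_NAMESPACE}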
18.13.10. Redeploying the OSToy app with the new service account
- Run your pod with the service account you created.
Deploy the microservice by running the following command:
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
Deploy the ostoy-frontend by running the following command:
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml
Patch the ostoy-frontend deployment by running the following command:
$ oc patch deploy ostoy-frontend -n ${OSTOY_NAMESPACE} --type=merge --patch '{"spec": {"template": {"spec":{"serviceAccount":"ostoy-sa"}}}}'
Example output
spec:
  # Uncomment to use with ACK portion of the workshop
  # If you chose a different service account name please replace it.
  serviceAccount: ostoy-sa
  containers:
  - name: ostoy-frontend
    image: quay.io/ostoylab/ostoy-frontend:1.6.0
    imagePullPolicy: IfNotPresent
[...]
- Wait for the pod to update.
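One way to wait for the rollout from the CLI, and to confirm that the patched pod template now references the service account, is:
$ oc rollout status deployment ostoy-frontend -n ${OSTOY_NAMESPACE}
$ oc get deployment ostoy-frontend -n ${OSTOY_NAMESPACE} -o jsonpath='{.spec.template.spec.serviceAccount}{"\n"}'
The second command should print ostoy-sa.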
18.13.11. Confirming the environment variables
Use the following command to describe the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN environment variables exist for our application:
$ oc describe pod ostoy-frontend -n ${OSTOY_NAMESPACE} | grep "^\s*AWS_"
Example output
AWS_ROLE_ARN:                 arn:aws:iam::000000000000:role/ostoy-sa-role
AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
18.13.12. Viewing the bucket contents through OSToy
Use your app to view the contents of your S3 bucket.
Get the route for the newly deployed application by running the following command:
$ oc get route ostoy-route -n ${OSTOY_NAMESPACE} -o jsonpath='{.spec.host}{"\n"}'
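If you prefer to assemble the full address in one step, the following prints it with the required http:// prefix so you can paste it directly into the browser:
$ echo "http://$(oc get route ostoy-route -n ${OSTOY_NAMESPACE} -o jsonpath='{.spec.host}')"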
Open a new browser tab and enter the route obtained in the previous step.
Important: Be sure to use http:// and not https://.
- Click ACK S3 in the left menu in OSToy.
Because it is a new bucket, the bucket should be empty.
18.13.13. Creating files in your S3 bucket
Use OSToy to create a file and upload it to the S3 bucket. Although S3 can accept any kind of file, this tutorial uses text files so that the contents can easily be rendered in the browser.
- Click ACK S3 in the left menu in OSToy.
- Scroll down to Upload a text file to S3.
- Enter a file name for your file.
- Enter content for your file.
Click Create file.
- Scroll to the top section for existing files and confirm that the file you just created is there.
Click the file name to view the file.
Confirm with the AWS CLI by running the following command to list the contents of your bucket:
$ aws s3 ls s3://${OSTOY_NAMESPACE}-bucket
Example output
$ aws s3 ls s3://ostoy-bucket 2023-05-04 22:20:51 51 OSToy.txt
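To read the file back through the AWS CLI, you can copy the object to standard output. This example assumes the file name OSToy.txt from the example output above:
$ aws s3 cp s3://${OSTOY_NAMESPACE}-bucket/OSToy.txt -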
Legal Notice
Copyright © 2024 Red Hat, Inc.
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.