Tutorials
Red Hat OpenShift Service on AWS tutorials
Chapter 1. Tutorials overview
Use the step-by-step tutorials from Red Hat experts to get the most out of your Managed OpenShift cluster.
This content is authored by Red Hat experts but has not yet been tested on every supported configuration.
Chapter 2. Tutorial: Verifying permissions for a Red Hat OpenShift Service on AWS classic architecture STS deployment
To proceed with the deployment of a Red Hat OpenShift Service on AWS classic architecture cluster, your AWS account must support the required roles and permissions, and AWS Service Control Policies (SCPs) must not block the API calls made by the installer or operator roles.
Details about the IAM resources required for an STS-enabled installation of Red Hat OpenShift Service on AWS classic architecture can be found here: About IAM resources for Red Hat OpenShift Service on AWS classic architecture clusters that use STS.
This guide is validated for Red Hat OpenShift Service on AWS classic architecture v4.11.X.
2.1. Prerequisites
2.2. Verifying Red Hat OpenShift Service on AWS classic architecture permissions
To verify the permissions required for Red Hat OpenShift Service on AWS classic architecture, you can run the script included in the following section. The script does not create any AWS resources.
The script uses the rosa, aws, and jq CLI commands to create files in the working directory that are used to verify permissions in the account connected to the current AWS configuration.
The AWS Policy Simulator is used to verify the permissions of each role policy against the API calls extracted by jq; the results are then stored in a text file appended with .results.
This script is designed to verify the permissions for the current account and region.
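As an illustration of the underlying mechanism, a minimal sketch that simulates a couple of representative API actions against the installer role (assuming the default ManagedOpenShift role prefix and illustrative action names) might look like:
$ ROLE_ARN=$(aws iam get-role --role-name ManagedOpenShift-Installer-Role --query 'Role.Arn' --output text)
$ aws iam simulate-principal-policy \
    --policy-source-arn ${ROLE_ARN} \
    --action-names ec2:DescribeInstances iam:GetRole \
    --query 'EvaluationResults[*].[EvalActionName,EvalDecision]' \
    --output text > ManagedOpenShift-Installer-Role.results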
2.3. Usage Instructions
To use the script, run the following commands in a bash terminal (the -p option defines a prefix for the roles).
After the script completes, review each results file to ensure that none of the required API calls are blocked:
$ for file in $(ls *.results); do echo $file; cat $file; done
Note: If any actions are blocked, review the error provided by AWS and consult with your administrator to determine whether SCPs are blocking the required API calls.
Chapter 3. Tutorial: Deploying Red Hat OpenShift Service on AWS classic architecture with a Custom DNS Resolver
A custom DHCP option set enables you to customize your VPC with your own DNS server, domain name, and more. Red Hat OpenShift Service on AWS classic architecture clusters support using custom DHCP option sets. By default, Red Hat OpenShift Service on AWS classic architecture clusters require setting the "domain name servers" option to AmazonProvidedDNS to ensure successful cluster creation and operation. Customers who want to use custom DNS servers for DNS resolution must do additional configuration to ensure successful Red Hat OpenShift Service on AWS classic architecture cluster creation and operation.
In this tutorial, we will configure our DNS server to forward DNS lookups for specific DNS zones (further detailed below) to an Amazon Route 53 Inbound Resolver.
This tutorial uses the open-source BIND DNS server (named) to demonstrate the configuration necessary to forward DNS lookups to an Amazon Route 53 Inbound Resolver located in the VPC you plan to deploy a Red Hat OpenShift Service on AWS classic architecture cluster into. Refer to the documentation of your preferred DNS server for how to configure zone forwarding.
3.1. Prerequisites
- ROSA CLI (rosa)
- AWS CLI (aws)
- A manually created AWS VPC
- A DHCP option set configured to point to a custom DNS server and set as the default for your VPC
3.2. Setting up your environment
Configure the following environment variables:
$ export VPC_ID=<vpc_ID>
$ export REGION=<region>
$ export VPC_CIDR=<vpc_CIDR>
Ensure all fields output correctly before moving to the next section:
$ echo "VPC ID: ${VPC_ID}, VPC CIDR Range: ${VPC_CIDR}, Region: ${REGION}"
3.3. Create an Amazon Route 53 Inbound Resolver
Use the following procedure to deploy an Amazon Route 53 Inbound Resolver in the VPC we plan to deploy the cluster into.
In this example, we deploy the Amazon Route 53 Inbound Resolver into the same VPC the cluster will use. If you want to deploy it into a separate VPC, you must manually associate the private hosted zone(s) detailed below once cluster creation is started. You cannot associate the zone before the cluster creation process begins. Failure to associate the private hosted zone during the cluster creation process will result in cluster creation failures.
Create a security group and allow access to ports 53/tcp and 53/udp from the VPC:
$ SG_ID=$(aws ec2 create-security-group --group-name rosa-inbound-resolver --description "Security group for ROSA inbound resolver" --vpc-id ${VPC_ID} --region ${REGION} --output text)
$ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol tcp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
$ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol udp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
Create an Amazon Route 53 Inbound Resolver in your VPC:
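A minimal sketch of this step, assuming an endpoint named rosa-inbound-resolver attached to every subnet in the VPC (the name and request ID are illustrative), might look like:
$ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
    --name rosa-inbound-resolver \
    --creator-request-id rosa-inbound-resolver-$(date +%s) \
    --security-group-ids ${SG_ID} \
    --direction INBOUND \
    --ip-addresses $(aws ec2 describe-subnets --filters Name=vpc-id,Values=${VPC_ID} \
        --region ${REGION} --query 'Subnets[*].SubnetId' --output text \
        | xargs -n1 printf 'SubnetId=%s ') \
    --region ${REGION} \
    --query 'ResolverEndpoint.Id' \
    --output text)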
Note: The above command attaches Amazon Route 53 Inbound Resolver endpoints to all subnets in the provided VPC using dynamically allocated IP addresses. If you prefer to manually specify the subnets and/or IP addresses, run the following command instead:
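A minimal sketch of the manual variant, using the placeholders described in the callout below, might look like:
$ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
    --name rosa-inbound-resolver \
    --creator-request-id rosa-inbound-resolver-$(date +%s) \
    --security-group-ids ${SG_ID} \
    --direction INBOUND \
    --ip-addresses SubnetId=<subnet_ID>,Ip=<endpoint_IP> SubnetId=<subnet_ID>,Ip=<endpoint_IP> SubnetId=<subnet_ID>,Ip=<endpoint_IP> \
    --region ${REGION} \
    --query 'ResolverEndpoint.Id' \
    --output text)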
Replace <subnet_ID> with the subnet IDs and <endpoint_IP> with the static IP addresses you want inbound resolver endpoints added to.
Get the IP addresses of your inbound resolver endpoints to configure in your DNS server configuration:
$ aws route53resolver list-resolver-endpoint-ip-addresses \
    --resolver-endpoint-id ${RESOLVER_ID} \
    --region=${REGION} \
    --query 'IpAddresses[*].Ip'
Example output
[
    "10.0.45.253",
    "10.0.23.131",
    "10.0.148.159"
]
3.4. Configure your DNS server
Use the following procedure to configure your DNS server to forward the necessary private hosted zones to your Amazon Route 53 Inbound Resolver.
3.4.1. Red Hat OpenShift Service on AWS classic architecture
Red Hat OpenShift Service on AWS classic architecture clusters require you to configure DNS forwarding for one private hosted zone:
- <domain-prefix>.<unique-ID>.p1.openshiftapps.com
This Amazon Route 53 private hosted zone is created during cluster creation. The domain-prefix is a customer-specified value, but the unique-ID is randomly generated during cluster creation and cannot be preselected. As such, you must wait for the cluster creation process to begin before configuring forwarding for the p1.openshiftapps.com private hosted zone.
- Create your cluster.
Once your cluster has begun the creation process, locate the newly created private hosted zone:
$ aws route53 list-hosted-zones-by-vpc \
    --vpc-id ${VPC_ID} \
    --vpc-region ${REGION} \
    --query 'HostedZoneSummaries[*].Name' \
    --output table
Example output
----------------------------------------------
|            ListHostedZonesByVPC            |
+--------------------------------------------+
|  domain-prefix.agls.p1.openshiftapps.com.  |
+--------------------------------------------+
Note: It may take a few minutes for the cluster creation process to create the private hosted zone in Route 53. If you do not see a p1.openshiftapps.com domain, wait a few minutes and run the command again.
Once you know the unique ID of the cluster domain, configure your DNS server to forward all DNS requests for <domain-prefix>.<unique-ID>.p1.openshiftapps.com to your Amazon Route 53 Inbound Resolver endpoints. For BIND DNS servers, edit your /etc/named.conf file in your favorite text editor and add a new zone using the below example:
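A minimal sketch of the forward zone, assuming the three resolver endpoint IP addresses from the earlier example output (substitute your own cluster domain and endpoint IPs):
zone "domain-prefix.unique-ID.p1.openshiftapps.com" {
    type forward;
    forward only;
    forwarders { 10.0.45.253; 10.0.23.131; 10.0.148.159; };
};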
Chapter 4. Tutorial: Using AWS WAF and Amazon CloudFront to protect Red Hat OpenShift Service on AWS classic architecture workloads
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.
You can use Amazon CloudFront to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS classic architecture workloads. Using an external solution protects Red Hat OpenShift Service on AWS classic architecture resources from the denial of service that could result from handling the WAF within the cluster.
WAFv1 (WAF Classic) is no longer supported. Use WAFv2.
4.1. Prerequisites
- A Red Hat OpenShift Service on AWS classic architecture cluster.
- You have access to the OpenShift CLI (oc).
- You have access to the AWS CLI (aws).
4.1.1. Environment setup
Prepare the environment variables:
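A minimal sketch, assuming apps.example.com as the custom domain and deriving the cluster name and region from the cluster infrastructure object, might be:
$ export DOMAIN=apps.example.com
$ export CLUSTER=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export AWS_REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}')
$ echo "Cluster: ${CLUSTER}, Region: ${AWS_REGION}, Domain: ${DOMAIN}"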
Replace the domain value with the custom domain you want to use for the IngressController.
Note: The "Cluster" output from the previous command might be the name of your cluster, the internal ID of your cluster, or the cluster’s domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:
$ export CLUSTER=my-custom-value
4.2. Setting up the secondary ingress controller
It is necessary to configure a secondary ingress controller to segment your external WAF-protected traffic from your standard (and default) cluster ingress controller.
Prerequisites
A publicly trusted SAN or wildcard certificate for your custom domain, such as CN=*.apps.example.com.
Important: Amazon CloudFront uses HTTPS to communicate with your cluster’s secondary ingress controller. As explained in the Amazon CloudFront documentation, you cannot use a self-signed certificate for HTTPS communication between CloudFront and your cluster. Amazon CloudFront verifies that the certificate was issued by a trusted certificate authority.
Procedure
Create a new TLS secret from a private key and a public certificate, where fullchain.pem is your full wildcard certificate chain (including any intermediaries) and privkey.pem is your wildcard certificate’s private key.
Example
$ oc -n openshift-ingress create secret tls waf-tls --cert=fullchain.pem --key=privkey.pem
Create a new IngressController resource:
Example: waf-ingress-controller.yaml
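A minimal sketch of the manifest, assuming the apps.example.com domain, the waf-tls secret created above, and the route=waf label applied later in this tutorial, might look like:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: cloudfront-waf
  namespace: openshift-ingress-operator
spec:
  domain: apps.example.com
  defaultCertificate:
    name: waf-tls
  endpointPublishingStrategy:
    loadBalancer:
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
  routeSelector:
    matchLabels:
      route: waf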
Apply the IngressController:
Example
$ oc apply -f waf-ingress-controller.yaml
Verify that your IngressController has successfully created an external load balancer:
$ oc -n openshift-ingress get service/router-cloudfront-waf
Example output
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                                      PORT(S)                      AGE
router-cloudfront-waf   LoadBalancer   172.30.16.141   a68a838a7f26440bf8647809b61c4bc8-4225395f488830bd.elb.us-east-1.amazonaws.com   80:30606/TCP,443:31065/TCP   2m19s
4.2.1. Configure the AWS WAF
The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like Red Hat OpenShift Service on AWS classic architecture.
Create an AWS WAF rules file to apply to our web ACL:
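A minimal sketch of the rules file, written to a hypothetical waf-rules.json and enabling the two managed rule groups named below, might be:
$ cat <<EOF > waf-rules.json
[
  {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
      "ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesCommonRuleSet"
      }
    },
    "OverrideAction": { "None": {} },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "AWS-AWSManagedRulesCommonRuleSet"
    }
  },
  {
    "Name": "AWS-AWSManagedRulesSQLiRuleSet",
    "Priority": 1,
    "Statement": {
      "ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesSQLiRuleSet"
      }
    },
    "OverrideAction": { "None": {} },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "AWS-AWSManagedRulesSQLiRuleSet"
    }
  }
]
EOF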
This will enable the Core (Common) and SQL AWS Managed Rule Sets.
Create an AWS WAF Web ACL using the rules we specified above:
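A minimal sketch of this step, assuming the waf-rules.json file from the previous step and the cloudfront-waf name referenced later in the CloudFront settings (CLOUDFRONT-scoped web ACLs must be created in us-east-1), might be:
$ aws wafv2 create-web-acl \
    --name cloudfront-waf \
    --scope CLOUDFRONT \
    --region us-east-1 \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=cloudfront-waf-metrics \
    --rules file://waf-rules.json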
4.3. Configure Amazon CloudFront
Retrieve the newly created custom ingress controller’s NLB hostname:
$ NLB=$(oc -n openshift-ingress get service router-cloudfront-waf \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Import your certificate into AWS Certificate Manager, where cert.pem is your wildcard certificate, fullchain.pem is your wildcard certificate’s chain, and privkey.pem is your wildcard certificate’s private key.
Note: Regardless of the region your cluster is deployed in, you must import this certificate to us-east-1 because Amazon CloudFront is a global AWS service.
Example
$ aws acm import-certificate --certificate file://cert.pem \
    --certificate-chain file://fullchain.pem \
    --private-key file://privkey.pem \
    --region us-east-1
- Log in to the AWS console to create a CloudFront distribution.
Configure the CloudFront distribution by using the following information:
Note: If an option is not specified in the table below, leave it at the default (which may be blank).
Option | Value
Origin domain | Output from the previous command [1]
Name | rosa-waf-ingress [2]
Viewer protocol policy | Redirect HTTP to HTTPS
Allowed HTTP methods | GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cache policy | CachingDisabled
Origin request policy | AllViewer
Web Application Firewall (WAF) | Enable security protections
Use existing WAF configuration | true
Choose a web ACL | cloudfront-waf
Alternate domain name (CNAME) | *.apps.example.com [3]
Custom SSL certificate | Select the certificate you imported from the step above [4]
1. Run echo ${NLB} to get the origin domain.
2. If you have multiple clusters, ensure the origin name is unique.
3. This should match the wildcard domain you used to create the custom ingress controller.
4. This should match the alternate domain name entered above.
Retrieve the Amazon CloudFront Distribution endpoint:
$ aws cloudfront list-distributions --query "DistributionList.Items[?Origins.Items[?DomainName=='${NLB}']].DomainName" --output text
Update the DNS of your custom wildcard domain with a CNAME to the Amazon CloudFront Distribution endpoint from the step above.
Example
*.apps.example.com CNAME d1b2c3d4e5f6g7.cloudfront.net
4.4. Deploy a sample application
Create a new project for your sample application by running the following command:
$ oc new-project hello-world
Deploy a hello world application:
$ oc -n hello-world new-app --image=docker.io/openshift/hello-openshift
Create a route for the application specifying your custom domain name:
Example
$ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls \
    --hostname hello-openshift.${DOMAIN}
Label the route to admit it to your custom ingress controller:
$ oc -n hello-world label route.route.openshift.io/hello-openshift-tls route=waf
4.5. Test the WAF
Test that the app is accessible behind Amazon CloudFront:
Example
$ curl "https://hello-openshift.${DOMAIN}"
Example output
Hello OpenShift!
Test that the WAF denies a bad request:
Example
$ curl -X POST "https://hello-openshift.${DOMAIN}" \
    -F "user='<script><alert>Hello></alert></script>'"
The expected result is a 403 ERROR, which means the AWS WAF is protecting your application.
Chapter 5. Tutorial: Using AWS WAF and AWS ALBs to protect Red Hat OpenShift Service on AWS classic architecture workloads
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.
You can use an AWS Application Load Balancer (ALB) to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS classic architecture workloads. Using an external solution protects Red Hat OpenShift Service on AWS classic architecture resources from the denial of service that could result from handling the WAF within the cluster.
It is recommended that you use the more flexible CloudFront method unless you absolutely must use an ALB-based solution.
5.1. Prerequisites
- A multiple availability zone (AZ) Red Hat OpenShift Service on AWS classic architecture cluster.
Note: AWS ALBs require at least two public subnets across AZs, per the AWS documentation. For this reason, only multiple-AZ Red Hat OpenShift Service on AWS classic architecture clusters can be used with ALBs.
- You have access to the OpenShift CLI (oc).
- You have access to the AWS CLI (aws).
5.1.1. Environment setup
Prepare the environment variables:
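A minimal sketch, assuming the cluster name, region, AWS account ID, OIDC endpoint, and scratch directory referenced later in this tutorial, might be:
$ export CLUSTER=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export SCRATCH="/tmp/${CLUSTER}/alb-operator"
$ mkdir -p ${SCRATCH}
$ echo "Cluster: ${CLUSTER}, Region: ${REGION}, AWS Account ID: ${AWS_ACCOUNT_ID}, OIDC Endpoint: ${OIDC_ENDPOINT}"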
5.1.2. AWS VPC and subnets
This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below.
Set the below variables to the proper values for your Red Hat OpenShift Service on AWS classic architecture deployment:
$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>)
$ export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>)
1. Replace with the VPC ID of the cluster, for example: export VPC_ID=vpc-04c429b7dbc4680ba.
2. Replace with a space-separated list of the public subnet IDs of the cluster, making sure to preserve the (). For example: export PUBLIC_SUBNET_IDS=(subnet-056fd6861ad332ba2 subnet-08ce3b4ec753fe74c subnet-071aa28228664972f).
3. Replace with a space-separated list of the private subnet IDs of the cluster, making sure to preserve the (). For example: export PRIVATE_SUBNET_IDS=(subnet-0b933d72a8d72c36a subnet-0817eb72070f1d3c2 subnet-0806e64159b66665a).
Add a tag to your cluster’s VPC with the cluster identifier:
$ aws ec2 create-tags --resources ${VPC_ID} \
    --tags Key=kubernetes.io/cluster/${CLUSTER},Value=shared --region ${REGION}
Add a tag to your public subnets:
$ aws ec2 create-tags \
    --resources ${PUBLIC_SUBNET_IDS[@]} \
    --tags Key=kubernetes.io/role/elb,Value='1' \
      Key=kubernetes.io/cluster/${CLUSTER},Value=shared \
    --region ${REGION}
Add a tag to your private subnets:
$ aws ec2 create-tags \
    --resources ${PRIVATE_SUBNET_IDS[@]} \
    --tags Key=kubernetes.io/role/internal-elb,Value='1' \
      Key=kubernetes.io/cluster/${CLUSTER},Value=shared \
    --region ${REGION}
5.2. Deploy the AWS Load Balancer Operator
The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a Red Hat OpenShift Service on AWS classic architecture cluster. To deploy ALBs in Red Hat OpenShift Service on AWS classic architecture, we first need to deploy the AWS Load Balancer Operator.
Create a new project to deploy the AWS Load Balancer Operator into by running the following command:
$ oc new-project aws-load-balancer-operator
Create an AWS IAM policy for the AWS Load Balancer Controller if one does not already exist by running the following command:
Note: The policy is sourced from the upstream AWS Load Balancer Controller policy. This is required by the operator to function.
$ POLICY_ARN=$(aws iam list-policies --query \
    "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
    --output text)
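If the lookup returns nothing, a minimal sketch of creating the policy from the upstream controller policy document (the URL and branch are assumptions; check the aws-load-balancer-controller repository for the current path) might be:
$ if [[ -z "${POLICY_ARN}" ]]; then
    wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
      https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
    POLICY_ARN=$(aws iam create-policy --policy-name aws-load-balancer-operator-policy \
      --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json" \
      --query Policy.Arn --output text)
  fi
$ echo ${POLICY_ARN}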
Create an AWS IAM trust policy for AWS Load Balancer Operator:
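A minimal sketch, assuming the OIDC_ENDPOINT and AWS_ACCOUNT_ID variables from the environment setup and the service account names used by the operator and controller, might be:
$ cat <<EOF > "${SCRATCH}/trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": [
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager",
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
          ]
        }
      }
    }
  ]
}
EOF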
Create an AWS IAM role for the AWS Load Balancer Operator:
$ ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER}-alb-operator" \
    --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
    --query Role.Arn --output text)
Attach the AWS Load Balancer Operator policy to the IAM role we created previously by running the following command:
$ aws iam attach-role-policy --role-name "${CLUSTER}-alb-operator" \
    --policy-arn ${POLICY_ARN}
Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role:
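A minimal sketch, assuming the operator reads a secret named aws-load-balancer-operator in its own namespace, might be:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
stringData:
  credentials: |
    [default]
    role_arn = ${ROLE_ARN}
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF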
Install the AWS Load Balancer Operator:
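A minimal sketch of the OperatorGroup and Subscription, assuming the stable-v1 channel (the channel name depends on the operator version available in your catalog), might be:
$ cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  targetNamespaces:
    - aws-load-balancer-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF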
Deploy an instance of the AWS Load Balancer Controller using the operator:
Note: If you get an error here, wait a minute and try again; it means the Operator has not completed installing yet.
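A minimal sketch of the controller instance; the resource must be named cluster, and the exact apiVersion and fields depend on the operator version installed:
$ cat <<EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  subnetTagging: Auto
EOF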
Check that the operator and controller pods are both running:
$ oc -n aws-load-balancer-operator get pods
You should see the following; if not, wait a moment and retry:
NAME                                                              READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
5.3. Deploy a sample application
Create a new project for our sample application:
$ oc new-project hello-world
Deploy a hello world application:
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Convert the pre-created service resource to a NodePort service type:
$ oc -n hello-world patch service hello-openshift -p '{"spec":{"type":"NodePort"}}'
Deploy an AWS ALB using the AWS Load Balancer Operator:
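A minimal sketch of the Ingress resource, assuming the hello-openshift service listens on port 8080 and using the hello-openshift-alb name referenced in the next step, might be:
$ cat <<EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-alb
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-openshift
                port:
                  number: 8080
EOF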
Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
Note: AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
5.3.1. Configure the AWS WAF
The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like Red Hat OpenShift Service on AWS classic architecture.
Create an AWS WAF rules file to apply to our web ACL:
This will enable the Core (Common) and SQL AWS Managed Rule Sets.
Create an AWS WAF Web ACL using the rules we specified above:
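A minimal sketch of this step, assuming a waf-rules.json file like the one sketched in the previous chapter and an illustrative web ACL name, capturing the ARN for the annotation step below:
$ WAF_ARN=$(aws wafv2 create-web-acl \
    --name ${CLUSTER}-waf \
    --scope REGIONAL \
    --region ${REGION} \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=${CLUSTER}-waf-metrics \
    --rules file://waf-rules.json \
    --query 'Summary.ARN' \
    --output text)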
Annotate the Ingress resource with the AWS WAF Web ACL ARN:
$ oc annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb \
    alb.ingress.kubernetes.io/wafv2-acl-arn=${WAF_ARN}
Wait 10 seconds for the rules to propagate and test that the app still works:
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
Test that the WAF denies a bad request:
$ curl -X POST "http://${INGRESS}" \
    -F "user='<script><alert>Hello></alert></script>'"
Note: Activation of the AWS WAF integration can sometimes take several minutes. If you do not receive a 403 Forbidden error, please wait a few seconds and try again.
The expected result is a 403 Forbidden error, which means the AWS WAF is protecting your application.
Chapter 6. Tutorial: Deploying OpenShift API for Data Protection on a Red Hat OpenShift Service on AWS classic architecture cluster
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Prerequisites
Environment
Prepare the environment variables:
Note: Change the cluster name to match your Red Hat OpenShift Service on AWS classic architecture cluster and ensure you are logged in to the cluster as an administrator. Ensure all fields are output correctly before moving on.
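A minimal sketch, assuming a cluster named my-cluster and deriving the other values with the rosa, oc, and aws CLIs, might be:
$ export CLUSTER_NAME=my-cluster
$ export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
$ mkdir -p ${SCRATCH}
$ echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"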
6.1. Prepare AWS Account
Create an IAM Policy to allow for S3 Access:
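A minimal sketch that grants the S3 and EBS snapshot permissions Velero needs (the policy name is illustrative; the authoritative action list is in the OADP documentation) might be:
$ cat <<EOF > ${SCRATCH}/policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:PutBucketTagging",
        "s3:GetBucketTagging",
        "s3:PutEncryptionConfiguration",
        "s3:GetEncryptionConfiguration",
        "s3:PutLifecycleConfiguration",
        "s3:GetLifecycleConfiguration",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "ec2:DescribeSnapshots",
        "ec2:DescribeVolumes",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    }
  ]
}
EOF
$ POLICY_ARN=$(aws iam create-policy --policy-name "RosaOadp" \
    --policy-document file://${SCRATCH}/policy.json \
    --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} \
    --query Policy.Arn --output text)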
Create an IAM Role trust policy for the cluster:
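A minimal sketch of the trust policy and role creation, assuming the OADP service accounts in the openshift-adp namespace, might be:
$ cat <<EOF > ${SCRATCH}/trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": [
            "system:serviceaccount:openshift-adp:openshift-adp-controller-manager",
            "system:serviceaccount:openshift-adp:velero"
          ]
        }
      }
    }
  ]
}
EOF
$ ROLE_ARN=$(aws iam create-role --role-name "${ROLE_NAME}" \
    --assume-role-policy-document file://${SCRATCH}/trust-policy.json \
    --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} \
    --query Role.Arn --output text)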
Attach the IAM Policy to the IAM Role:
$ aws iam attach-role-policy --role-name "${ROLE_NAME}" \
    --policy-arn ${POLICY_ARN}
6.2. Deploy OADP on the cluster
Create a namespace for OADP:
$ oc create namespace openshift-adp
Create a credentials secret:
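A minimal sketch, assuming the cloud-credentials secret name expected by the AWS plugin and the role ARN captured above, might be:
$ cat <<EOF > ${SCRATCH}/credentials
[default]
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
region = <aws_region>
EOF
$ oc -n openshift-adp create secret generic cloud-credentials \
    --from-file=credentials=${SCRATCH}/credentials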
Replace <aws_region> with the AWS region to use for the Security Token Service (STS) endpoint.
Deploy the OADP Operator:
Note: There is currently an issue with version 1.1 of the Operator with backups that have a PartiallyFailed status. This does not seem to affect the backup and restore process, but it should be noted that there are issues with it.
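A minimal sketch of the Operator installation, assuming the stable-1.2 channel (pick the channel that matches the OADP version you want) and the oadp-operator subscription name removed later in the cleanup section, might be:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oadp
  namespace: openshift-adp
spec:
  targetNamespaces:
    - openshift-adp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.2
  installPlanApproval: Automatic
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Wait for the Operator to be ready: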
$ watch oc -n openshift-adp get pods
Example output
NAME                                                READY   STATUS    RESTARTS   AGE
openshift-adp-controller-manager-546684844f-qqjhn   1/1     Running   0          22s
Create Cloud Storage:
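A minimal sketch of the CloudStorage resource, assuming the ${CLUSTER_NAME}-oadp name used by the cleanup steps later in this tutorial, might be:
$ cat <<EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: CloudStorage
metadata:
  name: ${CLUSTER_NAME}-oadp
  namespace: openshift-adp
spec:
  creationSecret:
    key: credentials
    name: cloud-credentials
  enableSharedConfig: true
  name: ${CLUSTER_NAME}-oadp
  provider: aws
  region: ${REGION}
EOF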
Check your application’s default storage class:
$ oc get pvc -n <namespace>
Replace <namespace> with your application’s namespace.
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
applog   Bound    pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8   1Gi        RWO            gp3-csi        4d19h
mysql    Bound    pvc-16b8e009-a20a-4379-accc-bc81fedd0621   1Gi        RWO            gp3-csi        4d19h
$ oc get storageclass
Example output
NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                 kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   4d21h
gp2-csi             ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3                 ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3-csi (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
Using either gp3-csi, gp2-csi, gp3, or gp2 will work. If the applications being backed up are all using PVs with CSI, include the CSI plugin in the OADP DPA configuration.
CSI only: Deploy a Data Protection Application:
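A minimal sketch of a CSI-only DataProtectionApplication, assuming the ${CLUSTER_NAME}-dpa name used in the cleanup section, the CloudStorage resource created above, and the STS restrictions listed after this procedure, might be:
$ cat <<EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ${CLUSTER_NAME}-dpa
  namespace: openshift-adp
spec:
  backupImages: false
  backupLocations:
    - bucket:
        cloudStorageRef:
          name: ${CLUSTER_NAME}-oadp
        credential:
          key: credentials
          name: cloud-credentials
        default: true
        prefix: velero
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi
    restic:
      enable: false
EOF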
Note: If you run this command for CSI volumes, you can skip the next step.
Non-CSI volumes: Deploy a Data Protection Application:
- In OADP 1.1.x Red Hat OpenShift Service on AWS classic architecture STS environments, the container image backup and restore (spec.backupImages) value must be set to false as it is not supported.
- The Restic feature (restic.enable=false) is disabled and not supported in Red Hat OpenShift Service on AWS classic architecture STS environments.
- The DataMover feature (dataMover.enable=false) is disabled and not supported in Red Hat OpenShift Service on AWS classic architecture STS environments.
6.3. Perform a backup
The following sample hello-world application has no attached persistent volumes. Either DPA configuration will work.
Create a workload to back up:
$ oc create namespace hello-world
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Expose the route:
$ oc expose service/hello-openshift -n hello-world
Check that the application is working:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
Back up the workload:
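A minimal sketch of the Backup resource, using the hello-world name checked in the next step, might be:
$ cat <<EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
    - hello-world
EOF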
Wait until the backup is done:
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
Delete the demo workload:
$ oc delete ns hello-world
Restore from the backup:
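A minimal sketch of the Restore resource, referencing the backup created above, might be:
$ cat <<EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF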
Wait for the Restore to finish:
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
Check that the workload is restored:
$ oc -n hello-world get pods
Example output
NAME                              READY   STATUS    RESTARTS   AGE
hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
- For troubleshooting tips, please refer to the OADP team’s troubleshooting documentation.
- Additional sample applications can be found in the OADP team’s sample applications directory
6.4. Cleanup
Delete the workload:
$ oc delete ns hello-world
Remove the backup and restore resources from the cluster if they are no longer required:
$ oc delete backups.velero.io hello-world
$ oc delete restores.velero.io hello-world
To delete the backup and restore objects and the remote objects in S3:
$ velero backup delete hello-world
$ velero restore delete hello-world
Delete the Data Protection Application:
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
Delete the Cloud Storage:
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
Warning: If this command hangs, you might need to delete the finalizer:
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
Remove the Operator if it is no longer required:
$ oc -n openshift-adp delete subscription oadp-operator
Remove the namespace for the Operator:
$ oc delete ns openshift-adp
Remove the Custom Resource Definitions from the cluster if you no longer wish to have them:
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
$ for CRD in `oc get crds | grep -i oadp | awk '{print $1}'`; do oc delete crd $CRD; done
Delete the AWS S3 Bucket:
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
Detach the Policy from the role:
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" \
    --policy-arn "${POLICY_ARN}"
Delete the role:
$ aws iam delete-role --role-name "${ROLE_NAME}"
Chapter 7. Tutorial: AWS Load Balancer Operator on Red Hat OpenShift Service on AWS classic architecture
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Red Hat OpenShift Service on AWS classic architecture cluster. The controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when implementing Kubernetes Service resources with a type of LoadBalancer.
Compared with the default AWS in-tree load balancer provider, this controller is developed with advanced annotations for both ALBs and NLBs. Some advanced use cases are:
- Using native Kubernetes Ingress objects with ALBs
- Integrating ALBs with the AWS Web Application Firewall (WAF) service
Note: WAFv1 (WAF Classic) is no longer supported. Use WAFv2.
- Specifying custom NLB source IP ranges
- Specifying custom NLB internal IP addresses
The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a Red Hat OpenShift Service on AWS classic architecture cluster.
7.1. Prerequisites
AWS ALBs require a multi-AZ cluster, as well as three public subnets split across three AZs in the same VPC as the cluster. This makes ALBs unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.