Tutorials


Red Hat OpenShift Service on AWS classic architecture 4

Red Hat OpenShift Service on AWS tutorials

Red Hat OpenShift Documentation Team

Abstract

Tutorials on creating your first Red Hat OpenShift Service on AWS (ROSA) cluster.

Chapter 1. Tutorials overview

Use the step-by-step tutorials from Red Hat experts to get the most out of your Managed OpenShift cluster.

Important

This content is authored by Red Hat experts but has not yet been tested on every supported configuration.

To proceed with the deployment of a Red Hat OpenShift Service on AWS classic architecture cluster, your AWS account must support the required roles and permissions, and AWS Service Control Policies (SCPs) must not block the API calls made by the installer or Operator roles.

Details about the IAM resources required for an STS-enabled installation of Red Hat OpenShift Service on AWS classic architecture can be found here: About IAM resources for Red Hat OpenShift Service on AWS classic architecture clusters that use STS.

This guide is validated for Red Hat OpenShift Service on AWS classic architecture v4.11.X.

2.1. Prerequisites

To verify the permissions required for Red Hat OpenShift Service on AWS classic architecture, we can run the script included in the following section without ever creating any AWS resources.

The script uses the rosa, aws, and jq CLI commands to create files in the working directory that will be used to verify permissions in the account connected to the current AWS configuration.

The AWS Policy Simulator is used to verify the permissions of each role policy against the API calls extracted by jq; the results for each policy are stored in a text file with a .results suffix.

This script is designed to verify the permissions for the current account and region.
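
As an optional check (not part of the original procedure), you can confirm which account and region your current AWS configuration points at before running the script:

    $ aws sts get-caller-identity --query Account --output text
    $ aws configure get region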

2.3. Usage Instructions

  1. To use the script, run the following commands in a bash terminal (the -p option defines a prefix for the roles):

    $ mkdir scratch
    $ cd scratch
    $ cat << 'EOF' > verify-permissions.sh
    #!/bin/bash
    while getopts 'p:' OPTION; do
      case "$OPTION" in
        p)
          PREFIX="$OPTARG"
          ;;
        ?)
          echo "script usage: $(basename \$0) [-p PREFIX]" >&2
          exit 1
          ;;
      esac
    done
    shift "$(($OPTIND -1))"
    rosa create account-roles --mode manual --prefix $PREFIX
    INSTALLER_POLICY=$(cat sts_installer_permission_policy.json | jq )
    CONTROL_PLANE_POLICY=$(cat sts_instance_controlplane_permission_policy.json | jq)
    WORKER_POLICY=$(cat sts_instance_worker_permission_policy.json | jq)
    SUPPORT_POLICY=$(cat sts_support_permission_policy.json | jq)
    simulatePolicy () {
        outputFile="${2}.results"
        echo $2
        aws iam simulate-custom-policy --policy-input-list "$1" --action-names $(jq '.Statement | map(select(.Effect == "Allow"))[].Action | if type == "string" then . else .[] end' "$2" -r) --output text > $outputFile
    }
    simulatePolicy "$INSTALLER_POLICY" "sts_installer_permission_policy.json"
    simulatePolicy "$CONTROL_PLANE_POLICY" "sts_instance_controlplane_permission_policy.json"
    simulatePolicy "$WORKER_POLICY" "sts_instance_worker_permission_policy.json"
    simulatePolicy "$SUPPORT_POLICY" "sts_support_permission_policy.json"
    EOF
    $ chmod +x verify-permissions.sh
    $ ./verify-permissions.sh -p SimPolTest
  2. After the script completes, review each results file to ensure that none of the required API calls are blocked:

    $ for file in $(ls *.results); do echo $file; cat $file; done

    The output will look similar to the following:

    sts_installer_permission_policy.json.results
    EVALUATIONRESULTS       autoscaling:DescribeAutoScalingGroups   allowed *
    MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
    ENDPOSITION     6       195
    STARTPOSITION   17      3
    EVALUATIONRESULTS       ec2:AllocateAddress     allowed *
    MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
    ENDPOSITION     6       195
    STARTPOSITION   17      3
    EVALUATIONRESULTS       ec2:AssociateAddress    allowed *
    MATCHEDSTATEMENTS       PolicyInputList.1       IAM Policy
    ...
    Note

    If any actions are blocked, review the error provided by AWS and consult your administrator to determine whether SCPs are blocking the required API calls.
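
    For example, one quick way to surface anything the simulator did not allow (a simple filter, assuming the output format shown above) is:

    $ grep EVALUATIONRESULTS *.results | grep -v allowed

    Lines containing implicitDeny or explicitDeny indicate actions that are blocked for the current account.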

A custom DHCP option set enables you to customize your VPC with your own DNS server, domain name, and more. Red Hat OpenShift Service on AWS classic architecture clusters support using custom DHCP option sets. By default, Red Hat OpenShift Service on AWS classic architecture clusters require setting the "domain name servers" option to AmazonProvidedDNS to ensure successful cluster creation and operation. Customers who want to use custom DNS servers for DNS resolution must perform additional configuration to ensure successful Red Hat OpenShift Service on AWS classic architecture cluster creation and operation.

In this tutorial, we will configure our DNS server to forward DNS lookups for specific DNS zones (further detailed below) to an Amazon Route 53 Inbound Resolver.

Note

This tutorial uses the open-source BIND DNS server (named) to demonstrate the configuration necessary to forward DNS lookups to an Amazon Route 53 Inbound Resolver located in the VPC you plan to deploy a Red Hat OpenShift Service on AWS classic architecture cluster into. Refer to the documentation of your preferred DNS server for how to configure zone forwarding.
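
For reference only, a minimal sketch of creating and associating a custom DHCP options set with the AWS CLI, assuming a DNS server at 10.0.0.2 and the VPC_ID and REGION variables defined in the next section; adjust the values for your environment:

    $ DHCP_OPTIONS_ID=$(aws ec2 create-dhcp-options \
        --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.2" \
        --region ${REGION} \
        --query 'DhcpOptions.DhcpOptionsId' \
        --output text)
    $ aws ec2 associate-dhcp-options --dhcp-options-id ${DHCP_OPTIONS_ID} --vpc-id ${VPC_ID} --region ${REGION}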

3.1. Prerequisites

  • ROSA CLI (rosa)
  • AWS CLI (aws)
  • A manually created AWS VPC
  • A DHCP option set configured to point to a custom DNS server and set as the default for your VPC

3.2. Setting up your environment

  1. Configure the following environment variables:

    $ export VPC_ID=<vpc_ID>
    $ export REGION=<region>
    $ export VPC_CIDR=<vpc_CIDR>

    Replace <vpc_ID> with the ID of the VPC you want to install your cluster into.
    Replace <region> with the AWS region you want to install your cluster into.
    Replace <vpc_CIDR> with the CIDR range of your VPC.
  2. Ensure all fields output correctly before moving to the next section:

    $ echo "VPC ID: ${VPC_ID}, VPC CIDR Range: ${VPC_CIDR}, Region: ${REGION}"

3.3. Create an Amazon Route 53 Inbound Resolver

Use the following procedure to deploy an Amazon Route 53 Inbound Resolver in the VPC we plan to deploy the cluster into.

Warning

In this example, we deploy the Amazon Route 53 Inbound Resolver into the same VPC the cluster will use. If you want to deploy it into a separate VPC, you must manually associate the private hosted zone(s) detailed below once cluster creation is started. You cannot associate the zone before the cluster creation process begins. Failure to associate the private hosted zone during the cluster creation process will result in cluster creation failures.
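
For reference, the association itself can be done with the AWS CLI once the private hosted zone exists; a minimal sketch, where <hosted_zone_ID> is the ID of the cluster’s private hosted zone and <resolver_vpc_ID> is the VPC that hosts your resolver (both placeholders):

    $ aws route53 associate-vpc-with-hosted-zone \
      --hosted-zone-id <hosted_zone_ID> \
      --vpc VPCRegion=${REGION},VPCId=<resolver_vpc_ID>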

  1. Create a security group and allow access to ports 53/tcp and 53/udp from the VPC:

    $ SG_ID=$(aws ec2 create-security-group --group-name rosa-inbound-resolver --description "Security group for ROSA inbound resolver" --vpc-id ${VPC_ID} --region ${REGION} --output text)
    $ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol tcp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
    $ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol udp --port 53 --cidr ${VPC_CIDR} --region ${REGION}
  2. Create an Amazon Route 53 Inbound Resolver in your VPC:

    $ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
      --name rosa-inbound-resolver \
      --creator-request-id rosa-$(date '+%Y-%m-%d') \
      --security-group-ids ${SG_ID} \
      --direction INBOUND \
      --ip-addresses $(aws ec2 describe-subnets --filter Name=vpc-id,Values=${VPC_ID} --region ${REGION} | jq -jr '.Subnets | map("SubnetId=\(.SubnetId) ") | .[]') \
      --region ${REGION} \
      --output text \
      --query 'ResolverEndpoint.Id')
    Note

    The above command attaches Amazon Route 53 Inbound Resolver endpoints to all subnets in the provided VPC using dynamically allocated IP addresses. If you prefer to manually specify the subnets and/or IP addresses, run the following command instead:

    $ RESOLVER_ID=$(aws route53resolver create-resolver-endpoint \
      --name rosa-inbound-resolver \
      --creator-request-id rosa-$(date '+%Y-%m-%d') \
      --security-group-ids ${SG_ID} \
      --direction INBOUND \
      --ip-addresses SubnetId=<subnet_ID>,Ip=<endpoint_IP> SubnetId=<subnet_ID>,Ip=<endpoint_IP> \
      --region ${REGION} \
      --output text \
      --query 'ResolverEndpoint.Id')

    Replace <subnet_ID> with the subnet IDs and <endpoint_IP> with the static IP addresses you want inbound resolver endpoints added to.
  3. Get the IP addresses of your inbound resolver endpoints to configure in your DNS server configuration:

    $ aws route53resolver list-resolver-endpoint-ip-addresses \
      --resolver-endpoint-id ${RESOLVER_ID} \
      --region=${REGION} \
      --query 'IpAddresses[*].Ip'

    Example output

    [
        "10.0.45.253",
        "10.0.23.131",
        "10.0.148.159"
    ]

3.4. Configure your DNS server

Use the following procedure to configure your DNS server to forward the necessary private hosted zones to your Amazon Route 53 Inbound Resolver.

Red Hat OpenShift Service on AWS classic architecture clusters require you to configure DNS forwarding for one private hosted zone:

  • <domain-prefix>.<unique-ID>.p1.openshiftapps.com

This Amazon Route 53 private hosted zone is created during cluster creation. The domain-prefix is a customer-specified value, but the unique-ID is randomly generated during cluster creation and cannot be preselected. As such, you must wait for the cluster creation process to begin before configuring forwarding for the p1.openshiftapps.com private hosted zone.

  1. Create your cluster.
  2. Once your cluster has begun the creation process, locate the newly created private hosted zone:

    $ aws route53 list-hosted-zones-by-vpc \
      --vpc-id ${VPC_ID} \
      --vpc-region ${REGION} \
      --query 'HostedZoneSummaries[*].Name' \
      --output table

    Example output

    ----------------------------------------------
    |           ListHostedZonesByVPC             |
    +--------------------------------------------+
    |  domain-prefix.agls.p3.openshiftapps.com.  |
    +--------------------------------------------+

    Note

    It may take a few minutes for the cluster creation process to create the private hosted zone in Route 53. If you do not see a p1.openshiftapps.com domain, wait a few minutes and run the command again.

  3. Once you know the unique ID of the cluster domain, configure your DNS server to forward all DNS requests for <domain-prefix>.<unique-ID>.p1.openshiftapps.com to your Amazon Route 53 Inbound Resolver endpoints. For BIND DNS servers, edit your /etc/named.conf file in your preferred text editor and add a new zone using the example below (an optional verification check follows the example):

    Example

    zone "<domain-prefix>.<unique-ID>.p1.openshiftapps.com" { 
    1
    
      type forward;
      forward only;
      forwarders { 
    2
    
        10.0.45.253;
        10.0.23.131;
        10.0.148.159;
      };
    };
    Copy to Clipboard Toggle word wrap

    1
    Replace <domain-prefix> with your cluster domain prefix and <unique-ID> with your unique ID collected above.
    2
    Replace with the IP addresses of your inbound resolver endpoints collected above, ensuring that following each IP address there is a ;.
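
    After reloading named, you can optionally confirm that forwarding works by querying your DNS server for a record in the zone (a sketch; the exact records present depend on your cluster, and api.<domain-prefix>.<unique-ID>.p1.openshiftapps.com is used here only as an example):

    $ dig +short api.<domain-prefix>.<unique-ID>.p1.openshiftapps.com @<dns_server_IP>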

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.

You can use Amazon CloudFront to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS classic architecture workloads. Using an external solution protects Red Hat OpenShift Service on AWS classic architecture resources from experiencing denial of service as a result of handling the WAF.

Note

AWS WAF Classic (WAFv1) is no longer supported. Use WAFv2.

4.1. Prerequisites

  • A Red Hat OpenShift Service on AWS classic architecture cluster.
  • You have access to the OpenShift CLI (oc).
  • You have access to the AWS CLI (aws).

4.1.1. Environment setup

  • Prepare the environment variables:

    $ export DOMAIN=apps.example.com
    $ export AWS_PAGER=""
    $ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
    $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export SCRATCH="/tmp/${CLUSTER}/cloudfront-waf"
    $ mkdir -p ${SCRATCH}
    $ echo "Cluster: ${CLUSTER}, Region: ${REGION}, AWS Account ID: ${AWS_ACCOUNT_ID}"

    Replace apps.example.com with the custom domain you want to use for the IngressController.
    Note

    The "Cluster" output from the previous command might be the name of your cluster, the internal ID of your cluster, or the cluster’s domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:

    $ export CLUSTER=my-custom-value

4.2. Setting up the secondary ingress controller

You must configure a secondary ingress controller to segment your external, WAF-protected traffic from your standard (and default) cluster ingress controller.

Prerequisites

  • A publicly trusted SAN or wildcard certificate for your custom domain, such as CN=*.apps.example.com

    Important

    Amazon CloudFront uses HTTPS to communicate with your cluster’s secondary ingress controller. As explained in the Amazon CloudFront documentation, you cannot use a self-signed certificate for HTTPS communication between CloudFront and your cluster. Amazon CloudFront verifies that the certificate was issued by a trusted certificate authority.
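
    Optionally, you can inspect the certificate’s subject, issuer, and validity dates before creating the secret (a sketch, assuming the fullchain.pem file used in the procedure below):

    $ openssl x509 -in fullchain.pem -noout -subject -issuer -dates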

Procedure

  1. Create a new TLS secret from a private key and a public certificate, where fullchain.pem is your full wildcard certificate chain (including any intermediate certificates) and privkey.pem is your wildcard certificate’s private key.

    Example

    $ oc -n openshift-ingress create secret tls waf-tls --cert=fullchain.pem --key=privkey.pem

  2. Create a new IngressController resource:

    Example waf-ingress-controller.yaml

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: cloudfront-waf
      namespace: openshift-ingress-operator
    spec:
      domain: apps.example.com
      defaultCertificate:
        name: waf-tls
      endpointPublishingStrategy:
        loadBalancer:
          dnsManagementPolicy: Unmanaged
          providerParameters:
            aws:
              type: NLB
            type: AWS
          scope: External
        type: LoadBalancerService
      routeSelector:
        matchLabels:
         route: waf

    Replace apps.example.com with the custom domain you want to use for the IngressController.
    The routeSelector filters the set of routes serviced by the Ingress Controller. In this tutorial, we use the waf route selector; if no value is provided, no filtering occurs.
  3. Apply the IngressController:

    Example

    $ oc apply -f waf-ingress-controller.yaml

  4. Verify that your IngressController has successfully created an external load balancer:

    $ oc -n openshift-ingress get service/router-cloudfront-waf

    Example output

    NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
    router-cloudfront-waf   LoadBalancer   172.30.16.141   a68a838a7f26440bf8647809b61c4bc8-4225395f488830bd.elb.us-east-1.amazonaws.com   80:30606/TCP,443:31065/TCP   2m19s

4.2.1. Configure the AWS WAF

The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like Red Hat OpenShift Service on AWS classic architecture.

  1. Create an AWS WAF rules file to apply to our web ACL:

    $ cat << EOF > ${SCRATCH}/waf-rules.json
    [
        {
          "Name": "AWS-AWSManagedRulesCommonRuleSet",
          "Priority": 0,
          "Statement": {
            "ManagedRuleGroupStatement": {
              "VendorName": "AWS",
              "Name": "AWSManagedRulesCommonRuleSet"
            }
          },
          "OverrideAction": {
            "None": {}
          },
          "VisibilityConfig": {
            "SampledRequestsEnabled": true,
            "CloudWatchMetricsEnabled": true,
            "MetricName": "AWS-AWSManagedRulesCommonRuleSet"
          }
        },
        {
          "Name": "AWS-AWSManagedRulesSQLiRuleSet",
          "Priority": 1,
          "Statement": {
            "ManagedRuleGroupStatement": {
              "VendorName": "AWS",
              "Name": "AWSManagedRulesSQLiRuleSet"
            }
          },
          "OverrideAction": {
            "None": {}
          },
          "VisibilityConfig": {
            "SampledRequestsEnabled": true,
            "CloudWatchMetricsEnabled": true,
            "MetricName": "AWS-AWSManagedRulesSQLiRuleSet"
          }
        }
    ]
    EOF

    This will enable the Core (Common) and SQL AWS Managed Rule Sets.

  2. Create an AWS WAF Web ACL using the rules we specified above:

    $ WAF_WACL=$(aws wafv2 create-web-acl \
      --name cloudfront-waf \
      --region ${REGION} \
      --default-action Allow={} \
      --scope CLOUDFRONT \
      --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=${CLUSTER}-waf-metrics \
      --rules file://${SCRATCH}/waf-rules.json \
      --query 'Summary.Name' \
      --output text)

4.3. Configure Amazon CloudFront

  1. Retrieve the newly created custom ingress controller’s NLB hostname:

    $ NLB=$(oc -n openshift-ingress get service router-cloudfront-waf \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  2. Import your certificate into AWS Certificate Manager (ACM), where cert.pem is your wildcard certificate, fullchain.pem is your wildcard certificate’s chain, and privkey.pem is your wildcard certificate’s private key.

    Note

    Regardless of the region your cluster is deployed in, you must import this certificate into us-east-1 because Amazon CloudFront is a global AWS service.

    Example

    $ aws acm import-certificate --certificate file://cert.pem \
      --certificate-chain file://fullchain.pem \
      --private-key file://privkey.pem \
      --region us-east-1

  3. Log into the AWS console to create a CloudFront distribution.
  4. Configure the CloudFront distribution by using the following information:

    Note

    If an option is not specified in the table below, leave it at the default value (which may be blank).

    Origin domain: Output from the previous command [1]
    Name: rosa-waf-ingress [2]
    Viewer protocol policy: Redirect HTTP to HTTPS
    Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
    Cache policy: CachingDisabled
    Origin request policy: AllViewer
    Web Application Firewall (WAF): Enable security protections
    Use existing WAF configuration: true
    Choose a web ACL: cloudfront-waf
    Alternate domain name (CNAME): *.apps.example.com [3]
    Custom SSL certificate: Select the certificate you imported from the step above [4]

    1. Run echo ${NLB} to get the origin domain.
    2. If you have multiple clusters, ensure the origin name is unique.
    3. This should match the wildcard domain you used to create the custom ingress controller.
    4. This should match the alternate domain name entered above.
  5. Retrieve the Amazon CloudFront Distribution endpoint:

    $ aws cloudfront list-distributions --query "DistributionList.Items[?Origins.Items[?DomainName=='${NLB}']].DomainName" --output text
  6. Update the DNS of your custom wildcard domain with a CNAME to the Amazon CloudFront Distribution endpoint from the step above.

    Example

    *.apps.example.com CNAME d1b2c3d4e5f6g7.cloudfront.net
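
    If your custom wildcard domain is hosted in Amazon Route 53, one way to create this record from the CLI is shown below as a sketch; <hosted_zone_ID> is a placeholder for your public hosted zone ID, and the distribution endpoint is an example value:

    $ aws route53 change-resource-record-sets --hosted-zone-id <hosted_zone_ID> --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "*.apps.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "d1b2c3d4e5f6g7.cloudfront.net"}]
          }
        }]
      }'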

4.4. Deploy a sample application

  1. Create a new project for your sample application by running the following command:

    $ oc new-project hello-world
  2. Deploy a hello world application:

    $ oc -n hello-world new-app --image=docker.io/openshift/hello-openshift
  3. Create a route for the application specifying your custom domain name:

    Example

    $ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls \
    --hostname hello-openshift.${DOMAIN}

  4. Label the route to admit it to your custom ingress controller:

    $ oc -n hello-world label route.route.openshift.io/hello-openshift-tls route=waf

4.5. Test the WAF

  1. Test that the app is accessible behind Amazon CloudFront:

    Example

    $ curl "https://hello-openshift.${DOMAIN}"

    Example output

    Hello OpenShift!

  2. Test that the WAF denies a bad request:

    Example

    $ curl -X POST "https://hello-openshift.${DOMAIN}" \
      -F "user='<script><alert>Hello></alert></script>'"

    Example output

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
    <TITLE>ERROR: The request could not be satisfied</TITLE>
    </HEAD><BODY>
    <H1>403 ERROR</H1>
    <H2>The request could not be satisfied.</H2>
    <HR noshade size="1px">
    Request blocked.
    We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
    <BR clear="all">
    If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
    <BR clear="all">
    <HR noshade size="1px">
    <PRE>
    Generated by cloudfront (CloudFront)
    Request ID: nFk9q2yB8jddI6FZOTjdliexzx-FwZtr8xUQUNT75HThPlrALDxbag==
    </PRE>
    <ADDRESS>
    </ADDRESS>
    </BODY></HTML>

    The expected result is a 403 ERROR, which means the AWS WAF is protecting your application.

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.

You can use an AWS Application Load Balancer (ALB) to add a Web Application Firewall (WAF) to your Red Hat OpenShift Service on AWS classic architecture workloads. Using an external solution protects Red Hat OpenShift Service on AWS classic architecture resources from experiencing denial of service as a result of handling the WAF.

Important

It is recommended that you use the more flexible CloudFront method unless you absolutely must use an ALB-based solution.

5.1. Prerequisites

  • A multiple availability zone (AZ) Red Hat OpenShift Service on AWS classic architecture cluster.

    Note

    AWS ALBs require at least two public subnets across AZs, per the AWS documentation. For this reason, only multiple AZ Red Hat OpenShift Service on AWS classic architecture clusters can be used with ALBs.

  • You have access to the OpenShift CLI (oc).
  • You have access to the AWS CLI (aws).

5.1.1. Environment setup

  • Prepare the environment variables:

    $ export AWS_PAGER=""
    $ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
    $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed  's|^https://||')
    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export SCRATCH="/tmp/${CLUSTER}/alb-waf"
    $ mkdir -p ${SCRATCH}
    $ echo "Cluster: $(echo ${CLUSTER} | sed 's/-[a-z0-9]\{5\}$//'), Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"

5.1.2. AWS VPC and subnets

Note

This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below.

  1. Set the below variables to the proper values for your Red Hat OpenShift Service on AWS classic architecture deployment:

    $ export VPC_ID=<vpc-id>
    $ export PUBLIC_SUBNET_IDS=(<space-separated-list-of-ids>)
    $ export PRIVATE_SUBNET_IDS=(<space-separated-list-of-ids>)

    Replace <vpc-id> with the VPC ID of the cluster, for example: export VPC_ID=vpc-04c429b7dbc4680ba.
    Replace the PUBLIC_SUBNET_IDS value with a space-separated list of the public subnet IDs of the cluster, making sure to preserve the (). For example: export PUBLIC_SUBNET_IDS=(subnet-056fd6861ad332ba2 subnet-08ce3b4ec753fe74c subnet-071aa28228664972f).
    Replace the PRIVATE_SUBNET_IDS value with a space-separated list of the private subnet IDs of the cluster, making sure to preserve the (). For example: export PRIVATE_SUBNET_IDS=(subnet-0b933d72a8d72c36a subnet-0817eb72070f1d3c2 subnet-0806e64159b66665a).
  2. Add a tag to your cluster’s VPC with the cluster identifier:

    $ aws ec2 create-tags --resources ${VPC_ID} \
      --tags Key=kubernetes.io/cluster/${CLUSTER},Value=shared --region ${REGION}
  3. Add a tag to your public subnets:

    $ aws ec2 create-tags \
      --resources ${PUBLIC_SUBNET_IDS} \
      --tags Key=kubernetes.io/role/elb,Value='1' \
            Key=kubernetes.io/cluster/${CLUSTER},Value=shared \
      --region ${REGION}
  4. Add a tag to your private subnets:

    $ aws ec2 create-tags \
      --resources ${PRIVATE_SUBNET_IDS} \
      --tags Key=kubernetes.io/role/internal-elb,Value='1' \
            Key=kubernetes.io/cluster/${CLUSTER},Value=shared \
      --region ${REGION}

5.2. Deploy the AWS Load Balancer Operator

The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a Red Hat OpenShift Service on AWS classic architecture cluster. To deploy ALBs in Red Hat OpenShift Service on AWS classic architecture, we must first deploy the AWS Load Balancer Operator.

  1. Create a new project to deploy the AWS Load Balancer Operator into by running the following command:

    $ oc new-project aws-load-balancer-operator
  2. Create an AWS IAM policy for the AWS Load Balancer Controller if one does not already exist by running the following command:

    Note

    The policy is sourced from the upstream AWS Load Balancer Controller policy. This is required by the operator to function.

    $ POLICY_ARN=$(aws iam list-policies --query \
         "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
         --output text)
    $ if [[ -z "${POLICY_ARN}" ]]; then
        wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
           https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
         POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
         --output text iam create-policy \
         --policy-name aws-load-balancer-operator-policy \
         --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json")
    fi
  3. Create an AWS IAM trust policy for AWS Load Balancer Operator:

    $ cat <<EOF > "${SCRATCH}/trust-policy.json"
    {
     "Version": "2012-10-17",
     "Statement": [
     {
     "Effect": "Allow",
     "Condition": {
       "StringEquals" : {
         "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
       }
     },
     "Principal": {
       "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
     },
     "Action": "sts:AssumeRoleWithWebIdentity"
     }
     ]
    }
    EOF
  4. Create an AWS IAM role for the AWS Load Balancer Operator:

    $ ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER}-alb-operator" \
       --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
       --query Role.Arn --output text)
  5. Attach the AWS Load Balancer Operator policy to the IAM role we created previously by running the following command:

    $ aws iam attach-role-policy --role-name "${CLUSTER}-alb-operator" \
         --policy-arn ${POLICY_ARN}
  6. Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    stringData:
      credentials: |
        [default]
        role_arn = ${ROLE_ARN}
        web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
    EOF
  7. Install the AWS Load Balancer Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      upgradeStrategy: Default
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      channel: stable-v1.0
      installPlanApproval: Automatic
      name: aws-load-balancer-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: aws-load-balancer-operator.v1.0.0
    EOF
  8. Deploy an instance of the AWS Load Balancer Controller using the operator:

    Note

    If you get an error here, the Operator has not finished installing yet; wait a minute and try again.

    $ cat << EOF | oc apply -f -
    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
      name: cluster
    spec:
      credentials:
        name: aws-load-balancer-operator
      enabledAddons:
        - AWSWAFv2
    EOF
  9. Check that the operator and controller pods are both running:

    $ oc -n aws-load-balancer-operator get pods

    You should see output similar to the following; if not, wait a moment and retry:

    NAME                                                             READY   STATUS    RESTARTS   AGE
    aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
    aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s

5.3. Deploy a sample application

  1. Create a new project for our sample application:

    $ oc new-project hello-world
  2. Deploy a hello world application:

    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  3. Convert the pre-created service resource to a NodePort service type:

    $ oc -n hello-world patch service hello-openshift -p '{"spec":{"type":"NodePort"}}'
  4. Deploy an AWS ALB using the AWS Load Balancer Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-openshift-alb
      namespace: hello-world
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: hello-openshift
                    port:
                      number: 8080
    EOF
  5. Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:

    Note

    AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.

    $ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ curl "http://${INGRESS}"

    Example output

    Hello OpenShift!

5.3.1. Configure the AWS WAF

The AWS WAF service is a web application firewall that lets you monitor, protect, and control the HTTP and HTTPS requests that are forwarded to your protected web application resources, like Red Hat OpenShift Service on AWS classic architecture.

  1. Create an AWS WAF rules file to apply to our web ACL:

    $ cat << EOF > ${SCRATCH}/waf-rules.json
    [
        {
          "Name": "AWS-AWSManagedRulesCommonRuleSet",
          "Priority": 0,
          "Statement": {
            "ManagedRuleGroupStatement": {
              "VendorName": "AWS",
              "Name": "AWSManagedRulesCommonRuleSet"
            }
          },
          "OverrideAction": {
            "None": {}
          },
          "VisibilityConfig": {
            "SampledRequestsEnabled": true,
            "CloudWatchMetricsEnabled": true,
            "MetricName": "AWS-AWSManagedRulesCommonRuleSet"
          }
        },
        {
          "Name": "AWS-AWSManagedRulesSQLiRuleSet",
          "Priority": 1,
          "Statement": {
            "ManagedRuleGroupStatement": {
              "VendorName": "AWS",
              "Name": "AWSManagedRulesSQLiRuleSet"
            }
          },
          "OverrideAction": {
            "None": {}
          },
          "VisibilityConfig": {
            "SampledRequestsEnabled": true,
            "CloudWatchMetricsEnabled": true,
            "MetricName": "AWS-AWSManagedRulesSQLiRuleSet"
          }
        }
    ]
    EOF

    This will enable the Core (Common) and SQL AWS Managed Rule Sets.

  2. Create an AWS WAF Web ACL using the rules we specified above:

    $ WAF_ARN=$(aws wafv2 create-web-acl \
      --name ${CLUSTER}-waf \
      --region ${REGION} \
      --default-action Allow={} \
      --scope REGIONAL \
      --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=${CLUSTER}-waf-metrics \
      --rules file://${SCRATCH}/waf-rules.json \
      --query 'Summary.ARN' \
      --output text)
  3. Annotate the Ingress resource with the AWS WAF Web ACL ARN:

    $ oc annotate -n hello-world ingress.networking.k8s.io/hello-openshift-alb \
      alb.ingress.kubernetes.io/wafv2-acl-arn=${WAF_ARN}
  4. Wait about 10 seconds for the rules to propagate, and then test that the app still works:

    $ curl "http://${INGRESS}"

    Example output

    Hello OpenShift!

  5. Test that the WAF denies a bad request:

    $ curl -X POST "http://${INGRESS}" \
      -F "user='<script><alert>Hello></alert></script>'"

    Example output

    <html>
    <head><title>403 Forbidden</title></head>
    <body>
    <center><h1>403 Forbidden</h1></center>
    </body>
    </html>

    Note

    Activation of the AWS WAF integration can sometimes take several minutes. If you do not receive a 403 Forbidden error, please wait a few seconds and try again.

    The expected result is a 403 Forbidden error, which means the AWS WAF is protecting your application.

Important

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Environment

  • Prepare the environment variables:

    Note

    Change the cluster name to match your Red Hat OpenShift Service on AWS classic architecture cluster and ensure you are logged in to the cluster as an administrator. Ensure all fields output correctly before moving on.

    $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
    $ export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
    $ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
    $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed  's|^https://||')
    $ export AWS_ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`
    $ export CLUSTER_VERSION=`rosa describe cluster -c ${CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.'`
    $ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
    $ export AWS_PAGER=""
    $ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
    $ mkdir -p ${SCRATCH}
    $ echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"

6.1. Prepare AWS Account

  1. Create an IAM Policy to allow for S3 Access:

    $ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text)
    if [[ -z "${POLICY_ARN}" ]]; then
    $ cat << EOF > ${SCRATCH}/policy.json
    {
    "Version": "2012-10-17",
    "Statement": [
     {
       "Effect": "Allow",
       "Action": [
         "s3:CreateBucket",
         "s3:DeleteBucket",
         "s3:PutBucketTagging",
         "s3:GetBucketTagging",
         "s3:PutEncryptionConfiguration",
         "s3:GetEncryptionConfiguration",
         "s3:PutLifecycleConfiguration",
         "s3:GetLifecycleConfiguration",
         "s3:GetBucketLocation",
         "s3:ListBucket",
         "s3:GetObject",
         "s3:PutObject",
         "s3:DeleteObject",
         "s3:ListBucketMultipartUploads",
         "s3:AbortMultipartUpload",
         "s3:ListMultipartUploadParts",
         "ec2:DescribeSnapshots",
         "ec2:DescribeVolumes",
         "ec2:DescribeVolumeAttribute",
         "ec2:DescribeVolumesModifications",
         "ec2:DescribeVolumeStatus",
         "ec2:CreateTags",
         "ec2:CreateVolume",
         "ec2:CreateSnapshot",
         "ec2:DeleteSnapshot"
       ],
       "Resource": "*"
     }
    ]}
    EOF
    $ POLICY_ARN=$(aws iam create-policy --policy-name "RosaOadpVer1" \
    --policy-document file://${SCRATCH}/policy.json --query Policy.Arn \
    --tags Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \
    --output text)
    fi
    $ echo ${POLICY_ARN}
  2. Create an IAM Role trust policy for the cluster:

    $ cat <<EOF > ${SCRATCH}/trust-policy.json
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringEquals": {
             "${OIDC_ENDPOINT}:sub": [
               "system:serviceaccount:openshift-adp:openshift-adp-controller-manager",
               "system:serviceaccount:openshift-adp:velero"]
          }
        }
      }]
    }
    EOF
    $ ROLE_ARN=$(aws iam create-role --role-name \
     "${ROLE_NAME}" \
      --assume-role-policy-document file://${SCRATCH}/trust-policy.json \
      --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \
      --query Role.Arn --output text)
    
    $ echo ${ROLE_ARN}
  3. Attach the IAM Policy to the IAM Role:

    $ aws iam attach-role-policy --role-name "${ROLE_NAME}" \
     --policy-arn ${POLICY_ARN}

6.2. Deploy OADP on the cluster

  1. Create a namespace for OADP:

    $ oc create namespace openshift-adp
  2. Create a credentials secret:

    $ cat <<EOF > ${SCRATCH}/credentials
    [default]
    role_arn = ${ROLE_ARN}
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
    region=<aws_region>
    EOF
    $ oc -n openshift-adp create secret generic cloud-credentials \
     --from-file=${SCRATCH}/credentials

    Replace <aws_region> with the AWS region to use for the Security Token Service (STS) endpoint.
  3. Deploy the OADP Operator:

    Note

    There is a known issue with version 1.1 of the Operator in which backups can report a PartiallyFailed status. This does not appear to affect the backup and restore process itself, but be aware of it when reviewing backup status.

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
     generateName: openshift-adp-
     namespace: openshift-adp
     name: oadp
    spec:
     targetNamespaces:
     - openshift-adp
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
     name: redhat-oadp-operator
     namespace: openshift-adp
    spec:
     channel: stable-1.2
     installPlanApproval: Automatic
     name: redhat-oadp-operator
     source: redhat-operators
     sourceNamespace: openshift-marketplace
    EOF
  4. Wait for the Operator to be ready:

    $ watch oc -n openshift-adp get pods

    Example output

    NAME                                                READY   STATUS    RESTARTS   AGE
    openshift-adp-controller-manager-546684844f-qqjhn   1/1     Running   0          22s

  5. Create Cloud Storage:

    $ cat << EOF | oc create -f -
    apiVersion: oadp.openshift.io/v1alpha1
    kind: CloudStorage
    metadata:
     name: ${CLUSTER_NAME}-oadp
     namespace: openshift-adp
    spec:
     creationSecret:
       key: credentials
       name: cloud-credentials
     enableSharedConfig: true
     name: ${CLUSTER_NAME}-oadp
     provider: aws
     region: $REGION
    EOF
  6. Check your application’s default storage class:

    $ oc get pvc -n <namespace>

    Replace <namespace> with your application’s namespace.

    Example output

    NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    applog   Bound    pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8   1Gi        RWO            gp3-csi        4d19h
    mysql    Bound    pvc-16b8e009-a20a-4379-accc-bc81fedd0621   1Gi        RWO            gp3-csi        4d19h

    $ oc get storageclass

    Example output

    NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2                 kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   4d21h
    gp2-csi             ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
    gp3                 ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
    gp3-csi (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h

    Using gp3-csi, gp2-csi, gp3, or gp2 will work. If the applications being backed up all use PVs with CSI, include the CSI plugin in the OADP DPA configuration.

  7. CSI only: Deploy a Data Protection Application:

    $ cat << EOF | oc create -f -
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
     name: ${CLUSTER_NAME}-dpa
     namespace: openshift-adp
    spec:
     backupImages: true
     features:
       dataMover:
         enable: false
     backupLocations:
     - bucket:
         cloudStorageRef:
           name: ${CLUSTER_NAME}-oadp
         credential:
           key: credentials
           name: cloud-credentials
         prefix: velero
         default: true
         config:
           region: ${REGION}
     configuration:
       velero:
         defaultPlugins:
         - openshift
         - aws
         - csi
       restic:
         enable: false
    EOF
    Note

    If you run this command for CSI volumes, you can skip the next step.

  8. Non-CSI volumes: Deploy a Data Protection Application:

    $ cat << EOF | oc create -f -
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
     name: ${CLUSTER_NAME}-dpa
     namespace: openshift-adp
    spec:
     backupImages: true
     features:
       dataMover:
         enable: false
     backupLocations:
     - bucket:
         cloudStorageRef:
           name: ${CLUSTER_NAME}-oadp
         credential:
           key: credentials
           name: cloud-credentials
         prefix: velero
         default: true
         config:
           region: ${REGION}
     configuration:
       velero:
         defaultPlugins:
         - openshift
         - aws
       restic:
         enable: false
     snapshotLocations:
       - velero:
           config:
             credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials
             enableSharedConfig: 'true'
             profile: default
             region: ${REGION}
           provider: aws
    EOF
Note
  • In OADP 1.1.x Red Hat OpenShift Service on AWS classic architecture STS environments, the container image backup and restore (spec.backupImages) value must be set to false as it is not supported.
  • The Restic feature (restic.enable=false) is disabled and not supported in Red Hat OpenShift Service on AWS classic architecture STS environments.
  • The DataMover feature (dataMover.enable=false) is disabled and not supported in Red Hat OpenShift Service on AWS classic architecture STS environments.
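
If you are running OADP 1.1.x and already created the Data Protection Application above, one way to apply these restrictions is a merge patch (a sketch, not part of the original procedure):

    $ oc -n openshift-adp patch dpa ${CLUSTER_NAME}-dpa --type merge -p '{"spec":{"backupImages":false}}'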

6.3. Perform a backup

Note

The following sample hello-world application has no attached persistent volumes. Either DPA configuration will work.

  1. Create a workload to back up:

    $ oc create namespace hello-world
    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  2. Expose the route:

    $ oc expose service/hello-openshift -n hello-world
  3. Check that the application is working:

    $ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

    Example output

    Hello OpenShift!

  4. Back up the workload:

    $ cat << EOF | oc create -f -
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
     name: hello-world
     namespace: openshift-adp
    spec:
     includedNamespaces:
     - hello-world
     storageLocation: ${CLUSTER_NAME}-dpa-1
     ttl: 720h0m0s
    EOF
  5. Wait until the backup is done:

    $ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"

    Example output

    {
     "completionTimestamp": "2022-09-07T22:20:44Z",
     "expiration": "2022-10-07T22:20:22Z",
     "formatVersion": "1.1.0",
     "phase": "Completed",
     "progress": {
       "itemsBackedUp": 58,
       "totalItems": 58
     },
     "startTimestamp": "2022-09-07T22:20:22Z",
     "version": 1
    }

  6. Delete the demo workload:

    $ oc delete ns hello-world
  7. Restore from the backup:

    $ cat << EOF | oc create -f -
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
     name: hello-world
     namespace: openshift-adp
    spec:
     backupName: hello-world
    EOF
  8. Wait for the Restore to finish:

    $ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"

    Example output

    {
     "completionTimestamp": "2022-09-07T22:25:47Z",
     "phase": "Completed",
     "progress": {
       "itemsRestored": 38,
       "totalItems": 38
     },
     "startTimestamp": "2022-09-07T22:25:28Z",
     "warnings": 9
    }

  9. Check that the workload is restored:

    $ oc -n hello-world get pods

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s

    $ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

    Example output

    Hello OpenShift!

  10. For troubleshooting tips, refer to the OADP team’s troubleshooting documentation; a couple of optional starting commands are also shown after this list.
  11. Additional sample applications can be found in the OADP team’s sample applications directory.
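
A couple of optional starting points for troubleshooting, assuming the default OADP deployment name (velero) and the Velero CLI used in the cleanup section below:

    $ oc -n openshift-adp logs deployment/velero
    $ velero backup describe hello-world --details -n openshift-adp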

6.4. Cleanup

  1. Delete the workload:

    $ oc delete ns hello-world
  2. Remove the backup and restore resources from the cluster if they are no longer required:

    $ oc delete backups.velero.io hello-world
    $ oc delete restores.velero.io hello-world
  3. To delete the backup and restore objects, including the remote objects in S3:

    $ velero backup delete hello-world
    $ velero restore delete hello-world
  4. Delete the Data Protection Application:

    $ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
  5. Delete the Cloud Storage:

    $ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
    Warning

    If this command hangs, you might need to delete the finalizer:

    $ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
  6. Remove the Operator if it is no longer required:

    $ oc -n openshift-adp delete subscription redhat-oadp-operator
  7. Remove the namespace for the Operator:

    $ oc delete ns openshift-adp
  8. Remove the Custom Resource Definitions from the cluster if you no longer wish to have them:

    $ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
    $ for CRD in `oc get crds | grep -i oadp | awk '{print $1}'`; do oc delete crd $CRD; done
  9. Delete the AWS S3 Bucket:

    $ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
    $ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
  10. Detach the Policy from the role:

    $ aws iam detach-role-policy --role-name "${ROLE_NAME}" \
     --policy-arn "${POLICY_ARN}"
  11. Delete the role:

    $ aws iam delete-role --role-name "${ROLE_NAME}"
Important

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Tip

Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.

The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Red Hat OpenShift Service on AWS classic architecture cluster. The controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when implementing Kubernetes Service resources with a type of LoadBalancer.

Compared with the default AWS in-tree load balancer provider, this controller is developed with advanced annotations for both ALBs and NLBs. Some advanced use cases are:

  • Using native Kubernetes Ingress objects with ALBs
  • Integrating ALBs with the AWS Web Application Firewall (WAF) service

    Note

    AWS WAF Classic (WAFv1) is no longer supported. Use WAFv2.

  • Specifying custom NLB source IP ranges
  • Specifying custom NLB internal IP addresses

The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a Red Hat OpenShift Service on AWS classic architecture cluster.
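
For illustration only, a sketch of a Service that asks the controller for an internet-facing NLB restricted to a specific source range; the annotation names come from the upstream aws-load-balancer-controller documentation, and the echoserver names, port numbers, and CIDR are example values:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver-nlb
      namespace: echoserver
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 203.0.113.0/24
      selector:
        app: echoserver
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
    EOF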

7.1. Prerequisites

Note

AWS ALBs require a multi-AZ cluster, as well as three public subnets split across three AZs in the same VPC as the cluster. This makes ALBs unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
