Chapter 1. AWS Load Balancer Operator
To manage AWS Elastic Load Balancers (ELB) directly from your cluster, install the AWS Load Balancer Operator. This optional Operator is supported by Red Hat for use on SRE-managed Red Hat OpenShift Service on AWS clusters.
Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
The AWS Load Balancer Operator is used to install, manage, and configure the AWS Load Balancer Controller in a Red Hat OpenShift Service on AWS cluster.
The AWS Load Balancer Controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when you create a Kubernetes Service resource with a type of LoadBalancer.
Compared with the default AWS in-tree load balancer provider, this controller supports advanced annotations for both ALBs and NLBs. Some advanced use cases are:
- Using native Kubernetes Ingress objects with ALBs
- Integrating ALBs with the AWS Web Application Firewall (WAF) service
- Specifying custom NLB source IP ranges
- Specifying custom NLB internal IP addresses
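For example, the source IP range and internal address use cases are both driven by Service annotations on an NLB. The following is a hypothetical sketch: the annotation keys come from the upstream AWS Load Balancer Controller documentation, while the service name, CIDR, and IP addresses are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nlb
  annotations:
    # Provision an internal NLB through the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    # Restrict allowed client source IP ranges (placeholder CIDR)
    service.beta.kubernetes.io/load-balancer-source-ranges: "10.0.0.0/16"
    # Pin the NLB to specific internal IP addresses, one per subnet (placeholders)
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.0.1.15, 10.0.2.15"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: example
```

Consult the AWS Load Balancer Controller documentation for the full list of supported annotations before relying on these keys.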
1.1. Prepare to install the AWS Load Balancer Operator
To prepare your cluster for the AWS Load Balancer Operator, verify that your AWS VPC resources are tagged correctly and meet all requirements. You can also configure environment variables to customize the installation.
Cluster requirements

- Your cluster must be deployed across three availability zones and use a pre-existing VPC that has three public subnets.
These requirements mean that the AWS Load Balancer Operator may not be suitable for some PrivateLink clusters. AWS NLBs may be a better choice for such clusters.
1.1.1. Set up temporary environment variables
To streamline the installation of the AWS Load Balancer Operator, define temporary environment variables for resource identifiers and configuration details. This optional configuration simplifies the execution of installation commands by storing reusable values.
If you do not want to use environment variables to store certain values, you can manually enter those values in the relevant installation commands.
Prerequisites
- You have installed the AWS CLI (aws).
- You have installed the OpenShift CLI (oc).
Procedure
Log in to your cluster as a cluster administrator by using the OpenShift CLI (oc):

$ oc login --token=<token> --server=<cluster_url>

Run the following commands to set up environment variables:

$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed 's|^https://||' | awk -F . '{print $2}')
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
$ mkdir -p ${SCRATCH}

These commands create environment variables that you can use in this terminal session to pass their values to the command-line interface.
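As an illustration of how the CLUSTER_NAME pipeline works: sed strips the https:// scheme, and awk takes the second dot-separated field of the API server host. A sketch with a hypothetical API URL:

```shell
# Hypothetical API server URL; on a real cluster this value comes from
# oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}"
API_URL="https://api.my-cluster.abcd.p1.openshiftapps.com:6443"

# Strip the scheme, then take the second dot-separated field of the host
CLUSTER_NAME=$(echo "${API_URL}" | sed 's|^https://||' | awk -F . '{print $2}')
echo "${CLUSTER_NAME}"   # prints: my-cluster
```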
Verify that the variable values are set correctly by running the following command:
$ echo "Cluster name: ${CLUSTER_NAME} Region: ${REGION} OIDC Endpoint: ${OIDC_ENDPOINT} AWS Account ID: ${AWS_ACCOUNT_ID}"

Example output

Cluster name: <cluster_id> Region: <region> OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id> AWS Account ID: <aws_id>

Use the same terminal session to continue with the AWS Load Balancer Operator installation, to ensure that your environment variables are not lost.
1.1.2. Tag the AWS VPC and subnets
To prepare your environment for the AWS Load Balancer Operator, tag your AWS VPC resources. This configuration ensures that the Operator can correctly identify and manage your network resources.
Prerequisites
- You have installed the AWS CLI (aws).
- You have installed the OpenShift CLI (oc).
Procedure
Optional: Set up environment variables for AWS VPC resources.
$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
$ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"

Tag your VPC to associate it with your cluster:

$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}

Tag your public subnets to allow changes by Elastic Load Balancing roles, and tag your private subnets to allow changes by internal Elastic Load Balancing roles:

$ cat <<EOF > "${SCRATCH}/tag-subnets.sh"
#!/bin/bash
aws ec2 create-tags \
  --resources ${PUBLIC_SUBNET_IDS} \
  --tags Key=kubernetes.io/role/elb,Value='' \
  --region ${REGION}
aws ec2 create-tags \
  --resources ${PRIVATE_SUBNET_IDS} \
  --tags Key=kubernetes.io/role/internal-elb,Value='' \
  --region ${REGION}
EOF

Run the script:

$ bash ${SCRATCH}/tag-subnets.sh
1.2. Installing the AWS Load Balancer Operator
You can install the AWS Load Balancer Operator by using the OpenShift CLI (oc). To make use of the environment variables, use the same terminal session that you used in Set up temporary environment variables.
Procedure
Create a new project within your cluster for the AWS Load Balancer Operator:
$ oc new-project aws-load-balancer-operator

Create an AWS IAM policy for the AWS Load Balancer Operator:
Download the appropriate IAM policy:
$ curl -o ${SCRATCH}/operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/refs/heads/main/hack/operator-permission-policy.json

Create the permission policy for the Operator:

$ aws iam create-policy \
  --policy-name aws-load-balancer-operator-policy \
  --policy-document file://${SCRATCH}/operator-permission-policy.json \
  --region ${REGION}

Take note of the Operator policy ARN in the output and store it for later commands:

$ export OPERATOR_POLICY_ARN=<operator_policy_arn>

This value is referred to as $OPERATOR_POLICY_ARN for the remainder of this process.
Create an AWS IAM role for the AWS Load Balancer Operator:
Create the trust policy for the Operator role:
$ cat <<EOF > "${SCRATCH}/operator-trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": [
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager",
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
          ]
        }
      },
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
EOF

Create the Operator role by using the trust policy:
$ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
  --assume-role-policy-document "file://${SCRATCH}/operator-trust-policy.json"

Take note of the Operator role ARN in the output and store it for later commands:

$ export OPERATOR_ROLE_ARN=<operator_role_arn>

This value is referred to as $OPERATOR_ROLE_ARN for the remainder of this process.

Associate the Operator role and policy:
$ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
  --policy-arn $OPERATOR_POLICY_ARN
Install the AWS Load Balancer Operator by creating an OperatorGroup and a Subscription:

$ cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  targetNamespaces: []
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN
      value: "${OPERATOR_ROLE_ARN}"
EOF

Create an AWS IAM policy for the AWS Load Balancer Controller:
Download the appropriate IAM policy:
$ curl -o ${SCRATCH}/controller-permission-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.12.0/docs/install/iam_policy.json

Create the permission policy for the Controller:

$ aws iam create-policy \
  --region ${REGION} \
  --policy-name aws-load-balancer-controller-policy \
  --policy-document file://${SCRATCH}/controller-permission-policy.json

Take note of the Controller policy ARN in the output and store it for later commands:

$ export CONTROLLER_POLICY_ARN=<controller_policy_arn>

This value is referred to as $CONTROLLER_POLICY_ARN for the remainder of this process.
Create an AWS IAM role for the AWS Load Balancer Controller:
Create the trust policy for the Controller role:
$ cat <<EOF > ${SCRATCH}/controller-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
        }
      }
    }
  ]
}
EOF

Create the Controller role by using the trust policy:

$ CONTROLLER_ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-albo-controller" \
  --assume-role-policy-document "file://${SCRATCH}/controller-trust-policy.json" \
  --query Role.Arn --output text)
$ echo ${CONTROLLER_ROLE_ARN}

This command stores the Controller role ARN in the $CONTROLLER_ROLE_ARN environment variable and prints its value.

Associate the Controller role and policy:
$ aws iam attach-role-policy \
  --role-name "${CLUSTER_NAME}-albo-controller" \
  --policy-arn ${CONTROLLER_POLICY_ARN}
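Both trust policies above are generated by substituting shell variables into heredocs, so an unset variable can silently produce broken JSON. Before the create-role calls, you can verify that a generated file parses as JSON. This is a sketch, assuming python3 is available; it writes a sample file to a demo scratch path rather than your real trust policy:

```shell
# Write a sample trust-policy file and verify that it parses as JSON.
SCRATCH="${SCRATCH:-/tmp/alb-demo}"
mkdir -p "${SCRATCH}"
cat <<'EOF' > "${SCRATCH}/operator-trust-policy.json"
{"Version": "2012-10-17", "Statement": []}
EOF

# json.tool exits nonzero if the file is not valid JSON
if python3 -m json.tool "${SCRATCH}/operator-trust-policy.json" > /dev/null; then
  echo "trust policy is valid JSON"
fi
```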
Deploy an instance of the AWS Load Balancer Controller:

$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  credentialsRequestConfig:
    stsIAMRoleARN: ${CONTROLLER_ROLE_ARN}
EOF

Note: If you get an error here, wait a minute and try again. This situation happens because the Operator has not completed installation yet.
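Rather than retrying on a timer, you can wait for the Operator deployment to report Available before creating the controller instance. This is a sketch: the deployment name is taken from the example pod output in this procedure and may differ in your cluster.

```shell
$ oc -n aws-load-balancer-operator wait \
    deployment/aws-load-balancer-operator-controller-manager \
    --for=condition=Available --timeout=300s
```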
Confirm that the Operator and Controller pods are both running:
$ oc -n aws-load-balancer-operator get pods

If you do not see output similar to the following, wait a few moments and retry.
Example output
NAME                                                             READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
1.3. Validating Operator installation
To confirm that the AWS Load Balancer Operator and Controller are installed correctly, deploy a basic sample application. This validation process involves creating ingress and load balancing services to test the deployment.
Procedure
Create a new project:
$ oc new-project hello-world

Create a new hello-world application based on the hello-openshift image:

$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift

Configure a NodePort service for an AWS Application Load Balancer (ALB) to connect to:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nodeport
  namespace: hello-world
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector:
    deployment: hello-openshift
EOF

Deploy an AWS ALB for the application:

$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-alb
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: hello-openshift-nodeport
                port:
                  number: 80
EOF

Test access to the AWS ALB endpoint for the application:
Note: ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.

$ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${ALB_INGRESS}"

Example output

Hello OpenShift!

Deploy an AWS Network Load Balancer (NLB) for the application:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nlb
  namespace: hello-world
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  selector:
    deployment: hello-openshift
EOF

Test access to the NLB endpoint for the application:

Note: NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.

$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"

Example output

Hello OpenShift!

You can now delete the sample application and all resources in the hello-world namespace:

$ oc delete project hello-world
1.4. Removing the AWS Load Balancer Operator
If you no longer need to use the AWS Load Balancer Operator, you can remove the Operator and delete any related roles and policies.
Procedure
Delete the Operator Subscription:
$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator

Detach and delete the relevant AWS IAM roles:

$ aws iam detach-role-policy \
  --role-name "<cluster-id>-alb-operator" \
  --policy-arn <operator-policy-arn>
$ aws iam delete-role \
  --role-name "<cluster-id>-alb-operator"

Delete the AWS IAM policy:

$ aws iam delete-policy --policy-arn <operator-policy-arn>
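Installing the Operator as described in this chapter also created a Controller role and policy. A sketch of the matching cleanup, assuming the names used earlier; substitute your cluster ID and the Controller policy ARN that you noted during installation:

```shell
$ aws iam detach-role-policy \
    --role-name "<cluster-id>-albo-controller" \
    --policy-arn <controller-policy-arn>
$ aws iam delete-role --role-name "<cluster-id>-albo-controller"
$ aws iam delete-policy --policy-arn <controller-policy-arn>
```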