Chapter 8. Deploy an AWS Route 53 loadbalancer
This topic describes the procedure required to configure DNS-based failover for Multi-AZ Red Hat build of Keycloak clusters using AWS Route53 for an active/passive setup. These instructions are intended to be used with the setup described in the Concepts for active-passive deployments chapter. Use them together with the other building blocks outlined in the Building blocks active-passive deployments chapter.
We provide these blueprints to show a minimal, functionally complete example with good baseline performance for regular installations. You will still need to adapt them to your environment and your organization’s standards and security best practices.
8.1. Architecture
All Red Hat build of Keycloak client requests are routed by a DNS name managed by Route53 records. Route53 is responsible for ensuring that all client requests are routed to the Primary cluster when it is available and healthy, or to the Backup cluster if the primary availability zone or Red Hat build of Keycloak deployment fails.
If the primary site fails, the DNS changes will need to propagate to the clients. Depending on the client’s configuration, this propagation may take several minutes. When using mobile connections, some internet providers might not respect the TTL of the DNS entries, which can lead to an extended time before the clients can connect to the new site.
Figure 8.1. AWS Route 53 Failover
Two OpenShift Routes are exposed on both the Primary and Backup ROSA clusters. The first Route uses the Route53 DNS name to service client requests, whereas the second Route is used by Route53 to monitor the health of the Red Hat build of Keycloak cluster.
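Because failover depends on this DNS propagation, it can be useful to check what clients currently resolve and how long the record is cached. The following is a minimal sketch using dig and the example domains used later in this chapter (client.keycloak-benchmark.com under the root domain keycloak-benchmark.com); substitute your own domains:

# Show the record and remaining TTL that clients currently see for the client-facing domain
dig +noall +answer client.keycloak-benchmark.com

# Query Route53's authoritative name servers directly, bypassing intermediate caches
dig +noall +answer client.keycloak-benchmark.com @$(dig +short NS keycloak-benchmark.com | head -n1)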
8.2. Prerequisites
- Deployment of Red Hat build of Keycloak as described in Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator on a ROSA cluster running OpenShift 4.14 or later, in two AWS availability zones within one AWS region.
- A domain that you own, through which client requests will be routed.
8.3. Procedure
Create a Route53 Hosted Zone using the root domain name through which you want all Red Hat build of Keycloak clients to connect.
Take note of the "Hosted zone ID", because this ID is required in later steps.
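If you prefer the CLI over the console, the following is a minimal sketch for creating the Hosted Zone and retrieving its ID. It assumes the example root domain keycloak-benchmark.com (replace it with the domain you own); note that the ID is returned with a /hostedzone/ prefix:

ROOT_DOMAIN="keycloak-benchmark.com"   # the root domain you own

# Create the public Hosted Zone; the caller reference only needs to be a unique string
aws route53 create-hosted-zone \
  --name ${ROOT_DOMAIN} \
  --caller-reference "$(date +%s)" \
  --query "HostedZone.Id" \
  --output text

# Or look up the ID of an existing Hosted Zone by name
aws route53 list-hosted-zones-by-name \
  --dns-name ${ROOT_DOMAIN} \
  --query "HostedZones[0].Id" \
  --output text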
Retrieve the "Hosted zone ID" and DNS name associated with each ROSA cluster.
For both the Primary and Backup cluster, perform the following steps:
- Log in to the ROSA cluster.
Retrieve the cluster LoadBalancer Hosted Zone ID and DNS hostname
Command:
HOSTNAME=$(oc -n openshift-ingress get svc router-default \
  -o jsonpath='{.status.loadBalancer.ingress[].hostname}'
)

aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?DNSName=='${HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}" \
  --region eu-west-1 \ 1
  --output json
- 1
- The AWS region hosting your ROSA cluster
Output:
[
  {
    "CanonicalHostedZoneId": "Z2IFOLAFXWLO4F",
    "DNSName": "ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com"
  }
]
Note: ROSA clusters running OpenShift 4.13 and earlier use classic load balancers instead of application load balancers. Use the aws elb describe-load-balancers command and an updated query string instead.
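As a sketch of that variant (only needed on OpenShift 4.13 and earlier), the classic load balancer API exposes the equivalent fields under different names; this assumes the same HOSTNAME variable as above:

# Classic load balancers are listed by the elb API under LoadBalancerDescriptions
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[?DNSName=='${HOSTNAME}'].{CanonicalHostedZoneNameID:CanonicalHostedZoneNameID,DNSName:DNSName}" \
  --region eu-west-1 \
  --output json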
Create Route53 health checks
Command:
function createHealthCheck() {
  # Creating a hash of the caller reference to allow for names longer than 64 characters
  REF=($(echo $1 | sha1sum ))
  aws route53 create-health-check \
    --caller-reference "$REF" \
    --query "HealthCheck.Id" \
    --no-cli-pager \
    --output text \
    --health-check-config '
    {
      "Type": "HTTPS",
      "ResourcePath": "/lb-check",
      "FullyQualifiedDomainName": "'$1'",
      "Port": 443,
      "RequestInterval": 30,
      "FailureThreshold": 1,
      "EnableSNI": true
    }
    '
}

CLIENT_DOMAIN="client.keycloak-benchmark.com" 1
PRIMARY_DOMAIN="primary.${CLIENT_DOMAIN}" 2
BACKUP_DOMAIN="backup.${CLIENT_DOMAIN}" 3

createHealthCheck ${PRIMARY_DOMAIN}
createHealthCheck ${BACKUP_DOMAIN}
- 1
- The domain to which Red Hat build of Keycloak clients should connect. This should be the same as, or a subdomain of, the root domain used to create the Hosted Zone.
- 2
- The subdomain that will be used for health probes on the Primary cluster
- 3
- The subdomain that will be used for health probes on the Backup cluster
Output:
233e180f-f023-45a3-954e-415303f21eab 1
799e2cbb-43ae-4848-9b72-0d9173f04912 2
- 1
- The health check ID for the Primary domain, used as PRIMARY_HEALTH_ID in the next step
- 2
- The health check ID for the Backup domain, used as BACKUP_HEALTH_ID in the next step
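Optionally, you can attach a Name tag to each health check so that it is easier to identify in the Route53 console. A small sketch, assuming the IDs returned above and the domain variables from the previous command:

# Give each health check a human-readable name shown in the Route53 console
aws route53 change-tags-for-resource --resource-type healthcheck \
  --resource-id 233e180f-f023-45a3-954e-415303f21eab \
  --add-tags Key=Name,Value=${PRIMARY_DOMAIN}

aws route53 change-tags-for-resource --resource-type healthcheck \
  --resource-id 799e2cbb-43ae-4848-9b72-0d9173f04912 \
  --add-tags Key=Name,Value=${BACKUP_DOMAIN}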
Create the Route53 record set
Command:
HOSTED_ZONE_ID="Z09084361B6LKQQRCVBEY" 1
PRIMARY_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F"
PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com
PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab
BACKUP_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F"
BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com
BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912

aws route53 change-resource-record-sets \
  --hosted-zone-id ${HOSTED_ZONE_ID} \
  --query "ChangeInfo.Id" \
  --output text \
  --change-batch '
  {
    "Comment": "Creating Record Set for '${CLIENT_DOMAIN}'",
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${PRIMARY_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'${PRIMARY_LB_HOSTED_ZONE_ID}'",
          "DNSName": "'${PRIMARY_LB_DNS}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${BACKUP_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'${BACKUP_LB_HOSTED_ZONE_ID}'",
          "DNSName": "'${BACKUP_LB_DNS}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${CLIENT_DOMAIN}'",
        "Type": "A",
        "SetIdentifier": "client-failover-primary-'${CLIENT_DOMAIN}'",
        "Failover": "PRIMARY",
        "HealthCheckId": "'${PRIMARY_HEALTH_ID}'",
        "AliasTarget": {
          "HostedZoneId": "'${HOSTED_ZONE_ID}'",
          "DNSName": "'${PRIMARY_DOMAIN}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${CLIENT_DOMAIN}'",
        "Type": "A",
        "SetIdentifier": "client-failover-backup-'${CLIENT_DOMAIN}'",
        "Failover": "SECONDARY",
        "HealthCheckId": "'${BACKUP_HEALTH_ID}'",
        "AliasTarget": {
          "HostedZoneId": "'${HOSTED_ZONE_ID}'",
          "DNSName": "'${BACKUP_DOMAIN}'",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }
  '
- 1
- The ID of the Hosted Zone created earlier
Output:
/change/C053410633T95FR9WN3YI
Wait for the Route53 records to be updated
Command:
aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI
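Optionally, you can confirm that the alias and failover records now exist. A sketch using the HOSTED_ZONE_ID and CLIENT_DOMAIN variables defined above (the contains filter also matches the primary and backup subdomains and sidesteps the trailing dot that Route53 appends to record names):

# List all records created for the client-facing domain and its subdomains
aws route53 list-resource-record-sets \
  --hosted-zone-id ${HOSTED_ZONE_ID} \
  --query "ResourceRecordSets[?contains(Name, '${CLIENT_DOMAIN}')]" \
  --output json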
Update or create the Red Hat build of Keycloak deployment
For both the Primary and Backup cluster, perform the following steps:
- Log in to the ROSA cluster.
Ensure the Keycloak CR has the following configuration:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  hostname:
    hostname: ${CLIENT_DOMAIN} 1
- 1
- The domain that clients use to connect to Red Hat build of Keycloak
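If the deployment already exists, one way to apply this setting without editing the full CR is a merge patch. This is a minimal sketch, assuming the CR is named keycloak (as shown above) and $NAMESPACE is the namespace of your Red Hat build of Keycloak deployment:

# Set spec.hostname.hostname on the existing Keycloak CR
oc -n $NAMESPACE patch keycloak/keycloak --type merge \
  -p '{"spec":{"hostname":{"hostname":"'${CLIENT_DOMAIN}'"}}}'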
To ensure that request forwarding works, edit the Red Hat build of Keycloak CR to specify the hostname through which clients will access the Red Hat build of Keycloak instances. This hostname must be the $CLIENT_DOMAIN used in the Route53 configuration.

Create health check Route
Command:
cat <<EOF | oc apply -n $NAMESPACE -f - 1
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: aws-health-route
spec:
  host: $DOMAIN 2
  port:
    targetPort: https
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
  to:
    kind: Service
    name: keycloak-service
    weight: 100
  wildcardPolicy: None
EOF
- 1
- $NAMESPACE must be the namespace of your Red Hat build of Keycloak deployment
- 2
- $DOMAIN must be the PRIMARY_DOMAIN or BACKUP_DOMAIN, depending on which cluster you are configuring
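Once the Routes are admitted and DNS has propagated, you can probe the same path that the Route53 health checks target. A sketch, assuming the PRIMARY_DOMAIN and BACKUP_DOMAIN variables from the health check step (Route53 treats a 2xx or 3xx response as healthy):

# Probe the health endpoint used by the Route53 health checks on each cluster
curl -s -o /dev/null -w '%{http_code}\n' https://${PRIMARY_DOMAIN}/lb-check
curl -s -o /dev/null -w '%{http_code}\n' https://${BACKUP_DOMAIN}/lb-check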
8.4. Verify
Navigate to the chosen CLIENT_DOMAIN in your local browser and log in to the Red Hat build of Keycloak console.
To test that failover works as expected, log in to the Primary cluster and scale the Red Hat build of Keycloak deployment to zero Pods. Scaling down causes the Primary’s health checks to fail, and Route53 should start routing traffic to the Red Hat build of Keycloak Pods on the Backup cluster.
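A sketch of one way to run this test from the CLI, assuming the Keycloak CR is named keycloak in namespace $NAMESPACE on the Primary cluster, and reusing the PRIMARY_HEALTH_ID and CLIENT_DOMAIN values from earlier; restore the previous instances value afterwards to fail back:

# On the Primary cluster: scale the Red Hat build of Keycloak deployment down to zero Pods
oc -n $NAMESPACE patch keycloak/keycloak --type merge -p '{"spec":{"instances":0}}'

# Watch the Primary health check turn unhealthy
aws route53 get-health-check-status \
  --health-check-id ${PRIMARY_HEALTH_ID} \
  --query "HealthCheckObservations[*].StatusReport.Status" \
  --output text

# Confirm the client domain now resolves to the Backup cluster load balancer
dig +short ${CLIENT_DOMAIN}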