Chapter 12. Switch back to the primary site
These procedures switch back to the primary site after a failover or switchover to the secondary site. They apply to a setup as outlined in Concepts for active-passive deployments together with the blueprints outlined in Building blocks active-passive deployments.
12.1. When to use this procedure
These procedures bring the primary site back to operation when the secondary site is handling all the traffic. At the end of the chapter, the primary site is online again and handles the traffic.
This procedure is necessary when the primary site has lost its state in Data Grid, a network partition occurred between the primary and the secondary site while the secondary site was active, or the replication was disabled as described in the Switch over to the secondary site chapter.
If the data in Data Grid on both sites is still in sync, the procedure for Data Grid can be skipped.
See the Multi-site deployments chapter for different operational procedures.
12.2. Procedures
12.2.1. Data Grid Cluster
For the context of this chapter, Site-A is the primary site, recovering back to operation, and Site-B is the secondary site, running in production.
After the Data Grid in the primary site is back online and has joined the cross-site channel (see the verifying the deployment section in Deploy Data Grid for HA with the Data Grid Operator on how to verify the Data Grid deployment), the state transfer must be manually started from the secondary site.
After clearing the state in the primary site, it transfers the full state from the secondary site to the primary site, and it must be completed before the primary site can start handling incoming requests.
Transferring the full state may impact the Data Grid cluster performance by increasing the response time and/or resource usage.
The first procedure is to delete any stale data from the primary site.
- Log in to the primary site.
Shut down Red Hat build of Keycloak. This action clears all Red Hat build of Keycloak caches and prevents the state of Red Hat build of Keycloak from becoming out-of-sync with Data Grid.
When deploying Red Hat build of Keycloak using the Red Hat build of Keycloak Operator, change the number of Red Hat build of Keycloak instances in the Red Hat build of Keycloak Custom Resource to 0.
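As a sketch, assuming the Custom Resource is named keycloak in the keycloak namespace and the Operator labels the pods with app=keycloak (all three names are assumptions; adjust them to your deployment), scaling to zero could look like:

```shell
# Set the instance count in the Keycloak Custom Resource to 0.
# Resource name and namespace are placeholders for your deployment.
oc -n keycloak patch keycloak/keycloak --type merge -p '{"spec":{"instances":0}}'

# Wait until all Keycloak pods have terminated; the label selector is an assumption.
oc -n keycloak wait --for=delete pod -l app=keycloak --timeout=300s
```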
Connect to the Data Grid cluster using the Data Grid CLI tool:
Command:
oc -n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222
It asks for the username and password for the Data Grid cluster. Those credentials are the ones set in the Deploy Data Grid for HA with the Data Grid Operator chapter in the configuring credentials section.
Output:
Username: developer
Password:
[infinispan-0-29897@ISPN//containers/default]>
Note: The pod name depends on the cluster name defined in the Data Grid CR. The connection can be done with any pod in the Data Grid cluster.
Disable the replication from the primary site to the secondary site by running the following command. It prevents the clear request from reaching the secondary site and deleting all the correct cached data.
Command:
site take-offline --all-caches --site=site-b
Output:
{ "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" }
Check that the replication status is offline.
Command:
site status --all-caches --site=site-b
Output:
{ "status" : "offline" }
If the status is not offline, repeat the previous step.
Warning: Make sure the replication is offline; otherwise, the clear operation will clear the data in both sites.
Clear all the cached data in the primary site using the following commands:
Command:
clearcache actionTokens
clearcache authenticationSessions
clearcache clientSessions
clearcache loginFailures
clearcache offlineClientSessions
clearcache offlineSessions
clearcache sessions
clearcache work
These commands do not print any output.
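Instead of typing each command interactively, the Data Grid CLI can also run commands from a batch file. A minimal sketch, assuming the same pod name as above and that the CLI's --file option is available in your Data Grid version:

```shell
# Write the clear commands to a local batch file.
cat > clear-caches.batch << 'EOF'
clearcache actionTokens
clearcache authenticationSessions
clearcache clientSessions
clearcache loginFailures
clearcache offlineClientSessions
clearcache offlineSessions
clearcache sessions
clearcache work
EOF

# Copy the batch file into the pod and run it non-interactively.
oc -n keycloak cp clear-caches.batch infinispan-0:/tmp/clear-caches.batch
oc -n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall \
  --connect https://127.0.0.1:11222 --file=/tmp/clear-caches.batch
```

See the Concepts to automate Data Grid CLI commands chapter referenced at the end of this chapter for a fuller treatment of automating these commands.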
Re-enable the cross-site replication from the primary site to the secondary site.
Command:
site bring-online --all-caches --site=site-b
Output:
{ "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" }
Check that the replication status is online.
Command:
site status --all-caches --site=site-b
Output:
{ "status" : "online" }
Now we are ready to transfer the state from the secondary site to the primary site.
- Log in to the secondary site.
Connect to the Data Grid cluster using the Data Grid CLI tool:
Command:
oc -n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222
It asks for the username and password for the Data Grid cluster. Those credentials are the ones set in the Deploy Data Grid for HA with the Data Grid Operator chapter in the configuring credentials section.
Output:
Username: developer
Password:
[infinispan-0-29897@ISPN//containers/default]>
Note: The pod name depends on the cluster name defined in the Data Grid CR. The connection can be done with any pod in the Data Grid cluster.
Trigger the state transfer from the secondary site to the primary site.
Command:
site push-site-state --all-caches --site=site-a
Output:
{ "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" }
Check that the replication status is online for all caches.
Command:
site status --all-caches --site=site-a
Output:
{ "status" : "online" }
Wait for the state transfer to complete by checking the output of the push-site-status command for all caches.
Command:
site push-site-status --cache=actionTokens
site push-site-status --cache=authenticationSessions
site push-site-status --cache=clientSessions
site push-site-status --cache=loginFailures
site push-site-status --cache=offlineClientSessions
site push-site-status --cache=offlineSessions
site push-site-status --cache=sessions
site push-site-status --cache=work
Output:
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
{ "site-a" : "OK" }
Check the table in the Cross-Site Documentation for the possible status values.
If an error is reported, repeat the state transfer for that specific cache.
Command:
site push-site-state --cache=<cache-name> --site=site-a
Clear/reset the state transfer status with the following commands.
Command:
site clear-push-site-status --cache=actionTokens
site clear-push-site-status --cache=authenticationSessions
site clear-push-site-status --cache=clientSessions
site clear-push-site-status --cache=loginFailures
site clear-push-site-status --cache=offlineClientSessions
site clear-push-site-status --cache=offlineSessions
site clear-push-site-status --cache=sessions
site clear-push-site-status --cache=work
Output:
"ok"
"ok"
"ok"
"ok"
"ok"
"ok"
"ok"
"ok"
- Log in to the primary site.
Start Red Hat build of Keycloak.
When deploying Red Hat build of Keycloak using the Red Hat build of Keycloak Operator, change the number of Red Hat build of Keycloak instances in the Red Hat build of Keycloak Custom Resource to the original value.
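A sketch of scaling back up, again assuming the Custom Resource is named keycloak in the keycloak namespace, with 2 as a placeholder for your original instance count:

```shell
# Restore the original instance count in the Keycloak Custom Resource.
# Resource name, namespace, and the count of 2 are placeholders.
oc -n keycloak patch keycloak/keycloak --type merge -p '{"spec":{"instances":2}}'
```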
Both Data Grid clusters are now in sync, and the switchover from the secondary site back to the primary site can be performed.
12.2.2. AWS Aurora Database
Assuming a Regional multi-AZ Aurora deployment, the current writer instance should be in the same region as the active Red Hat build of Keycloak cluster to avoid latencies and communication across availability zones.
Switching the writer instance of Aurora will lead to a short downtime. Keeping the writer instance in the other site, with a slightly higher latency, might be acceptable for some deployments. Therefore, this switch might be deferred to a maintenance window or skipped, depending on the circumstances of the deployment.
To change the writer instance, run a failover. This change will make the database unavailable for a short time. Red Hat build of Keycloak will need to re-establish database connections.
To fail over the writer instance to the other AZ, issue the following command:
aws rds failover-db-cluster --db-cluster-identifier ...
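For example, with a hypothetical cluster identifier keycloak-aurora and a target reader instance keycloak-aurora-instance-2 in the primary site's region (all identifiers are placeholders):

```shell
# Promote the named reader instance to be the new writer.
# Cluster and instance identifiers are placeholders; use your own.
aws rds failover-db-cluster \
  --db-cluster-identifier keycloak-aurora \
  --target-db-instance-identifier keycloak-aurora-instance-2
```

Omitting --target-db-instance-identifier lets Aurora pick a reader to promote, which may not land in the desired availability zone.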
12.2.3. Route53
If switching over to the secondary site was triggered by changing the health endpoint, edit the health check in AWS to point to a correct endpoint (health/live). After some minutes, the clients will notice the change and traffic will gradually move back to the primary site.
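This can also be done from the AWS CLI. A sketch, where the health check ID is a placeholder that you first look up:

```shell
# List health checks to find the ID of the one guarding the primary site.
aws route53 list-health-checks \
  --query 'HealthChecks[*].[Id,HealthCheckConfig.FullyQualifiedDomainName]'

# Point the health check back at the real endpoint.
# The health check ID below is a placeholder; use the ID from the listing above.
aws route53 update-health-check \
  --health-check-id 11111111-2222-3333-4444-555555555555 \
  --resource-path /health/live
```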
12.3. Further reading
See Concepts to automate Data Grid CLI commands on how to automate Infinispan CLI commands.