Chapter 10. Configuring distributed caches
Configure the caching layer to cluster multiple Red Hat build of Keycloak instances and to increase performance.
Red Hat build of Keycloak is designed for high availability and multi-node clustered setups. The current distributed cache implementation is built on top of Infinispan, a high-performance, distributable in-memory data grid.
10.1. Enable distributed caching
When you start Red Hat build of Keycloak in production mode, by using the start command, caching is enabled and all Red Hat build of Keycloak nodes in your network are discovered.
By default, caches use the jdbc-ping stack, which is based on a TCP transport and uses the configured database to track nodes joining the cluster. Red Hat build of Keycloak allows you to either choose from a set of pre-defined default transport stacks, or to define your own custom stack, as you will see later in this chapter.
To explicitly enable distributed Infinispan caching, enter this command:
bin/kc.[sh|bat] start --cache=ispn
When you start Red Hat build of Keycloak in development mode, by using the start-dev command, Red Hat build of Keycloak uses only local caches and distributed caches are completely disabled by implicitly setting the --cache=local option. The local cache mode is intended only for development and testing purposes.
10.2. Configuring caches
Red Hat build of Keycloak provides a cache configuration file with sensible defaults located at conf/cache-ispn.xml. The cache configuration is a regular Infinispan configuration file.
The following table gives an overview of the specific caches Red Hat build of Keycloak uses. You configure these caches in conf/cache-ispn.xml:
Cache name | Cache Type | Description |
---|---|---|
realms | Local | Cache persisted realm data |
users | Local | Cache persisted user data |
authorization | Local | Cache persisted authorization data |
keys | Local | Cache external public keys |
crl | Local | Cache for X.509 authenticator CRLs |
work | Replicated | Propagate invalidation messages across nodes |
authenticationSessions | Distributed | Caches authentication sessions, created/destroyed/expired during the authentication process |
sessions | Distributed | Cache persisted user session data |
clientSessions | Distributed | Cache persisted client session data |
offlineSessions | Distributed | Cache persisted offline user session data |
offlineClientSessions | Distributed | Cache persisted offline client session data |
loginFailures | Distributed | Keeps track of failed login attempts for brute force detection |
actionTokens | Distributed | Caches action tokens |
10.2.1. Cache types and defaults
Local caches
Red Hat build of Keycloak caches persistent data locally to avoid unnecessary round-trips to the database.
The following data is kept local to each node in the cluster using local caches:
- realms and related data like clients, roles, and groups.
- users and related data like granted roles and group memberships.
- authorization and related data like resources, permissions, and policies.
- keys
Local caches for realms, users, and authorization are configured to hold up to 10,000 entries by default. The local key cache can hold up to 1,000 entries by default and is configured to expire entries every hour. Therefore, keys are forced to be periodically downloaded from external clients or identity providers.
To achieve optimal runtime performance and avoid additional round-trips to the database, consider reviewing the configuration of each cache to make sure the maximum number of entries is aligned with the size of your database. The more entries you can cache, the less often the server needs to fetch data from the database. You should evaluate the trade-offs between memory utilization and performance.
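As an illustration of such tuning, the local cache bounds can be raised with the --cache-embedded-${CACHE_NAME}-max-count options described later in this chapter. The sizes below are hypothetical examples, not recommendations:

```shell
# Hypothetical sizing: raise the realms and users local cache bounds
# to 20000 entries each, trading memory for fewer database round-trips.
# Option names follow the --cache-embedded-<cache-name>-max-count pattern.
bin/kc.sh start \
  --cache-embedded-realms-max-count=20000 \
  --cache-embedded-users-max-count=20000
```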
Invalidation of local caches
Local caching improves performance, but adds a challenge in multi-node setups.
When one Red Hat build of Keycloak node updates data in the shared database, all other nodes need to be aware of it, so they invalidate that data from their caches.
The work cache is a replicated cache and is used for sending these invalidation messages. The entries/messages in this cache are very short-lived, and you should not expect this cache to grow in size over time.
Authentication sessions
Authentication sessions are created whenever a user tries to authenticate. They are automatically destroyed once the authentication process completes or due to reaching their expiration time.
The authenticationSessions distributed cache is used to store authentication sessions and any other data associated with them during the authentication process.
By relying on a distributable cache, authentication sessions are available to any node in the cluster so that users can be redirected to any node without losing their authentication state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization.
User sessions
Once the user is authenticated, a user session is created. The user session tracks your active users and their state so that they can seamlessly authenticate to any application without being asked for their credentials again. For each application, the user authenticates with a client session, so that the server can track the applications the user is authenticated with and their state on a per-application basis.
User and client sessions are automatically destroyed whenever the user performs a logout, the client performs a token revocation, or due to reaching their expiration time.
The session data are stored in the database by default and loaded on-demand to the following caches:
- sessions
- clientSessions
By relying on a distributable cache, cached user and client sessions are available to any node in the cluster so that users can be redirected to any node without the need to load session data from the database. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization.
These in-memory caches for user sessions and client sessions are limited to 10,000 entries per node by default, which reduces the overall memory usage of Red Hat build of Keycloak for larger installations. The internal caches will run with only a single owner for each cache entry.
Offline user sessions
As an OpenID Connect Provider, the server is capable of authenticating users and issuing offline tokens. When issuing an offline token after successful authentication, the server creates an offline user session and offline client session.
The following caches are used to store offline sessions:
- offlineSessions
- offlineClientSessions
Like the user and client session caches, the offline user and client session caches are limited to 10,000 entries per node by default. Items which are evicted from memory will be loaded on-demand from the database when needed.
Password brute force detection
The loginFailures distributed cache is used to track data about failed login attempts. This cache is needed for the Brute Force Protection feature to work in a multi-node Red Hat build of Keycloak setup.
Action tokens
Action tokens are used for scenarios when a user needs to confirm an action asynchronously, for example in the emails sent by the forgot password flow. The actionTokens distributed cache is used to track metadata about action tokens.
10.2.2. Volatile user sessions
By default, regular user sessions are stored in the database and loaded on-demand to the cache. It is possible to configure Red Hat build of Keycloak to store regular user sessions in the cache only and minimize calls to the database.
Since all the sessions in this setup are stored in-memory, there are two side effects related to this:
- Losing sessions when all Red Hat build of Keycloak nodes restart.
- Increased memory consumption.
When using volatile user sessions, the cache is the source of truth for user and client sessions. Red Hat build of Keycloak automatically adjusts the number of entries that can be stored in memory, and increases the number of copies to prevent data loss.
It is not recommended to use volatile user sessions when using offline sessions extensively due to potentially high memory usage. For volatile sessions, the time offline sessions are cached in memory can be limited with the SPI options spi-user-sessions--infinispan--offline-client-session-cache-entry-lifespan-override and spi-user-sessions--infinispan--offline-session-cache-entry-lifespan-override.
Follow these steps to enable this setup:
Disable the persistent-user-sessions feature using the following command:
bin/kc.sh start --features-disabled=persistent-user-sessions ...
Disabling persistent-user-sessions is not possible when the multi-site feature is enabled.
10.2.3. Configuring cache maximum size
In order to reduce memory usage, it is possible to place an upper bound on the number of entries which are stored in a given cache. To specify an upper bound on a cache, provide the command line argument --cache-embedded-${CACHE_NAME}-max-count=<value>, with ${CACHE_NAME} replaced with the name of the cache you would like to apply the upper bound to. For example, to apply an upper bound of 1000 to the offlineSessions cache you would configure --cache-embedded-offline-sessions-max-count=1000. An upper bound cannot be defined on the following caches: actionTokens, authenticationSessions, loginFailures, work.
Setting a maximum cache size for sessions, clientSessions, offlineSessions and offlineClientSessions is not supported when volatile sessions are enabled.
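The example described above can be entered as a complete command:

```shell
# Apply an upper bound of 1000 entries to the offlineSessions cache.
# Note the kebab-case option name for the camelCase cache name.
bin/kc.sh start --cache-embedded-offline-sessions-max-count=1000
```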
10.2.4. Specify your own cache configuration file
To specify your own cache configuration file, enter this command:
bin/kc.[sh|bat] start --cache-config-file=my-cache-file.xml
The configuration file is relative to the conf/ directory.
10.2.5. CLI options for remote server
To configure the Red Hat build of Keycloak server for high availability and multi-node clustered setups, the CLI options cache-remote-host, cache-remote-port, cache-remote-username and cache-remote-password were introduced to simplify configuration within the XML file. Once any of these CLI parameters are present, no configuration related to the remote store may be present in the XML file.
10.2.5.1. Connecting to an insecure Infinispan server
Disabling security is not recommended in production!
In a development or test environment, it is easier to start an unsecured Infinispan server. For this use case, the CLI option cache-remote-tls-enabled disables the encryption (TLS) between Red Hat build of Keycloak and Data Grid. Red Hat build of Keycloak will fail to start if the Data Grid server is configured to accept only encrypted connections.
The CLI options cache-remote-username and cache-remote-password are optional and, if not set, Red Hat build of Keycloak will connect to the Data Grid server without presenting any credentials. If the Data Grid server has authentication enabled, Red Hat build of Keycloak will fail to start.
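For a local test setup, these options might be combined as in the sketch below. This is for development only; do not disable TLS in production:

```shell
# Development/test only: connect to an unsecured local Infinispan server
# without TLS and without presenting credentials.
bin/kc.sh start \
  --cache-remote-host=localhost \
  --cache-remote-port=11222 \
  --cache-remote-tls-enabled=false
```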
10.3. Transport stacks
Transport stacks ensure that Red Hat build of Keycloak nodes in a cluster communicate in a reliable fashion. Red Hat build of Keycloak supports a wide range of transport stacks:
- jdbc-ping
- kubernetes
- jdbc-ping-udp (deprecated)
- tcp (deprecated)
- udp (deprecated)
- ec2 (deprecated)
- azure (deprecated)
- google (deprecated)
To apply a specific cache stack, enter this command:
bin/kc.[sh|bat] start --cache-stack=<stack>
The default stack is set to jdbc-ping when distributed caches are enabled, which is backwards compatible with the defaults in the 26.x release stream of Red Hat build of Keycloak.
10.3.1. Available transport stacks
The following table shows transport stacks that are available without any further configuration than using the --cache-stack runtime option:
Stack name | Transport protocol | Discovery |
---|---|---|
jdbc-ping | TCP | Database registry using the JGroups JDBC_PING2 protocol |
jdbc-ping-udp | UDP | Database registry using the JGroups JDBC_PING2 protocol |
The following table shows transport stacks that are available using the --cache-stack runtime option and a minimum configuration:
Stack name | Transport protocol | Discovery |
---|---|---|
kubernetes | TCP | DNS resolution using the JGroups DNS_PING protocol |
tcp | TCP | IP multicast using the JGroups MPING protocol |
udp | UDP | IP multicast using the JGroups PING protocol |
When using the tcp, udp or jdbc-ping-udp stack, each cluster must use a different multicast address and/or port so that their nodes form distinct clusters. By default, Red Hat build of Keycloak uses 239.6.7.8 as the multicast address for jgroups.mcast_addr and 46655 for the multicast port jgroups.mcast_port.
Use -D<property>=<value> to pass the properties via the JAVA_OPTS_APPEND environment variable or in the CLI command.
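For example, a second cluster on the same network might override both properties. The address and port values here are arbitrary illustrations:

```shell
# Give this cluster its own multicast address and port so its nodes
# do not accidentally join a neighboring cluster (values are illustrative).
export JAVA_OPTS_APPEND="-Djgroups.mcast_addr=239.6.7.9 -Djgroups.mcast_port=46656"
# bin/kc.sh start --cache-stack=udp   # then start Keycloak as usual
```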
Additional Stacks
It is recommended to use one of the stacks listed above. Additional stacks are provided by Infinispan, but configuring them is outside the scope of this guide. Refer to Setting up Infinispan cluster transport and Customizing JGroups stacks for further documentation.
10.4. Securing transport stacks
Encryption using TLS is enabled by default for TCP-based transport stacks, which is also the default configuration. No additional CLI options or modifications of the cache XML are required as long as you are using a TCP-based transport stack.
If you are using a transport stack based on UDP or TCP_NIO2, proceed as follows to configure the encryption of the transport stack:
- Set the option cache-embedded-mtls-enabled to false.
- Follow the JGroups Encryption documentation and Encrypting cluster transport.
With TLS enabled, Red Hat build of Keycloak auto-generates a self-signed RSA 2048-bit certificate to secure the connection and uses TLS 1.3 to secure the communication. The keys and the certificate are stored in the database so they are available to all nodes. By default, the certificate is valid for 60 days and is rotated at runtime every 30 days. Use the option cache-embedded-mtls-rotation-interval-days to change this.
10.4.1. Running inside a service mesh
When using a service mesh like Istio, you might need to allow direct mTLS communication between the Red Hat build of Keycloak Pods to allow the mutual authentication to work. Otherwise, you might see error messages like JGRP000006: failed accepting connection from peer SSLSocket that indicate that a wrong certificate was presented, and the cluster will not form correctly.
You then have the option to allow direct mTLS communication between the Red Hat build of Keycloak Pods, or rely on the service mesh transport security to encrypt the communication and to authenticate peers.
To allow direct mTLS communication for Red Hat build of Keycloak when using Istio:
Apply a configuration to your service mesh that allows direct communication between the Red Hat build of Keycloak Pods.
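As an illustration only (not the configuration shipped with this guide), an Istio DestinationRule that disables the mesh's own mTLS for the Pods' JGroups traffic might look like the following sketch. All names are placeholders:

```yaml
# Hypothetical sketch: tell Istio not to originate its own mTLS for the
# headless service carrying JGroups traffic, so the Pods' direct mTLS works.
# Service, namespace, and rule names are placeholder values.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: keycloak-jgroups-mtls
  namespace: keycloak
spec:
  host: keycloak-headless.keycloak.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
```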
As an alternative, to disable the mTLS communication, and rely on the service mesh to encrypt the traffic:
- Set the option cache-embedded-mtls-enabled to false.
- Configure your service mesh to authorize only traffic from other Red Hat build of Keycloak Pods for the data transmission port (default: 7800).
10.4.2. Providing your own keys and certificates
Although not recommended for standard setups, if it is essential in a specific setup, you can configure the keystore with the certificate for the transport stack manually. cache-embedded-mtls-key-store-file sets the path to the keystore, and cache-embedded-mtls-key-store-password sets the password to decrypt it. The truststore contains the valid certificates to accept connections from, and it can be configured with cache-embedded-mtls-trust-store-file (path to the truststore) and cache-embedded-mtls-trust-store-password (password to decrypt it). To restrict unauthorized access, always use a self-signed certificate for each Red Hat build of Keycloak deployment.
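A sketch of such a manual setup, assuming a single PKCS12 file is used as both keystore and truststore; the file name, alias, and password are placeholders:

```shell
# Generate a self-signed key pair to be shared by all nodes of one deployment.
keytool -genkeypair -alias jgroups -keyalg RSA -keysize 2048 \
  -storetype PKCS12 -keystore cache-mtls.p12 -storepass changeit \
  -dname "CN=keycloak-cluster" -validity 365

# Point the transport stack at the keystore and truststore.
bin/kc.sh start \
  --cache-embedded-mtls-key-store-file=cache-mtls.p12 \
  --cache-embedded-mtls-key-store-password=changeit \
  --cache-embedded-mtls-trust-store-file=cache-mtls.p12 \
  --cache-embedded-mtls-trust-store-password=changeit
```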
10.5. Network Ports
To ensure a healthy Red Hat build of Keycloak clustering, some network ports need to be open. The table below shows the TCP ports that need to be open for the jdbc-ping stack, and a description of the traffic that goes through them.
Port | Property | Description |
---|---|---|
7800 | jgroups.bind.port | Unicast data transmission. |
57800 | jgroups.fd.port-offset | Failure detection by protocol FD_SOCK2. The port is computed by adding the offset to the bind port. |
Use -D<property>=<value> to modify the ports above in your JAVA_OPTS_APPEND environment variable or in your CLI command.
10.6. Network bind address
To ensure a healthy Red Hat build of Keycloak clustering, the network port must be bound on an interface that is accessible from all other nodes of the cluster.
By default, it picks a site local (non-routable) IP address, for example, from the 192.168.0.0/16 or 10.0.0.0/8 address range.
To override the address, set the jgroups.bind.address property.
Use -Djgroups.bind.address=<IP> to modify the bind address in your JAVA_OPTS_APPEND environment variable or in your CLI command.
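For example, pinning the bind address to a specific cluster-reachable interface (the address is an illustration):

```shell
# Bind JGroups to a specific interface address that all other
# cluster nodes can reach (192.168.1.10 is an illustrative value).
export JAVA_OPTS_APPEND="-Djgroups.bind.address=192.168.1.10"
```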
To set up for IPv6 only and have Red Hat build of Keycloak pick the bind address automatically, use the following settings:
export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true"
10.7. Running instances on different networks
If you run Red Hat build of Keycloak instances on different networks, for example behind firewalls or in containers, the different instances will not be able to reach each other by their local IP address. In such a case, set up a port forwarding rule (sometimes called “virtual server”) to their local IP address.
When using port forwarding, use the following properties so each node correctly advertises its external address to the other nodes:
Property | Description |
---|---|
jgroups.external_port | Port that other instances in the Red Hat build of Keycloak cluster should use to contact this node. |
jgroups.external_addr | IP address that other instances in the Red Hat build of Keycloak cluster should use to contact this node. |
Use -D<property>=<value> to set this in your JAVA_OPTS_APPEND environment variable or in your CLI command.
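Assuming the standard JGroups properties jgroups.external_addr and jgroups.external_port, a node behind a port forward might advertise its public endpoint like this (the address and port are placeholders):

```shell
# Advertise the externally reachable address and port to other cluster
# nodes. 203.0.113.5:7800 is a placeholder for the forwarded endpoint.
export JAVA_OPTS_APPEND="-Djgroups.external_addr=203.0.113.5 -Djgroups.external_port=7800"
```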
10.8. Exposing metrics from caches
Metrics from caches are automatically exposed when the metrics are enabled.
To enable histograms for the cache metrics, set cache-metrics-histograms-enabled to true. While these metrics provide more insights into the latency distribution, collecting them might have a performance impact, so you should be cautious about activating them in an already saturated system.
bin/kc.[sh|bat] start --metrics-enabled=true --cache-metrics-histograms-enabled=true
For more details about how to enable metrics, see Gaining insights with metrics.
10.9. Relevant options
Option | Description |
---|---|
cache | Defines the cache mechanism for high-availability: ispn for distributed caches or local for development and testing. |
cache-config-file | Defines the file from which cache configuration should be loaded, relative to the conf/ directory. |
cache-metrics-histograms-enabled | Enables histograms for the cache metrics. Available only when metrics are enabled. |
cache-stack | Defines the default stack to use for cluster communication and node discovery. Available only when 'cache' type is set to 'ispn'. Use 'jdbc-ping' instead by leaving it unset. Deprecated values: jdbc-ping-udp, tcp, udp, ec2, azure, google. |
10.9.1. Embedded Cache
Option | Description |
---|---|
cache-embedded-${CACHE_NAME}-max-count | Maximum number of entries that can be stored in the given cache, for example cache-embedded-offline-sessions-max-count. |
cache-embedded-mtls-enabled | Encrypts the network communication between nodes. Available only when a TCP-based cache stack is used. |
cache-embedded-mtls-key-store-file | The keystore file with the keys and certificate for the transport stack. Available only when property 'cache-embedded-mtls-enabled' is enabled. |
cache-embedded-mtls-key-store-password | The password to decrypt the keystore. Available only when property 'cache-embedded-mtls-enabled' is enabled. |
cache-embedded-mtls-rotation-interval-days | Rotation period in days of the auto-generated certificate (default: 30). Available only when property 'cache-embedded-mtls-enabled' is enabled. |
cache-embedded-mtls-trust-store-file | The truststore file with the valid certificates to accept connections from. Available only when property 'cache-embedded-mtls-enabled' is enabled. |
cache-embedded-mtls-trust-store-password | The password to decrypt the truststore. Available only when property 'cache-embedded-mtls-enabled' is enabled. |
10.9.2. Remote Cache
Option | Description |
---|---|
cache-remote-host | The hostname of the external Infinispan cluster. |
cache-remote-port | The port of the external Infinispan cluster. Available only when remote host is set. |
cache-remote-username | The username for authentication to the external Infinispan cluster. Available only when remote host is set. |
cache-remote-password | The password for authentication to the external Infinispan cluster. Available only when remote host is set. |
cache-remote-tls-enabled | Enables TLS communication between Red Hat build of Keycloak and the external Infinispan cluster. Available only when remote host is set. |