Server Configuration Guide
Abstract
Chapter 1. Configuring Red Hat build of Keycloak
Configure and start Red Hat build of Keycloak.
This chapter explains the configuration methods for Red Hat build of Keycloak and how to start and apply the preferred configuration. It includes configuration guidelines for optimizing Red Hat build of Keycloak for faster startup and low memory footprint.
1.1. Configuring sources for Red Hat build of Keycloak
Red Hat build of Keycloak loads the configuration from four sources, which are listed here in order of application.
- Command-line parameters
- Environment variables
- Options defined in the conf/keycloak.conf file, or in a user-created configuration file.
- Sensitive options defined in a user-created Java KeyStore file.
When an option is set in more than one source, the one that comes first in the list determines the value for that option. For example, the value for an option set by a command-line parameter has a higher priority than an environment variable for the same option.
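The precedence rule can be sketched as a small shell function that returns the first defined value from an ordered list. This is an illustration of the concept only, not Keycloak code, and the sample values are hypothetical:

```shell
# Illustrative sketch of configuration precedence: the first source in the
# ordered list that defines the option wins. Not Keycloak code.
effective_value() {
  for candidate in "$@"; do
    if [ -n "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

# Order: command line > environment variable > configuration file > KeyStore
effective_value "cliValue" "envVarValue" "confFileValue" "keystoreValue"  # prints cliValue
effective_value "" "envVarValue" "confFileValue" "keystoreValue"          # prints envVarValue
```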
1.1.1. Example: Configuring the db-url-host parameter
The following example shows how the db-url value is set across the four configuration sources:

| Source | Format |
|---|---|
| Command-line parameters | --db-url=cliValue |
| Environment variable | KC_DB_URL=envVarValue |
| Configuration file | db-url=confFileValue |
| Java KeyStore file | kc.db-url=keystoreValue |

Based on the priority of application, the value that is used at startup is cliValue.
1.2. Formats for configuration
The configuration uses a unified-per-source format, which simplifies translation of a key/value pair from one configuration source to another. Note that these formats apply to spi options as well.
- Command-line parameter format: Values for the command line use the --<key-with-dashes>=<value> format. For some values, an -<abbreviation>=<value> shorthand also exists.
- Environment variable format: Values for environment variables use the uppercased KC_<key_with_underscores>=<value> format.
- Configuration file format: Values that go into the configuration file use the <key-with-dashes>=<value> format.
- KeyStore configuration file format: Values that go into the KeyStore configuration file use the kc.<key-with-dashes> format. <value> is then a password stored in the KeyStore.
At the end of each configuration chapter, look for the Relevant options heading, which defines the applicable configuration formats. For all configuration options, see All configuration. Choose the configuration source and format that applies to your use case.
1.2.1. Example - Alternative formats based on configuration source
The following example shows the configuration format for db-url-host across the configuration sources:

Command-line parameter:
bin/kc.[sh|bat] start --db-url-host=mykeycloakdb

Environment variable:
export KC_DB_URL_HOST=mykeycloakdb

conf/keycloak.conf:
db-url-host=mykeycloakdb
1.2.2. Formats for command-line parameters
Red Hat build of Keycloak provides many command-line parameters for configuration. To see the available configuration formats, enter the following command:
bin/kc.[sh|bat] start --help
Alternatively, see All configuration for all server options.
1.2.3. Format for referencing environment variables
You can use placeholders to resolve an environment-specific value from environment variables inside the keycloak.conf file by using the ${ENV_VAR} syntax:
db-url-host=${MY_DB_HOST}
In case the environment variable cannot be resolved, you can specify a fallback value. Use a : (colon) to separate the variable name from the fallback, as in the following example where mydb is the fallback:
db-url-host=${MY_DB_HOST:mydb}
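The fallback behavior can be emulated in plain shell for illustration. The resolve function below is a sketch of the concept only, not Keycloak's actual parser:

```shell
# Sketch of how a ${VAR:fallback} placeholder resolves: use the environment
# variable if it is set, otherwise the fallback. Illustrative only.
resolve() {
  key="${1%%:*}"
  fallback="${1#*:}"
  eval "value=\${$key:-\$fallback}"
  printf '%s\n' "$value"
}

unset MY_DB_HOST
resolve "MY_DB_HOST:mydb"     # prints mydb (fallback used)
export MY_DB_HOST=prod-db
resolve "MY_DB_HOST:mydb"     # prints prod-db (variable wins)
```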
1.2.4. Format to include a specific configuration file
By default, the server always fetches configuration options from the conf/keycloak.conf file.
You can also specify an explicit configuration file location using the [-cf|--config-file] option:
bin/kc.[sh|bat] --config-file=/path/to/myconfig.conf start
Setting that option makes Red Hat build of Keycloak read configuration from the specified file instead of conf/keycloak.conf.
1.2.5. Setting sensitive options using a Java KeyStore file
Thanks to the KeyStore Configuration Source, you can directly load properties from a Java KeyStore using the [--config-keystore] and [--config-keystore-password] options. Optionally, you can specify the KeyStore type using the [--config-keystore-type] option; by default, the PKCS12 type is used.
The secrets in a KeyStore need to be stored using the PBE (password-based encryption) key algorithm, where a key is derived from a KeyStore password. You can generate such a KeyStore using the following keytool command:
keytool -importpass -alias kc.db-password -keystore keystore.p12 -storepass keystorepass -storetype PKCS12 -v
After executing the command, you will be prompted to Enter the password to be stored, which represents the value of the kc.db-password option.
When the KeyStore is created, you can start the server using the following parameters:
bin/kc.[sh|bat] start --config-keystore=/path/to/keystore.p12 --config-keystore-password=keystorepass --config-keystore-type=PKCS12
1.2.6. Format for raw Quarkus properties
In most cases, the available configuration options should suffice to configure the server. However, for a specific behavior or capability that is missing in the Red Hat build of Keycloak configuration, you can use properties from the underlying Quarkus framework.
If possible, avoid using properties directly from Quarkus, because they are unsupported by Red Hat build of Keycloak. If your need is essential, consider opening an enhancement request first. This approach helps us improve the configuration of Red Hat build of Keycloak to fit your needs.
If an enhancement request is not possible, you can configure the server using raw Quarkus properties:
- Create a quarkus.properties file in the conf directory.
- Define the required properties in that file.
You can use only a subset of the Quarkus extensions that are defined in the Quarkus documentation. Also, note these differences for Quarkus properties:
- A lock icon for a Quarkus property in the Quarkus documentation indicates a build time property. You run the build command to apply this property. For details about the build command, see the subsequent sections on optimizing Red Hat build of Keycloak.
- No lock icon for a property in the Quarkus guide indicates a runtime property for Quarkus and Red Hat build of Keycloak.
You can also store Quarkus properties in a Java KeyStore.
Note that some Quarkus properties are already mapped in the Red Hat build of Keycloak configuration, such as quarkus.http.port and similar essential properties. If you set such properties in quarkus.properties, they are ignored.
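For example, a conf/quarkus.properties file might contain a single raw Quarkus property. The property shown is only an illustration; verify any property name against the Quarkus documentation before relying on it:

```
# conf/quarkus.properties -- example of a raw Quarkus property
quarkus.transaction-manager.enable-recovery=true
```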
1.2.7. Using special characters in values
Red Hat build of Keycloak depends upon Quarkus and MicroProfile for processing configuration values. Be aware that value expressions are supported. For example, ${some_key} evaluates to the value of some_key.
To disable expression evaluation, use the \ character to escape the $ character. For example, a literal value of my$$password is written as my\$\$password. The \ character may itself require additional escaping, depending on the configuration source:

--db-password='my\$\$password'
--db-password="my\\$\\$password"
kc.db-password=my\\$\\$password
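The single-quoted form can be verified directly in a POSIX shell. This sketch only shows what the shell passes along, not Keycloak's subsequent evaluation:

```shell
# Single quotes pass the backslashes through unchanged, so the server
# receives my\$\$password and evaluates it to the literal my$$password.
printf '%s\n' 'my\$\$password'   # prints my\$\$password
```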
Windows-specific considerations
When specifying Windows file paths in configuration values, backslashes must also be escaped. For example, to specify the path C:\path\to\file, write it as C:\\path\\to\\file, or use forward slashes instead: C:/path/to/file
When using PowerShell and your values contain special characters like commas, use single quotes around double quotes:

.\kc.bat start --log-level='"INFO,org.hibernate:debug"'

PowerShell handles quotes differently: it interprets the quoted string before passing it to the kc.bat script.
1.2.8. Formats for environment variable keys with special characters
Non-alphanumeric characters in your configuration key must be converted to _ in the corresponding environment variable key.
Environment variables are converted back to normal option keys by lower-casing the name and replacing _ with -. In some keys, an _ instead maps back to a . character. Automatic mapping of the environment variable key to the option key may therefore not preserve the intended key.
For example, the option key kc.log-level-package.class_name converts to the environment variable key KC_LOG_LEVEL_PACKAGE_CLASS_NAME, which maps back to kc.log-level-package.class.name (with _ converted to .) rather than the original kc.log-level-package.class_name.
You have a couple of options in this case:
- Create an entry in your keycloak.conf file that references an environment variable of your choosing, e.g. kc.log-level-package.class_name=${CLASS_NAME_LEVEL}. See more on referencing environment variables in Section 1.2.3, “Format for referencing environment variables”.
- Or, in situations where modifying keycloak.conf may not be easy, you can use a pair of environment variables, KCKEY_UNIQUEIFIER=key and KC_UNIQUEIFIER=value, e.g. KCKEY_MYKEY=log-level-package.class_name and KC_MYKEY=debug, or KCKEY_LOG_LEVEL_PACKAGE_CLASS_NAME=log-level-package.class_name and KC_LOG_LEVEL_PACKAGE_CLASS_NAME=debug.
1.3. Starting Red Hat build of Keycloak
You can start Red Hat build of Keycloak in development mode or production mode. Each mode offers different defaults for its intended environment.
1.3.1. Starting Red Hat build of Keycloak in development mode
Use development mode to try out Red Hat build of Keycloak for the first time to get it up and running quickly. This mode offers convenient defaults for developers, such as for developing a new Red Hat build of Keycloak theme.
To start in development mode, enter the following command:
bin/kc.[sh|bat] start-dev
Defaults
Development mode sets the following default configuration:
- HTTP is enabled
- Strict hostname resolution is disabled
- Cache is set to local (No distributed cache mechanism used for high availability)
- Theme-caching and template-caching is disabled
1.3.2. Starting Red Hat build of Keycloak in production mode
Use production mode for deployments of Red Hat build of Keycloak in production environments. This mode follows a secure by default principle.
To start in production mode, enter the following command:
bin/kc.[sh|bat] start
Without further configuration, this command will not start Red Hat build of Keycloak; instead, it shows an error. This behavior is intentional, because Red Hat build of Keycloak follows a secure-by-default principle. Production mode expects a hostname to be set up and an HTTPS/TLS setup to be available when started.
Defaults
Production mode sets the following defaults:
- HTTP is disabled as transport layer security (HTTPS) is essential
- Hostname configuration is expected
- HTTPS/TLS configuration is expected
Before deploying Red Hat build of Keycloak in a production environment, make sure to follow the steps outlined in Configuring Red Hat build of Keycloak for production.
By default, example configuration options for the production mode are commented out in the default conf/keycloak.conf file to give you an idea of a production configuration.
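As an orientation, a minimal production configuration in conf/keycloak.conf might look like the following sketch. Every value is a placeholder, and the exact set of options depends on your environment:

```
# Hypothetical minimal production settings -- all values are placeholders
db=postgres
db-url-host=keycloak-postgres
db-username=keycloak
db-password=change_me
hostname=mykeycloak.acme.com
https-certificate-file=/path/to/certificate.pem
https-certificate-key-file=/path/to/key.pem
```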
1.4. Creating the initial admin user
You can create the initial admin user by using the web frontend, which you access using a local connection (localhost). You can instead create this user by using environment variables. Set KC_BOOTSTRAP_ADMIN_USERNAME=<username> for the initial admin username and KC_BOOTSTRAP_ADMIN_PASSWORD=<password> for the initial admin password.
Red Hat build of Keycloak parses these values at first startup to create an initial user with administrative rights. Once the first user with administrative rights exists, you can use the Admin Console or the command line tool kcadm.[sh|bat] to create additional users.
If the initial administrator already exists and the environment variables are still present at startup, an error message stating that the creation of the initial administrator failed is shown in the logs. Red Hat build of Keycloak ignores the values and starts up correctly.
1.5. Optimize the Red Hat build of Keycloak startup
We recommend optimizing Red Hat build of Keycloak to provide faster startup and better memory consumption before deploying Red Hat build of Keycloak in a production environment. This section describes how to apply Red Hat build of Keycloak optimizations for the best performance and runtime behavior.
1.5.1. Creating an optimized Red Hat build of Keycloak build
By default, when you use the start or start-dev command, Red Hat build of Keycloak runs a build command implicitly. This implicit build performs a set of optimizations to achieve an optimal startup time, but it also adds time to each startup. By running the build command explicitly, you avoid this overhead on subsequent starts.
1.5.1.1. First step: Run a build explicitly
To run a build, enter the following command:
bin/kc.[sh|bat] build <build-options>
This command shows the build options that you can enter. Red Hat build of Keycloak distinguishes between build options, which are usable with the build command, and configuration options, which are usable when starting the server.
For a non-optimized startup of Red Hat build of Keycloak, this distinction has no effect. However, if you run a build before the startup, only a subset of options is available to the build command. The restriction is due to the build options getting persisted into an optimized Red Hat build of Keycloak image. For example, configuration for credentials such as db-password is a runtime option and cannot be set at build time.
All build options are persisted in plain text. Do not store any sensitive data as build options. This restriction applies across all the available configuration sources, including the KeyStore Config Source; hence, we also do not recommend storing any build options in a Java KeyStore. As for configuration options, we recommend using the KeyStore Config Source primarily for storing sensitive data; for non-sensitive data you can use the remaining configuration sources.
Build options are marked in All configuration with a tool icon. To find available build options, enter the following command:
bin/kc.[sh|bat] build --help
Example: Run a build to set the database to PostgreSQL before startup
bin/kc.[sh|bat] build --db=postgres
1.5.1.2. Second step: Start Red Hat build of Keycloak using --optimized
After a successful build, you can start Red Hat build of Keycloak and turn off the default startup behavior by entering the following command:
bin/kc.[sh|bat] start --optimized <configuration-options>
The --optimized parameter tells Red Hat build of Keycloak to assume that a build has already been run. As a result, the server avoids checking for and running a build directly at startup, which saves time.
You can enter all configuration options at startup; these options are the ones in All configuration that are not marked with a tool icon.
- If a build option is found at startup with a value that is equal to the value used when entering the build, that option gets silently ignored when you use the --optimized parameter.
- If that option has a different value than the value used when the build was entered, a warning appears in the logs and the previously built value is used. For a new value to take effect, run a new build before starting.
Create an optimized build
The following example shows the creation of an optimized build followed by the use of the --optimized parameter when starting the server.
- Set the build option for the PostgreSQL database vendor using the build command:

bin/kc.[sh|bat] build --db=postgres

- Set the runtime configuration options for postgres in the conf/keycloak.conf file:

db-url-host=keycloak-postgres
db-username=keycloak
db-password=change_me
hostname=mykeycloak.acme.com
https-certificate-file

- Start the server with the --optimized parameter:

bin/kc.[sh|bat] start --optimized
You can achieve most optimizations to startup and runtime behavior by using the build command. Further, you can use the keycloak.conf file as a configuration source to avoid some of the processing that command-line parameters would otherwise require at startup.
1.6. Using system variables in the realm configuration
Some of the realm capabilities allow administrators to reference system variables such as environment variables and system properties when configuring the realm and its components.
By default, Red Hat build of Keycloak disallows using system variables; only those explicitly specified through the spi-admin--allowed-system-variables option can be referenced.
Start the server and expose a set of system variables to the server runtime
bin/kc.[sh|bat] start --spi-admin--allowed-system-variables=FOO,BAR
In future releases, this capability will be removed in favor of preventing any usage of system variables in the realm configuration.
1.7. Underlying concepts
This section gives an overview of the underlying concepts Red Hat build of Keycloak uses, especially when it comes to optimizing the startup.
Red Hat build of Keycloak uses the Quarkus framework and a re-augmentation/mutable-jar approach under the covers. This process is started when a build command is run.
The following are some optimizations performed by the build command:
- A new closed-world assumption about installed providers is created, meaning that no need exists to re-create the registry and initialize the factories at every Red Hat build of Keycloak startup.
- Configuration files are pre-parsed to reduce I/O when starting the server.
- Database specific resources are configured and prepared to run against a certain database vendor.
- By persisting build options into the server image, the server does not perform any additional step to interpret configuration options and (re)configure itself.
You can read more in the specific Quarkus guide.
Chapter 2. Configuring Red Hat build of Keycloak for production
Prepare Red Hat build of Keycloak for use in production.
A Red Hat build of Keycloak production environment provides secure authentication and authorization for deployments that range from on-premise deployments that support a few thousand users to deployments that serve millions of users.
This chapter describes the general areas of configuration required for a production ready Red Hat build of Keycloak environment. This information focuses on the general concepts instead of the actual implementation, which depends on your environment. The key aspects covered in this chapter apply to all environments, whether it is containerized, on-premise, GitOps, or Ansible.
2.1. TLS for secure communication
Red Hat build of Keycloak continually exchanges sensitive data, which means that all communication to and from Red Hat build of Keycloak requires a secure communication channel. To prevent several attack vectors, you enable HTTP over TLS, or HTTPS, for that channel.
To configure secure communication channels for Red Hat build of Keycloak, see Configuring TLS and Configuring outgoing HTTP requests.
To secure the cache communication for Red Hat build of Keycloak, see Configuring distributed caches.
2.2. The hostname for Red Hat build of Keycloak
In a production environment, Red Hat build of Keycloak instances usually run in a private network, but Red Hat build of Keycloak needs to expose certain public facing endpoints to communicate with the applications to be secured.
For details on the endpoint categories and instructions on how to configure the public hostname for them, see Configuring the hostname (v2).
2.2.1. Exposing the Red Hat build of Keycloak Administration APIs and UI on a different hostname
It is considered a best practice to expose the Red Hat build of Keycloak Administration REST API and Console on a different hostname or context-path than the one used for the public frontend URLs that are used e.g. by login flows. This separation ensures that the Administration interfaces are not exposed to the public internet, which reduces the attack surface.
Access to REST APIs needs to be blocked on the reverse proxy level, if they are not intended to be publicly exposed.
For details, see Configuring the hostname (v2).
2.3. Reverse proxy in a distributed environment
Apart from Configuring the hostname (v2), production environments usually include a reverse proxy / load balancer component. It separates and unifies access to the network used by your company or organization. For a Red Hat build of Keycloak production environment, this component is recommended.
For details on configuring proxy communication modes in Red Hat build of Keycloak, see Configuring a reverse proxy. That chapter also recommends which paths should be hidden from public access and which paths should be exposed so that Red Hat build of Keycloak can secure your applications.
2.4. Limit the number of queued requests
A production environment should protect itself from an overload situation, so that it responds to as many valid requests as possible and can continue regular operations once the situation returns to normal. One way of doing this is rejecting additional requests once a certain threshold is reached.
Load shedding should be implemented on all levels, including the load balancers in your environment. In addition to that, there is a feature in Red Hat build of Keycloak to limit the number of requests that can’t be processed right away and need to be queued. By default, there is no limit set. Set the option http-max-queued-requests to limit the number of queued requests; any request that exceeds the limit is rejected with a 503 Server not Available response.
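For example, the limit can be set in conf/keycloak.conf; the threshold of 1000 below is an arbitrary illustration, not a recommendation:

```
# Reject requests beyond 1000 queued entries with 503 Server not Available
http-max-queued-requests=1000
```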
2.5. Production grade database
The database used by Red Hat build of Keycloak is crucial for the overall performance, availability, reliability and integrity of Red Hat build of Keycloak. For details on how to configure a supported database, see Configuring the database.
2.6. Running Red Hat build of Keycloak in a cluster
To ensure that users can continue to log in when a Red Hat build of Keycloak instance goes down, a typical production environment contains two or more Red Hat build of Keycloak instances.
Red Hat build of Keycloak runs on top of JGroups and Infinispan, which provide a reliable, high-availability stack for a clustered scenario. In the default setup, communication between the nodes is encrypted using TLS.
To find out more about using multiple nodes, the different caches and an appropriate stack for your environment, see Configuring distributed caches.
2.6.1. Configure Firewall ports
A set of network ports must be open to allow a healthy network communication between Red Hat build of Keycloak servers. See Configuring distributed caches. It describes what ports need to be open and their usage.
2.7. Configure Red Hat build of Keycloak Server with IPv4 or IPv6
The system properties java.net.preferIPv4Stack and java.net.preferIPv6Addresses are used to configure the JVM for use with IPv4 or IPv6 addresses.
By default, Red Hat build of Keycloak is accessible via IPv4 and IPv6 addresses at the same time. In order to run only with IPv4 addresses, you need to specify the property java.net.preferIPv4Stack=true.
These system properties are conveniently set by the JAVA_OPTS_APPEND environment variable. For example, to run only with IPv4 addresses:
export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=true"
To set up the server for IPv6 only, set an environment variable as follows for the distributed caches to form a cluster:
export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true"
See Configuring distributed caches for more details.
Chapter 3. Bootstrapping and recovering an admin account
Bootstrap Red Hat build of Keycloak and recover access by creating a temporary admin account.
3.1. A temporary admin account
A user or service admin account created using one of the methods described below is temporary. This means the account should exist only for the duration necessary to perform operations needed to gain permanent and more secure admin access. After that, the account needs to be removed manually. Various UI/UX elements, such as the Administration Console warning banner, labels, and log messages, will indicate to a Red Hat build of Keycloak administrator that the account is temporary.
3.2. Bootstrapping a temporary admin account at Red Hat build of Keycloak startup
The Red Hat build of Keycloak start and start-dev commands support options for bootstrapping both a temporary admin user and a temporary admin service account:
bin/kc.[sh|bat] start --bootstrap-admin-username tmpadm --bootstrap-admin-password pass
bin/kc.[sh|bat] start-dev --bootstrap-admin-client-id tmpadm --bootstrap-admin-client-secret secret
The username or client ID values can be omitted; see the Section 3.5, “Default values” section below for more information.
The purpose of these options is solely for bootstrapping temporary admin accounts. These accounts will be created only during the initial start of the Red Hat build of Keycloak server when the master realm doesn’t exist yet. The accounts are always created in the master realm. For recovering lost admin access, use the dedicated command described in the sections below.
3.3. Bootstrapping an admin user or service account using the dedicated command
The bootstrap-admin command can be used to create a temporary admin user or a temporary admin service account, for example to recover lost admin access.
Additionally, it is strongly recommended to run the dedicated command with the same options that the Red Hat build of Keycloak server is started with (e.g., db options).
If you have built an optimized version of Red Hat build of Keycloak with the build command, use the --optimized parameter to prevent the server from entering a new build phase. If you do not use --optimized, the bootstrap-admin command performs an implicit build first.
3.3.1. Create an admin user
To create a temporary admin user, execute the following command:
bin/kc.[sh|bat] bootstrap-admin user
If no other parameters are specified and/or no corresponding environment variables are set, the user is prompted to enter the required information. The username value can be omitted to use the default values. For more information, see the Section 3.5, “Default values” and Section 3.7, “Environment variables” sections below.
Alternatively, the parameters can be directly specified in the command:
bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --password:env PASS_VAR
This command creates a temporary admin user with the username tmpadm and a password retrieved from the PASS_VAR environment variable.
3.3.2. Create a service account
In automated scenarios, a temporary admin service account can be a more suitable alternative to a temporary admin user.
To create a temporary admin service account, execute the following command:
bin/kc.[sh|bat] bootstrap-admin service
Similarly, if no corresponding environment variables or additional parameters are set, the user will be prompted to enter the required information. The client ID value can be omitted to use the default values. For more information, see the Section 3.5, “Default values” and Section 3.7, “Environment variables” sections below.
Alternatively, the parameters can be directly specified in the command:
bin/kc.[sh|bat] bootstrap-admin service --client-id tmpclient --client-secret:env=SECRET_VAR
This command creates a temporary admin service account with the client ID tmpclient and a secret retrieved from the SECRET_VAR environment variable.
3.4. Regaining access to the realm with increased security
Passwordless, OTP, or other advanced authentication methods can be enforced for a realm with lost admin access. In such a case, the admin service account needs to be created to recover lost admin access to the realm. After the service account is created, authentication against the Red Hat build of Keycloak instance is required to perform all necessary operations:
bin/kcadm.[sh|bat] config credentials --server http://localhost:8080 --realm master --client <service_account_client_name> --secret <service_account_secret>
Next, retrieve the credentialId of the credential you want to remove. The following command lists the user's credentials; find the CredentialRepresentation with type otp (in our example):
bin/kcadm.[sh|bat] get users/{userId}/credentials -r {realm-name}
Finally, the retrieved ID can be used to remove the advanced authentication method (in our case, OTP):
bin/kcadm.[sh|bat] delete users/{userId}/credentials/{credentialId} -r {realm-name}
3.5. Default values
For both the startup and dedicated command scenarios, the username and client ID are optional and default to temp-admin.
3.6. Disable the parameters prompt
To disable the prompt for the parameters, add the --no-prompt parameter:
bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --no-prompt
If no corresponding environment variable is set, the command will fail with an error message indicating that the required password parameter is missing.
The --no-prompt parameter can also be used to intentionally omit optional parameters. For example:
bin/kc.[sh|bat] bootstrap-admin user --password:env PASS_VAR --no-prompt
This creates a temporary admin user with the default username without prompting for confirmation. For more information, see the Section 3.5, “Default values” section above.
3.7. Environment variables
For the bootstrap-admin user command, environment variables can be specified for both the username and the password:
bin/kc.[sh|bat] bootstrap-admin user --username:env <YourUsernameEnv> --password:env <YourPassEnv>
For the bootstrap-admin service command, environment variables can likewise be specified; the client ID can be omitted to use the default temp-admin:
bin/kc.[sh|bat] bootstrap-admin service --client-id:env <YourClientIdEnv> --client-secret:env <YourSecretEnv>
Chapter 4. Directory Structure
Understand the purpose of the directories under the installation root.
4.1. Installation Locations
If you are installing from a zip file, then by default there will be an install root directory of rhbk-26.4.10, which may be created anywhere. In the containerized distribution, the install root is /opt/keycloak.
In the rest of the documentation, relative paths are understood to be relative to the install root. For example, conf/file.xml means <install root>/conf/file.xml.
4.2. Directory Structure
Under the Red Hat build of Keycloak install root there exist a number of directories:
- bin/ - contains all the shell scripts for the server, including kc.sh|bat, kcadm.sh|bat, and kcreg.sh|bat
- client/ - used internally
- conf/ - directory used for configuration files, including keycloak.conf - see Configuring Red Hat build of Keycloak. Many options for specifying a configuration file expect paths relative to this directory.
  - truststores/ - default path used by the truststore-paths option - see Configuring trusted certificates
- data/ - directory for the server to store runtime information, such as transaction logs
  - logs/ - default directory for file logging - see Configuring logging
- lib/ - used internally
- providers/ - directory for user provided dependencies - see Configuring providers for extending the server and Configuring the database for an example of adding a JDBC driver.
- themes/ - directory for customizations to the Admin Console - see Developing Themes
Chapter 5. Running Red Hat build of Keycloak in a container
Run Red Hat build of Keycloak from a container image.
This chapter describes how to optimize and run the Red Hat build of Keycloak container image to provide the best experience running a container.
This chapter applies only to building an image that you run in an OpenShift environment. Only an OpenShift environment is supported for this image; running it in other Kubernetes distributions is not supported.
5.1. Creating a customized and optimized container image
The default Red Hat build of Keycloak container image ships ready to be configured and optimized.
For the best startup of your Red Hat build of Keycloak container, build an image by running the build step during the container build. This step saves time in every subsequent start of the container.
5.1.1. Writing your optimized Red Hat build of Keycloak Containerfile
The following Containerfile creates a pre-configured, optimized Red Hat build of Keycloak image:

Containerfile:
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4 AS builder
# Enable health and metrics support
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# Configure a database vendor
ENV KC_DB=postgres
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
RUN /opt/keycloak/bin/kc.sh build
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
COPY --from=builder /opt/keycloak/ /opt/keycloak/
# change these values to point to a running postgres instance
ENV KC_DB=postgres
ENV KC_DB_URL=<DBURL>
ENV KC_DB_USERNAME=<DBUSERNAME>
ENV KC_DB_PASSWORD=<DBPASSWORD>
ENV KC_HOSTNAME=localhost
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
The build process includes multiple stages:
- Run the build command to set server build options and create an optimized image.
- The files generated by the build stage are copied into a new image.
- In the final image, additional configuration options for the hostname and database are set so that you don’t need to set them again when running the container.
- In the entrypoint, kc.sh enables access to all the distribution sub-commands.
To install custom providers, you just need to define a step to include the JAR file(s) in the /opt/keycloak/providers directory. This step must be placed before the line that runs the build command, as in the following example:
# An example build step that downloads a JAR file from a URL and adds it to the providers directory
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4 as builder
...
# Add the provider JAR file to the providers directory
ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar
...
# Context: RUN the build command
RUN /opt/keycloak/bin/kc.sh build
5.1.2. Installing additional RPM packages
If you try to install new software in a stage `FROM registry.redhat.io/rhbk/keycloak-rhel9`, you will notice that `microdnf`, `dnf`, and even `rpm` are not installed. Also, very few packages are available, only `bash` and `coreutils`.
First, consider whether your use case can be implemented in a different way, and so avoid installing new RPMs into the final container:
- A `RUN curl` instruction in your Containerfile can be replaced with `ADD`, since that instruction natively supports remote URLs.
- Some common CLI tools can be replaced by creative use of the Linux filesystem. For example, `ip addr show tap0` becomes `cat /sys/class/net/tap0/address`.
- Tasks that need RPMs can be moved to a former stage of an image build, and the results copied across instead.
Here is an example of running `update-ca-trust` in a former build stage, then copying the result forward:
FROM registry.access.redhat.com/ubi9 AS ubi-micro-build
COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt
RUN update-ca-trust
FROM registry.redhat.io/rhbk/keycloak-rhel9
COPY --from=ubi-micro-build /etc/pki /etc/pki
It is possible to install new RPMs if absolutely required, following this two-stage pattern established by ubi-micro:
FROM registry.access.redhat.com/ubi9 AS ubi-micro-build
RUN mkdir -p /mnt/rootfs
RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && \
dnf --installroot /mnt/rootfs clean all && \
rpm --root /mnt/rootfs -e --nodeps setup
FROM registry.redhat.io/rhbk/keycloak-rhel9
COPY --from=ubi-micro-build /mnt/rootfs /
This approach uses a chroot, `/mnt/rootfs`, so that only the packages you need are included in the final image.
Some packages have a large tree of dependencies. By installing new RPMs you may unintentionally increase the container’s attack surface. Check the list of installed packages carefully.
5.1.3. Custom ENTRYPOINT shell scripts
If you use a custom entry point script, start Red Hat build of Keycloak with `exec`:
Correct approach for an ENTRYPOINT shell script
#!/bin/bash
# (add your custom logic here)
# Run the 'exec' command as the last step of the script.
# As it replaces the current shell process, no additional shell commands will run after the 'exec' command.
exec /opt/keycloak/bin/kc.sh start "$@"
Without `exec`, the shell process, not the server, receives the `SIGTERM` signal on shutdown, so the server cannot shut down gracefully.
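A minimal sketch of wiring such a script into an image; the script name and location are illustrative, not part of the product:

```dockerfile
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
# Copy the custom entry point script (illustrative name) and make it executable
COPY --chown=keycloak:keycloak --chmod=755 docker-entrypoint.sh /opt/keycloak/bin/docker-entrypoint.sh
# The script itself must end with: exec /opt/keycloak/bin/kc.sh start "$@"
ENTRYPOINT ["/opt/keycloak/bin/docker-entrypoint.sh"]
```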
5.1.4. Building the container image
To build the actual container image, run the following command from the directory containing your Containerfile:
podman build . -t mykeycloak -f Containerfile
Podman can be used only for creating or customizing images. Podman is not supported for running Red Hat build of Keycloak in production environments.
5.1.5. Starting the optimized Red Hat build of Keycloak container image
To start the image, run:
podman run --name mykeycloak -p 8443:8443 -p 9000:9000 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
mykeycloak \
start --optimized --hostname=localhost
Red Hat build of Keycloak starts in production mode, using only secured HTTPS communication, and is available on https://localhost:8443.
Health check endpoints are available at https://localhost:9000/health, https://localhost:9000/health/ready and https://localhost:9000/health/live.
Opening up https://localhost:9000/metrics leads to a page containing operational metrics that could be used by your monitoring solution.
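Assuming the container above is running with the demonstration self-signed certificate, you can check these endpoints from the host with curl; this is a sketch, and `--insecure` is acceptable here only because the keystore was generated for demonstration purposes:

```shell
# Readiness probe on the management port
curl --insecure https://localhost:9000/health/ready

# Operational metrics
curl --insecure https://localhost:9000/metrics
```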
5.1.6. Known issues with Docker
- If a `RUN dnf install` command seems to be taking an excessive amount of time, then likely your Docker systemd service has the file limit setting `LimitNOFILE` configured incorrectly. Either update the service configuration to use a better value, such as 1024000, or directly use `ulimit` in the `RUN` command:
...
RUN ulimit -n 1024000 && dnf install --installroot ...
...
- If you are including provider JARs and your container fails a `start --optimized` with a notification that a provider JAR has changed, this is due to Docker truncating or otherwise modifying file modification timestamps from what the `build` command recorded to what is seen at runtime. In this case you will need to force the JAR files to use a known timestamp of your choosing with a `touch` command prior to running the `build`:
...
# ADD or copy one or more provider jars
ADD --chown=keycloak:keycloak --chmod=644 some-jar.jar /opt/keycloak/providers/
...
RUN touch -m --date=@1743465600 /opt/keycloak/providers/*
RUN /opt/keycloak/bin/kc.sh build
...
5.2. Exposing the container to a different port
By default, the server listens for `http` and `https` requests using the ports `8080` and `8443`, respectively.
If you want to expose the container using a different port, you need to set the `hostname` option accordingly:
Exposing the container using a port other than the default ports
podman run --name mykeycloak -p 3000:8443 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
mykeycloak \
start --optimized --hostname=https://localhost:3000
By setting the `hostname` option to the full https://localhost:3000 URL, you make sure the server uses this URL in all the links and tokens it issues.
5.3. Trying Red Hat build of Keycloak in development mode
The easiest way to try Red Hat build of Keycloak from a container for development or testing purposes is to use development mode, via the `start-dev` command:
podman run --name mykeycloak -p 127.0.0.1:8080:8080 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
registry.redhat.io/rhbk/keycloak-rhel9:26.4 \
start-dev
Invoking this command starts the Red Hat build of Keycloak server in development mode.
This mode should be strictly avoided in production environments because it has insecure defaults. For more information about running Red Hat build of Keycloak in production, see Configuring Red Hat build of Keycloak for production.
5.4. Running a standard Red Hat build of Keycloak container
In keeping with concepts such as immutable infrastructure, containers need to be re-provisioned routinely. In these environments, you need containers that start fast, therefore you need to create an optimized image as described in the preceding section. However, if your environment has different requirements, you can run a standard Red Hat build of Keycloak image by just running the `start` command:
podman run --name mykeycloak -p 127.0.0.1:8080:8080 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
registry.redhat.io/rhbk/keycloak-rhel9:26.4 \
start \
--hostname=localhost --http-enabled=true \
--db=postgres --features=token-exchange \
--db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> \
--https-key-store-file=<file> --https-key-store-password=<password>
Running this command starts a Red Hat build of Keycloak server that detects and applies the build options first. In the example, the line `--db=postgres --features=token-exchange` sets the database vendor and enables the token exchange feature.
Red Hat build of Keycloak then starts up and applies the configuration for the specific environment. This approach significantly increases startup time and creates an image that is mutable, which is not the best practice.
5.5. Provide initial admin credentials when running in a container
Red Hat build of Keycloak only allows creating the initial admin user from a local network connection. This is not the case when running in a container, so you have to provide the following environment variables when you run the image:
# setting the admin username
-e KC_BOOTSTRAP_ADMIN_USERNAME=<admin-user-name>
# setting the initial password
-e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me
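Putting the two variables into a complete invocation might look as follows; the image name and credentials are placeholders:

```shell
podman run --name mykeycloak -p 8443:8443 \
  -e KC_BOOTSTRAP_ADMIN_USERNAME=admin \
  -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
  mykeycloak \
  start --optimized --hostname=localhost
```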
5.6. Importing a realm on startup
The Red Hat build of Keycloak containers have a directory `/opt/keycloak/data/import`. If you put one or more import files in that directory via a volume mount or other means, and add the startup argument `--import-realm`, the Red Hat build of Keycloak container imports that data on startup.
podman run --name keycloak_unoptimized -p 127.0.0.1:8080:8080 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
-v /path/to/realm/data:/opt/keycloak/data/import \
registry.redhat.io/rhbk/keycloak-rhel9:26.4 \
start-dev --import-realm
Feel free to join the open GitHub Discussion around enhancements of the admin bootstrapping process.
5.7. Specifying different memory settings
The Red Hat build of Keycloak container, instead of specifying hardcoded values for the initial and maximum heap size, uses values relative to the total memory of the container. This behavior is achieved by the JVM options `-XX:MaxRAMPercentage=70` and `-XX:InitialRAMPercentage=50`.
The `-XX:MaxRAMPercentage` option represents the maximum heap size as 70% of the total container memory. The `-XX:InitialRAMPercentage` option represents the initial heap size as 50% of the total container memory.
As the heap size is dynamically calculated based on the total container memory, you should always set the memory limit for the container. Previously, the maximum heap size was set to 512 MB, and in order to approach similar values, you should set the memory limit to at least 750 MB. For smaller production-ready deployments, the recommended memory limit is 2 GB.
The JVM options related to the heap might be overridden by setting the environment variable `JAVA_OPTS_KC_HEAP`. You can find the default values of `JAVA_OPTS_KC_HEAP` in the source code of the `kc.sh` or `kc.bat` script.
For example, you can specify the environment variable and memory limit as follows:
podman run --name mykeycloak -p 127.0.0.1:8080:8080 -m 1g \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_KC_HEAP="-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65" \
registry.redhat.io/rhbk/keycloak-rhel9:26.4 \
start-dev
If the memory limit is not set, the memory consumption rapidly increases as the heap size can grow up to 70% of the total container memory. Once the JVM allocates the memory, it is returned to the OS reluctantly with the current Red Hat build of Keycloak GC settings.
5.8. Relevant options
Chapter 6. Configuring TLS
Configure Red Hat build of Keycloak's HTTPS certificates for incoming and outgoing requests.
Transport Layer Security (TLS) is crucial to exchange data over a secured channel. For production environments, you should never expose Red Hat build of Keycloak endpoints through HTTP, as sensitive data is at the core of what Red Hat build of Keycloak exchanges with other applications. In this chapter, you will learn how to configure Red Hat build of Keycloak to use HTTPS/TLS.
Red Hat build of Keycloak can be configured to load the required certificate infrastructure using files in PEM format or from a Java Keystore. When both alternatives are configured, the PEM files take precedence over the Java Keystores.
6.1. Providing certificates in PEM format
When you use a pair of matching certificate and private key files in PEM format, you configure Red Hat build of Keycloak to use them by running the following command:
bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem
Red Hat build of Keycloak creates a keystore out of these files in memory and uses this keystore afterwards.
6.2. Providing a Keystore
When no keystore file is explicitly configured, but `http-enabled` is set to `false`, Red Hat build of Keycloak looks for a `conf/server.keystore` file.
As an alternative, you can use an existing keystore by running the following command:
bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file
Recognized file extensions for a keystore:
- `.p12`, `.pkcs12`, and `.pfx` for a pkcs12 file
- `.jks` and `.keystore` for a jks file
- `.pem`, `.key`, and `.crt` for a pem file
If your keystore does not have an extension matching its file type, you will also need to set the `https-key-store-type` option.
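For example, if a PKCS12 keystore was saved without one of the recognized extensions, the type can be given explicitly; the file name here is illustrative:

```shell
bin/kc.sh start --https-key-store-file=/path/to/keystore.bin --https-key-store-type=PKCS12
```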
6.2.1. Setting the Keystore password
You can set a secure password for your keystore using the `https-key-store-password` option:
bin/kc.[sh|bat] start --https-key-store-password=<value>
If no password is set, the default password `password` is used.
6.2.1.1. Securing credentials
Avoid setting a password in plaintext by using the CLI or adding it to `conf/keycloak.conf`. Instead use good practices such as using a vault or mounted secret.
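One way to keep the password out of the command line and out of `conf/keycloak.conf` is to pass it through the corresponding environment variable. This is a sketch; how the variable gets populated, for example from a mounted secret, depends on your environment, and the secret path is illustrative:

```shell
# KC_HTTPS_KEY_STORE_PASSWORD is the environment-variable form of --https-key-store-password
export KC_HTTPS_KEY_STORE_PASSWORD="$(cat /run/secrets/keystore-password)"  # illustrative secret path
bin/kc.sh start --https-key-store-file=/path/to/existing-keystore-file
```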
6.3. Configuring TLS protocols
By default, Red Hat build of Keycloak does not enable deprecated TLS protocols. If your client supports only deprecated protocols, consider upgrading the client. However, as a temporary work-around, you can enable deprecated protocols by running the following command:
bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]
For example, to enable only TLSv1.3, use a command such as the following:
kc.sh start --https-protocols=TLSv1.3
6.4. Switching the HTTPS port
Red Hat build of Keycloak listens for HTTPS traffic on port `8443` by default. To use a different port, run:
6.5. Certificate and Key Reloading
By default, Red Hat build of Keycloak reloads the certificates, keys, and keystores specified in `https-*` options every hour. For environments where your server keys may need frequent rotation, this allows that to happen without a server restart. You may override the default via the `https-certificates-reload-period` option. Its value may be a `java.time.Duration` string, an integer number of seconds, or an integer followed by one of the units `ms`, `h`, `m`, `s`, or `d`. A value of `-1` disables reloading.
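For example, to reload rotated certificates every 30 minutes instead of every hour, or to disable reloading entirely:

```shell
# Check the https-* files for changes every 30 minutes
bin/kc.sh start --https-certificates-reload-period=30m

# Disable certificate reloading
bin/kc.sh start --https-certificates-reload-period=-1
```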
6.6. Relevant options
6.6.1. Management server
Chapter 7. Configuring the hostname (v2)
Configure the frontend and backchannel endpoints exposed by Red Hat build of Keycloak.
7.1. The importance of setting the hostname option
By default, Red Hat build of Keycloak mandates the configuration of the `hostname` option and does not dynamically resolve URLs.
Red Hat build of Keycloak freely discloses its own URLs, for instance through the OIDC Discovery endpoint, or as part of the password reset link in an email. If the hostname was dynamically interpreted from a hostname header, it could provide a potential attacker with an opportunity to manipulate a URL in the email, redirect a user to the attacker’s fake domain, and steal sensitive data such as action tokens, passwords, etc.
By explicitly setting the `hostname` option, you avoid this situation:
bin/kc.[sh|bat] start --hostname my.keycloak.org
The example starts the Red Hat build of Keycloak instance in production mode, which requires a public certificate and private key in order to secure communications. For more information, refer to Configuring Red Hat build of Keycloak for production.
7.2. Defining specific parts of the hostname option
As demonstrated in the previous example, the scheme and port are not explicitly required. In such cases, Red Hat build of Keycloak automatically handles these aspects. For instance, the server would be accessible at https://my.keycloak.org:8443. If the server should instead be accessible on the default port `443`, specify the full URL in the `hostname` option:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org
Similarly, your reverse proxy might expose Red Hat build of Keycloak at a different context path. It is possible to configure Red Hat build of Keycloak to reflect that via the `hostname` and `hostname-admin` options:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org:123/auth
7.3. Utilizing an internal URL for communication among clients
Red Hat build of Keycloak has the capability to offer a separate URL for backchannel requests, enabling internal communication while maintaining the use of a public URL for frontchannel requests. Moreover, the backchannel is dynamically resolved based on incoming headers. Consider the following example:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true
In this manner, your applications, referred to as clients, can connect with Red Hat build of Keycloak through your local network, while the server remains publicly accessible at https://my.keycloak.org.
7.4. Using edge TLS termination
As you can observe, the HTTPS protocol is the default choice, adhering to Red Hat build of Keycloak's commitment to security best practices. However, Red Hat build of Keycloak also provides the flexibility for users to opt for HTTP if necessary. This can be achieved simply by enabling the HTTP listener; consult Configuring TLS for details. With an edge TLS-termination proxy you can start the server as follows:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --http-enabled true
The result of this configuration is that you can continue to access Red Hat build of Keycloak at https://my.keycloak.org while the proxy communicates with it over unencrypted HTTP on port `8080`.
7.5. Using a reverse proxy
When a proxy is forwarding http or reencrypted TLS requests, the `proxy-headers` option should be set. If either `forwarded` or `xforwarded` is selected, make sure your reverse proxy properly sets and overwrites the `Forwarded` or `X-Forwarded-*` headers respectively.
7.5.1. Fully dynamic URLs
For example, if your reverse proxy correctly sets the Forwarded header and you don't want to hardcode the hostname, Red Hat build of Keycloak can accommodate this. You simply need to initiate the server as follows:
bin/kc.[sh|bat] start --hostname-strict false --proxy-headers forwarded
With this configuration, the server respects the value set by the Forwarded header. This also implies that all endpoints are dynamically resolved.
7.5.2. Partially dynamic URLs
The `proxy-headers` option can be used in conjunction with a statically defined `hostname`:
bin/kc.[sh|bat] start --hostname my.keycloak.org --proxy-headers xforwarded
In this case, the scheme and port are resolved dynamically from the X-Forwarded-* headers, while the hostname is statically defined as `my.keycloak.org`.
7.5.3. Fixed URLs
The `proxy-headers` option can also be combined with a fully static `hostname` URL:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --proxy-headers xforwarded
In this case, while nothing is dynamically resolved from the X-Forwarded-* headers, the X-Forwarded-* headers are used to determine the correct origin of the request.
7.6. Exposing the Administration Console on a separate hostname
If you wish to expose the Admin Console on a different host, you can do so with the following command:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443
This allows you to access Red Hat build of Keycloak at https://my.keycloak.org and the Administration Console at https://admin.my.keycloak.org:8443, while the frontend URL remains https://my.keycloak.org.
Keep in mind that hostname and proxy options do not change the ports on which the server listens. Instead, they change only the ports of static resources like JavaScript and CSS links, OIDC well-known endpoints, redirect URIs, and so on that are used in front of the proxy. You need to use HTTP configuration options to change the actual ports the server is listening on. Refer to All configuration for details.
Using the `hostname-admin` option does not prevent accessing the Administration REST API through the frontend URL specified by `hostname`. If you want to restrict access to the API, do so at the reverse proxy level. The Administration Console implicitly accesses the API using the URL specified in `hostname-admin`.
7.7. Background - server endpoints
Red Hat build of Keycloak exposes several endpoints, each with a different purpose. They are typically used for communication among applications or for managing the server. We recognize 3 main endpoint groups:
- Frontend
- Backend
- Administration
If you want to work with either of these endpoints, you need to set the base URL. The base URL consists of several parts:
- a scheme (e.g. https protocol)
- a hostname (e.g. example.keycloak.org)
- a port (e.g. 8443)
- a path (e.g. /auth)
The base URL for each group has an important impact on how tokens are issued and validated, on how links are created for actions that require the user to be redirected to Red Hat build of Keycloak (for example, when resetting passwords through email links), and, most importantly, on how applications discover these endpoints when fetching the OpenID Connect Discovery Document from `realms/{realm-name}/.well-known/openid-configuration`.
7.7.1. Frontend
Users and applications use the frontend URL to access Red Hat build of Keycloak through a front channel. The front channel is a publicly accessible communication channel. For example browser-based flows (accessing the login page, clicking on the link to reset a password or binding the tokens) can be considered as frontchannel requests.
In order to make Red Hat build of Keycloak accessible via the frontend URL, you need to set the `hostname` option:
bin/kc.[sh|bat] start --hostname my.keycloak.org
7.7.2. Backend
The backend endpoints are those accessible through a public domain or through a private network. They’re related to direct backend communication between Red Hat build of Keycloak and a client (an application secured by Red Hat build of Keycloak). Such communication might be over a local network, avoiding a reverse proxy. Examples of the endpoints that belong to this group are the authorization endpoint, token and token introspection endpoint, userinfo endpoint, JWKS URI endpoint, etc.
The default value of `hostname-backchannel-dynamic` is `false`, which means that backchannel URLs are the same as the frontend URLs. To resolve backchannel URLs dynamically from incoming request headers, set the option to true:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true
Note that the `hostname` option must be set to a full URL in this case.
7.7.3. Administration
Similarly to the base frontend URL, you can also set the base URL for resources and endpoints of the administration console. The server exposes the administration console and static resources using a specific URL. This URL is used for redirect URLs, loading resources (CSS, JS), the Administration REST API, and so on. It can be done by setting the `hostname-admin` option:
bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443
Again, the `hostname` option must be set to a full URL in this case.
7.8. Sources for resolving the URL
As indicated in the previous sections, URLs can be resolved in several ways: they can be dynamically generated, hardcoded, or a combination of both:
Dynamic from an incoming request:
- Host header, scheme, server port, context path
- Proxy-set headers: `Forwarded` and `X-Forwarded-*`
Hardcoded:
- Server-wide config (e.g. `hostname`, `hostname-admin`, etc.)
- Realm configuration for frontend URL
7.9. Validations
- The `hostname` URL and `hostname-admin` URL are verified that a full URL is used, incl. scheme and hostname. The port is validated only if present; otherwise the default port for the given protocol is assumed (80 or 443).
- In production profile (`kc.sh|bat start`), either `--hostname` or `--hostname-strict false` must be explicitly configured.
  - This does not apply for dev profile (`kc.sh|bat start-dev`), where `--hostname-strict false` is the default value.
- If `--hostname` is not configured:
  - `hostname-backchannel-dynamic` must be set to false.
  - `hostname-strict` must be set to false.
- If `hostname-admin` is configured, `hostname` must be set to a URL (not just a hostname). Otherwise Red Hat build of Keycloak would not know what the correct frontend URL (incl. port etc.) is when accessing the Admin Console.
- If `hostname-backchannel-dynamic` is set to true, `hostname` must be set to a URL (not just a hostname). Otherwise Red Hat build of Keycloak would not know what the correct frontend URL is when being accessed via the dynamically resolved backchannel.
Additionally, if `hostname` is configured, then `hostname-strict` is ignored.
7.10. Troubleshooting
To troubleshoot the hostname configuration, you can use a dedicated debug tool, which can be enabled as follows:
Red Hat build of Keycloak configuration:
bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true
After Red Hat build of Keycloak starts properly, open your browser and go to:
http://mykeycloak:8080/realms/<your-realm>/hostname-debug
7.11. Relevant options
Chapter 8. Configuring a reverse proxy
Configure Red Hat build of Keycloak with a reverse proxy, API gateway, or load balancer.
Distributed environments frequently require the use of a reverse proxy. Red Hat build of Keycloak offers several options to securely integrate with such environments.
8.1. Port to be proxied
Red Hat build of Keycloak runs on the following ports by default:
- `8443` (`8080` when you enable HTTP explicitly by `--http-enabled=true`)
- `9000`
The port `8443` (or `8080` if HTTP is enabled) serves the server itself: the realms, the consoles, and the OIDC and SAML endpoints.
The port `9000` is used for management, which includes the health check and metrics endpoints.
You only need to proxy port `8443` (or `8080`). You do not need to proxy the management port `9000` unless you want to expose the health and metrics endpoints through the proxy.
8.2. Configure the reverse proxy headers
Red Hat build of Keycloak will parse the reverse proxy headers based on the `proxy-headers` option:
- By default, if the option is not specified, no reverse proxy headers are parsed. This should be used when no proxy is in use or with https passthrough.
- `forwarded` enables parsing of the `Forwarded` header as per RFC7239.
- `xforwarded` enables parsing of the non-standard `X-Forwarded-*` headers, such as `X-Forwarded-For`, `X-Forwarded-Proto`, `X-Forwarded-Host`, and `X-Forwarded-Port`.
If you are using a reverse proxy for anything other than https passthrough and do not set the `proxy-headers` option, then effectively your Red Hat build of Keycloak configuration is broken.
For example:
bin/kc.[sh|bat] start --proxy-headers forwarded
If either `forwarded` or `xforwarded` is selected, make sure your reverse proxy properly sets and overwrites the `Forwarded` or `X-Forwarded-*` headers respectively. To set these headers, consult the documentation for your reverse proxy. Do not use `forwarded` and `xforwarded` at the same time.
Take extra precautions to ensure that the client address is properly set by your reverse proxy via the `Forwarded` or `X-Forwarded-For` headers. If this header is incorrectly configured, rogue clients can set it and trick Red Hat build of Keycloak into thinking the client is connecting from a different IP address than the actual one.
When using the `xforwarded` setting, the `X-Forwarded-Port` header takes precedence over any port included in `X-Forwarded-Host`.
If the TLS connection is terminated at the reverse proxy (edge termination), enabling HTTP through the `http-enabled` setting is required.
8.3. Different context-path on reverse proxy
Red Hat build of Keycloak assumes it is exposed through the reverse proxy under the same context path as Red Hat build of Keycloak is configured for. By default, Red Hat build of Keycloak is exposed through the root (`/`). If the proxy instead exposes it under a different path, such as `/auth`, set the `hostname` option to that URL, for example `--hostname=https://my.keycloak.org/auth`, so that generated URLs include the `/auth` context path.
For more details on exposing Red Hat build of Keycloak on different hostname or context-path incl. Administration REST API and Console, see Configuring the hostname (v2).
Alternatively, you can also change the context path of Red Hat build of Keycloak itself to match the context path of the reverse proxy by using the `http-relative-path` option.
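For example, if the proxy exposes the server under `/auth`, the server's own context path can be changed to match; this is a sketch:

```shell
# Serve Red Hat build of Keycloak itself under /auth to match the proxy's context path
bin/kc.sh start --http-relative-path=/auth
```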
8.4. Enable sticky sessions
A typical cluster deployment consists of the load balancer (reverse proxy) and 2 or more Red Hat build of Keycloak servers on a private network. For performance purposes, it may be useful if the load balancer forwards all requests related to a particular browser session to the same Red Hat build of Keycloak backend node.
The reason is that Red Hat build of Keycloak uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with a limited number of owners. That means that session-related data is stored only on some cluster nodes, and the other nodes need to look up the data remotely if they want to access it.
For example, if the authentication session with ID 123 is saved in the Infinispan cache on node1, and node2 then needs to look up this session, it must send a request to node1 over the network to return the particular session entity.
It is beneficial if a particular session entity is always available locally, which can be done with the help of sticky sessions. The workflow in a cluster environment with a public frontend load balancer and two backend Red Hat build of Keycloak nodes can be like this:
- The user sends an initial request to see the Red Hat build of Keycloak login screen.
- The request is served by the frontend load balancer, which forwards it to some random node (e.g. node1). Strictly speaking, the node doesn't need to be random; it can be chosen according to some other criteria (client IP address etc.). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
- Red Hat build of Keycloak creates an authentication session with a random ID (e.g. 123) and saves it to the Infinispan cache.
- The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID. See the Infinispan documentation for more details. Let's assume that Infinispan assigned node2 to be the owner of this session.
- Red Hat build of Keycloak creates the cookie AUTH_SESSION_ID with a format like <session-id>.<owner-node-id>. In our example case, it will be 123.node2.
- The response is returned to the user with the Red Hat build of Keycloak login screen and the AUTH_SESSION_ID cookie in the browser.
From this point, it is beneficial if the load balancer forwards all subsequent requests to node2, as this is the node that owns the authentication session with ID 123, and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on node2 because it has the same ID 123.
Sticky sessions are not mandatory for the cluster setup, but they are good for performance for the reasons mentioned above. You need to configure your load balancer to stick over the AUTH_SESSION_ID cookie. The appropriate procedure depends on your load balancer.
If your proxy supports session affinity without processing cookies from backend nodes, you should set the `spi-sticky-session-encoder--infinispan--should-attach-route` option to `false` to avoid attaching the node to the cookie:
bin/kc.[sh|bat] start --spi-sticky-session-encoder--infinispan--should-attach-route=false
By default, the `spi-sticky-session-encoder--infinispan--should-attach-route` option value is `true`, so that the route is attached to the cookie.
8.5. Exposed path recommendations
When using a reverse proxy, Red Hat build of Keycloak only requires certain paths to be exposed. The following table shows the recommended paths to expose.
| Red Hat build of Keycloak Path | Reverse Proxy Path | Exposed | Reason |
|---|---|---|---|
| / | - | No | When exposing all paths, admin paths are exposed unnecessarily. |
| /admin/ | - | No | Exposed admin paths lead to an unnecessary attack vector. |
| /realms/ | /realms/ | Yes | This path is needed to work correctly, for example, for OIDC endpoints. |
| /resources/ | /resources/ | Yes | This path is needed to serve assets correctly. It may be served from a CDN instead of the Red Hat build of Keycloak path. |
| /.well-known/ | /.well-known/ | Yes | This path is needed to resolve Authorization Server Metadata and other information via RFC8414. |
| /metrics | - | No | Exposed metrics lead to an unnecessary attack vector. |
| /health | - | No | Exposed health checks lead to an unnecessary attack vector. |
We assume you run Red Hat build of Keycloak on the root path `/`. If you configured a different `http-relative-path`, adapt the exposed paths accordingly, with the exception of the `/.well-known/` path, which is resolved from the root of the server.
8.6. Trusted Proxies
To ensure that proxy headers are used only from proxies you trust, set the `proxy-trusted-addresses` option to a comma-separated list of IP addresses (IPv4 or IPv6) or Classless Inter-Domain Routing (CIDR) notations.
For example:
bin/kc.[sh|bat] start --proxy-headers forwarded --proxy-trusted-addresses=192.168.0.32,127.0.0.0/8
8.7. PROXY Protocol
The `proxy-protocol-enabled` option controls whether the server should use the HA PROXY protocol when serving requests from behind a proxy. When set to `true`, the remote address returned will be the one from the actual connecting client. It may not be combined with the `proxy-headers` option.
This is useful when running behind a compatible https passthrough proxy because the request headers cannot be manipulated.
For example:
bin/kc.[sh|bat] start --proxy-protocol-enabled true
8.8. Enabling client certificate lookup
When the proxy is configured as a TLS termination proxy, the client certificate information can be forwarded to the server through specific HTTP request headers and then used to authenticate clients. You are able to configure how the server is going to retrieve client certificate information depending on the proxy you are using.
Client certificate lookup via a proxy header for X.509 authentication is considered security-sensitive. If misconfigured, a forged client certificate header can be used for authentication. Extra precautions need to be taken to ensure that the client certificate information can be trusted when passed via a proxy header.
- Double check your use case needs reencrypt or edge TLS termination which implies using a proxy header for client certificate lookup. TLS passthrough is recommended as a more secure option when X.509 authentication is desired as it does not require passing the certificate via a proxy header. Client certificate lookup from a proxy header is applicable only to reencrypt and edge TLS termination.
If passthrough is not an option, implement the following security measures:
- Configure your network so that Red Hat build of Keycloak is isolated and can accept connections only from the proxy.
- Make sure that the proxy overwrites the header that is configured in the `spi-x509cert-lookup--<provider>--ssl-client-cert` option.
- Pay extra attention to the `spi-x509cert-lookup--<provider>--trust-proxy-verification` setting. Make sure you enable it only if you can trust your proxy to verify the client certificate. Setting `spi-x509cert-lookup--<provider>--trust-proxy-verification=true` without the proxy verifying the client certificate chain exposes Red Hat build of Keycloak to a security vulnerability in which a forged client certificate can be used for authentication.
The server supports some of the most common TLS termination proxies, such as:
| Proxy | Provider |
|---|---|
| Apache HTTP Server | apache |
| HAProxy | haproxy |
| NGINX | nginx |
To configure how client certificates are retrieved from requests, you need to:
Enable the corresponding proxy provider
bin/kc.[sh|bat] build --spi-x509cert-lookup--provider=<provider>
Configure the HTTP headers
bin/kc.[sh|bat] start --spi-x509cert-lookup--<provider>--ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup--<provider>--ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup--<provider>--certificate-chain-length=10
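On the proxy side, the configured header names must be populated from the proxy's own TLS variables. A minimal, hypothetical NGINX sketch for the header names used above could look like this; the upstream name is an assumption:

```nginx
location / {
    # Forward the url-encoded client certificate presented to NGINX.
    proxy_set_header SSL_CLIENT_CERT $ssl_client_escaped_cert;
    proxy_pass https://keycloak;
}
```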
When configuring the HTTP headers, make sure the values you use correspond to the names of the headers forwarded by the proxy with the client certificate information.
The available options for configuring a provider are:
| Option | Description |
|---|---|
| ssl-client-cert | The name of the header holding the client certificate |
| ssl-cert-chain-prefix | The prefix of the headers holding additional certificates in the chain, used to retrieve the individual certificates according to the length of the chain. For instance, with a prefix of CERT_CHAIN, the individual certificates are retrieved from the headers CERT_CHAIN_0, CERT_CHAIN_1, and so on, up to the configured chain length. |
| certificate-chain-length | The maximum length of the certificate chain. |
| trust-proxy-verification | Enable trusting NGINX proxy certificate verification, instead of forwarding the certificate to Red Hat build of Keycloak and verifying it in Red Hat build of Keycloak. |
| cert-is-url-encoded | Whether the forwarded certificate is URL-encoded or not. In NGINX, this corresponds to the $ssl_client_escaped_cert variable. |
8.8.1. Configuring the NGINX provider
The NGINX SSL/TLS module does not expose the client certificate chain. Red Hat build of Keycloak’s NGINX certificate lookup provider rebuilds it by using the Red Hat build of Keycloak truststore.
If you are using this provider, see Configuring trusted certificates for how to configure a Red Hat build of Keycloak Truststore.
8.9. Relevant options
The options relevant to this chapter include `proxy-headers`, `proxy-protocol-enabled`, and `proxy-trusted-addresses`, described in the sections above, as well as the hostname options, some of which are available only when the hostname:v2 feature is enabled.
Chapter 9. Configuring the database
Configure a relational database for Red Hat build of Keycloak to store user, client, and realm data.
This chapter explains how to configure the Red Hat build of Keycloak server to store data in a relational database.
9.1. Supported databases
The server has built-in support for different databases. You can query the available databases by viewing the expected values for the `db` configuration option. The following table lists the supported databases and their tested versions.
| Database | Option value | Tested Version | Supported Versions |
|---|---|---|---|
| MariaDB Server | mariadb | 11.8 | 11.8 (LTS), 11.4 (LTS), 10.11 (LTS), 10.6 (LTS) |
| Microsoft SQL Server | mssql | 2022 | 2022, 2019 |
| MySQL | mysql | 8.4 | 8.4 (LTS), 8.0 (LTS) |
| Oracle Database | oracle | 23.5 | 23.x (i.e. 23.5+), 19c (19.3+) (Note: Oracle RAC is also supported if using the same database engine version, e.g. 23.5+, 19.3+) |
| PostgreSQL | postgres | 17 | 17.x, 16.x, 15.x, 14.x |
| EnterpriseDB Advanced | postgres | 17 | 17 |
| Amazon Aurora PostgreSQL | postgres | 17.5 | 17.x, 16.x, 15.x |
| Azure SQL Database | mssql | latest | latest |
| Azure SQL Managed Instance | mssql | latest | latest |
Using a database version that differs from those shown is not a supported configuration, even if the underlying database-specific Hibernate dialect allows it.
By default, the server uses the `dev-file` database. This database exists only for development use-cases. The `dev-file` database is not suitable for production and must be replaced before deploying to production.
9.2. Installing a database driver
Database drivers are shipped as part of Red Hat build of Keycloak, except for the Oracle Database and Microsoft SQL Server drivers.
Install the missing driver manually if you want to connect to one of these databases, or skip this section if you want to connect to a different database for which the database driver is already included.
Overriding the built-in database drivers or supplying your own drivers is considered unsupported. The only supported exceptions are explicitly documented in this guide, such as the Oracle Database driver.
9.2.1. Installing the Oracle Database driver
To install the Oracle Database driver for Red Hat build of Keycloak:
Download the `ojdbc17` and `orai18n` JAR files from one of the following sources:
- Zipped JDBC driver and Companion Jars version 23.6.0.24.10 from the Oracle driver download page.
- Maven Central via `ojdbc17` and `orai18n`.
- Installation media recommended by the database vendor for the specific database in use.
When running the unzipped distribution: Place the `ojdbc17` and `orai18n` JAR files in Red Hat build of Keycloak’s `providers` folder.
When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the `providers` folder. When building a custom image for the Operator, those images need to be optimized images with all build-time options of Red Hat build of Keycloak set.
A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Oracle Database JDBC drivers downloaded from Maven Central looks like the following:
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc17/23.6.0.24.10/ojdbc17-23.6.0.24.10.jar /opt/keycloak/providers/ojdbc17.jar
ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.6.0.24.10/orai18n-23.6.0.24.10.jar /opt/keycloak/providers/orai18n.jar
# Setting the build parameter for the database:
ENV KC_DB=oracle
# Add all other build parameters needed, for example enable health and metrics:
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step:
RUN /opt/keycloak/bin/kc.sh build

See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images.
Then continue configuring the database as described in the next section.
9.2.2. Installing the Microsoft SQL Server driver
To install the Microsoft SQL Server driver for Red Hat build of Keycloak:
Download the `mssql-jdbc` JAR file from one of the following sources:
- Download a version from the Microsoft JDBC Driver for SQL Server page.
- Maven Central via `mssql-jdbc`.
- Installation media recommended by the database vendor for the specific database in use.
When running the unzipped distribution: Place the `mssql-jdbc` JAR file in Red Hat build of Keycloak’s `providers` folder.
When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the `providers` folder. When building a custom image for the Red Hat build of Keycloak Operator, those images need to be optimized images with all build-time options of Red Hat build of Keycloak set.
A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Microsoft SQL Server JDBC drivers downloaded from Maven Central looks like the following:
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/13.2.1.jre11/mssql-jdbc-13.2.1.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar
# Setting the build parameter for the database:
ENV KC_DB=mssql
# Add all other build parameters needed, for example enable health and metrics:
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step:
RUN /opt/keycloak/bin/kc.sh build

See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images.
Then continue configuring the database as described in the next section.
9.3. Configuring a database
For each supported database, the server provides some opinionated defaults to simplify database configuration. You complete the configuration by providing some key settings such as the database host and credentials.
The configuration can be set during a `build` command or during the `start` command, using one of the following approaches.
Using a `build` command followed by an optimized `start` command (recommended)
First, the minimum settings needed to connect to the database can be specified in `conf/keycloak.conf`:
# The database vendor.
db=postgres
# The username of the database user.
db-username=keycloak
# The password of the database user.
db-password=change_me
# Sets the hostname of the default JDBC URL of the chosen vendor
db-url-host=keycloak-postgres
Then, the following commands create a new and optimized server image based on the configuration options and start the server.
bin/kc.[sh|bat] build
bin/kc.[sh|bat] start --optimized
Using only a `start` command (without `--optimized`)
bin/kc.[sh|bat] start --db postgres --db-url-host keycloak-postgres --db-username keycloak --db-password change_me
The examples above include the minimum settings needed to connect to the database, but passing the database password on the command line exposes it and is not recommended. Use the `conf/keycloak.conf` file or a Java keystore for sensitive options instead.
The default schema is `keycloak`. Use the `db-schema` option to change it.
It is also possible to configure the database when Importing and exporting realms or Bootstrapping and recovering an admin account:
bin/kc.[sh|bat] import --help
bin/kc.[sh|bat] export --help
bin/kc.[sh|bat] bootstrap-admin --help
For more information, see Configuring Red Hat build of Keycloak.
9.4. Overriding default connection settings
The server uses JDBC as the underlying technology to communicate with the database. If the default connection settings are insufficient, you can specify a JDBC URL using the `db-url` configuration option.
The following is a sample command for a PostgreSQL database.
bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase
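Additional JDBC driver properties can be appended to the URL in the same way. For instance, a hypothetical `conf/keycloak.conf` fragment enabling SSL for PostgreSQL might look like the following; the host, database, and property values are illustrative assumptions:

```properties
db=postgres
# JDBC URL with driver properties appended as a query string (values are examples).
db-url=jdbc:postgresql://mypostgres/mydatabase?ssl=true&sslmode=verify-full
```

Setting the URL in the configuration file also avoids the shell-escaping issues described below.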
Be aware that you need to escape special shell characters, such as `;`, when invoking commands containing them.
9.5. Configuring Unicode support for the database
Unicode support for all fields depends on whether the database allows VARCHAR and CHAR fields to use the Unicode character set.
- If these fields can be set, Unicode is likely to work, usually at the expense of field length.
- If the database only supports Unicode in the NVARCHAR and NCHAR fields, Unicode support for all text fields is unlikely to work because the server schema uses VARCHAR and CHAR fields extensively.
The database schema provides support for Unicode strings only for the following special fields:
- Realms: display name, HTML display name, localization texts (keys and values)
- Federation Providers: display name
- Users: username, given name, last name, attribute names and values
- Groups: name, attribute names and values
- Roles: name
- Descriptions of objects
Otherwise, characters are limited to those contained in database encoding, which is often 8-bit. However, for some database systems, you can enable UTF-8 encoding of Unicode characters and use the full Unicode character set in all text fields. For a given database, this choice might result in a shorter maximum string length than the maximum string length supported by 8-bit encodings.
9.5.1. Configuring Unicode support for an Oracle database
Unicode characters are supported in an Oracle database if the database was created with Unicode support in the VARCHAR and CHAR fields. For example, you configured AL32UTF8 as the database character set. In this case, the JDBC driver requires no special settings.
If the database was not created with Unicode support, you need to configure the JDBC driver to support Unicode characters in the special fields. You configure two properties. Note that you can configure these properties as system properties or as connection properties.
- Set `oracle.jdbc.defaultNChar` to `true`.
- Optionally, set `oracle.jdbc.convertNcharLiterals` to `true`.
Note: For details on these properties and any performance implications, see the Oracle JDBC driver configuration documentation.
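If you choose to set them as system properties, one way is to pass them to the JVM through the `JAVA_OPTS_APPEND` environment variable before starting the server. This is a sketch under the assumption of a Linux shell and an already-built optimized server:

```shell
export JAVA_OPTS_APPEND="-Doracle.jdbc.defaultNChar=true -Doracle.jdbc.convertNcharLiterals=true"
bin/kc.sh start --optimized
```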
9.5.2. Unicode support for a Microsoft SQL Server database
Unicode characters are supported only for the special fields for a Microsoft SQL Server database. The database requires no special settings.
The `sendStringParametersAsUnicode` JDBC connection property should be set to `false` to significantly improve performance. Without this setting, Microsoft SQL Server might be unable to use indexes.
9.5.3. Configuring Unicode support for a MySQL database
Unicode characters are supported in a MySQL database if the database was created with Unicode support in the VARCHAR and CHAR fields when using the CREATE DATABASE command.
Note that the utf8mb4 character set is not supported due to different storage requirements for the utf8 character set. See MySQL documentation for details. In that situation, the length restriction on non-special fields does not apply because columns are created to accommodate the number of characters, not bytes. If the database default character set does not allow Unicode storage, only the special fields allow storing Unicode values.
- Start MySQL Server.
- Under JDBC driver settings, locate the JDBC connection settings.
- Add this connection property: `characterEncoding=UTF-8`
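With Red Hat build of Keycloak, JDBC driver properties are typically appended to the JDBC URL. A hypothetical `conf/keycloak.conf` sketch could look like the following; the host name is an assumption, and the leading `?` separator should be verified against your driver's URL syntax:

```properties
db=mysql
db-url-host=mysql.example.com
# Append the connection property to the generated JDBC URL.
db-url-properties=?characterEncoding=UTF-8
```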
9.5.4. Configuring Unicode support for a PostgreSQL database
Unicode is supported for a PostgreSQL database when the database character set is UTF8. Unicode characters can be used in any field with no reduction of field length for non-special fields. The JDBC driver requires no special settings. The character set is determined when the PostgreSQL database is created.
Check the default character set for a PostgreSQL cluster by entering the following SQL command.
show server_encoding;
If the default character set is not UTF8, create the database with UTF8 as the default character set using a command such as:
create database keycloak with encoding 'UTF8';
9.6. Preparing for PostgreSQL
9.6.1. Writer and reader instances
When running PostgreSQL reader and writer instances, Red Hat build of Keycloak always needs to connect to the writer instance to do its work. Therefore, when using the original PostgreSQL driver, Red Hat build of Keycloak sets the `targetServerType` connection property to `primary` to ensure it always connects to a writer instance.
You can override this behavior by setting your own value for the `targetServerType` property.
9.6.2. Permissions of the database user
Ensure that the database user has `SELECT` permissions on the `pg_class` and `pg_namespace` system tables.
These tables are queried during upgrades of Red Hat build of Keycloak to determine an estimated number of rows in a table. If Red Hat build of Keycloak does not have permissions to access these tables, it logs a warning and proceeds with a less efficient `SELECT COUNT(*) ...` statement instead.
9.7. Preparing for Amazon Aurora PostgreSQL
When using Amazon Aurora PostgreSQL, the Amazon Web Services JDBC Driver offers additional features like transfer of database connections when a writer instance changes in a Multi-AZ setup. This driver is not part of the distribution and needs to be installed before it can be used.
To install this driver, apply the following steps:
- When running the unzipped distribution: Download the JAR file from the Amazon Web Services JDBC Driver releases page and place it in Red Hat build of Keycloak’s `providers` folder.
- When running containers: Build a custom Red Hat build of Keycloak image and add the JAR in the `providers` folder.
A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator looks like the following:
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.5.6/aws-advanced-jdbc-wrapper-2.5.6.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar

See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images, and the Using custom Red Hat build of Keycloak images chapter on how to run optimized and non-optimized images with the Red Hat build of Keycloak Operator.
Configure Red Hat build of Keycloak to run with the following parameters:
- db-url: Insert `aws-wrapper` into the regular PostgreSQL JDBC URL, resulting in a URL like `jdbc:aws-wrapper:postgresql://...`.
- db-driver: Set to `software.amazon.jdbc.Driver` to use the AWS JDBC wrapper.
When overriding the `wrapperPlugins` connection property, make sure to include the `failover` or `failover2` plugin so that database connections are transferred when the writer instance changes.
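Putting the two settings together, a hypothetical `conf/keycloak.conf` fragment for Aurora could look like the following; the cluster endpoint, database name, and credentials are assumptions:

```properties
db=postgres
# Aurora cluster writer endpoint wrapped with the AWS JDBC wrapper (endpoint is an example).
db-url=jdbc:aws-wrapper:postgresql://mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com:5432/keycloak
db-driver=software.amazon.jdbc.Driver
db-username=keycloak
db-password=change_me
```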
9.8. Preparing for MySQL server
Beginning with MySQL 8.0.30, MySQL supports generated invisible primary keys for any InnoDB table that is created without an explicit primary key (more information here). If this feature is enabled, the database schema initialization and also migrations will fail with the error message `Multiple primary key defined (1068)`. In that case, set the `sql_generate_invisible_primary_key` server variable to `OFF` before installing or upgrading Red Hat build of Keycloak.
9.9. Changing database locking timeout in a cluster configuration
Because cluster nodes can boot concurrently, they take extra time for database actions. For example, a booting server instance may perform some database migration, importing, or first time initializations. A database lock prevents start actions from conflicting with each other when cluster nodes boot up concurrently.
The maximum timeout for this lock is 900 seconds. If a node waits on this lock for more than the timeout, the boot fails. The need to change the default value is unlikely, but you can change it by entering this command:
bin/kc.[sh|bat] start --spi-dblock--jpa--lock-wait-timeout 900
9.10. Using Database Vendors with XA transaction support
Red Hat build of Keycloak uses non-XA transactions and the appropriate database drivers by default.
If you wish to use the XA transaction support offered by your driver, enter the following command:
bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=true
Red Hat build of Keycloak automatically chooses the appropriate JDBC driver for your vendor.
Certain vendors, such as Azure SQL and MariaDB Galera, do not support or rely on the XA transaction mechanism.
XA recovery defaults to enabled and uses the file system location `KEYCLOAK_HOME/data/transaction-logs` to store transaction logs.
Enabling XA transactions in a containerized environment does not fully support XA recovery unless stable storage is available at that path.
9.11. Setting JPA provider configuration option for migrationStrategy
To set up the JPA migrationStrategy (manual/update/validate), configure the JPA provider as follows:
Setting the migration-strategy for the quarkus provider of the connections-jpa SPI
bin/kc.[sh|bat] start --spi-connections-jpa--quarkus--migration-strategy=manual
If you also want to get a SQL file for database initialization, you have to set the additional SPI option initializeEmpty (true/false):
Setting the initialize-empty for the quarkus provider of the connections-jpa SPI
bin/kc.[sh|bat] start --spi-connections-jpa--quarkus--initialize-empty=false
In the same way, set migrationExport to point to a specific file and location:
Setting the migration-export for the quarkus provider of the connections-jpa SPI
bin/kc.[sh|bat] start --spi-connections-jpa--quarkus--migration-export=<path>/<file.sql>
For more information, check the Migrating the database documentation.
9.12. Configuring the connection pool
9.12.1. MySQL and MariaDB
In order to prevent 'No operations allowed after connection closed' exceptions from being thrown, ensure that Red Hat build of Keycloak’s connection pool has a connection maximum lifetime that is less than the `wait_timeout` value configured on the database server.
Whether you explicitly configure `wait_timeout` on your database server or rely on the server default, set the `db-pool-max-lifetime` option to a value smaller than `wait_timeout`.
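For example, you could first determine the server-side timeout and then size the pool lifetime below it. The SQL statement is standard MySQL/MariaDB; the lifetime value shown is an illustrative assumption for a server `wait_timeout` of 600 seconds:

```sql
-- On the MySQL/MariaDB server: inspect the configured idle timeout.
SHOW VARIABLES LIKE 'wait_timeout';

-- Then start Red Hat build of Keycloak with a smaller pool lifetime
-- (illustrative value; verify the option's unit in the options reference):
-- bin/kc.[sh|bat] start --db-pool-max-lifetime 580
```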
9.13. Configure multiple datasources
Red Hat build of Keycloak allows you to specify additional datasources in case you need to access another database from your extensions. This is useful when the main Red Hat build of Keycloak datasource is not a viable option for storing custom data, such as users.
You can find more details on how to connect to your own users database in the {developerguide_userstoragespi_name} documentation.
Defining multiple datasources works like defining a single datasource, with one important change - you have to specify a name for each datasource as part of the config option name.
9.13.1. Required configuration
In order to enable an additional datasource, you need to set up two things: the JPA persistence unit in a `persistence.xml` file, and the datasource configuration properties.
The additional datasource properties can be specified via the standard config sources such as the CLI, `keycloak.conf`, or environment variables.
The additional datasources can be configured in a similar way as the main datasource. This is achieved by using analogous names for config options, which additionally include the name of the additional datasource. For example, where the main datasource uses the `db-username` option, an additional datasource uses `db-username-<datasource>`.
9.13.1.1. 1. JPA persistence.xml file
The `persistence.xml` file must be available on the classpath of your extension at `META-INF/persistence.xml`.
Be aware that Quarkus provides the ability to set up the JPA persistence unit via Hibernate ORM properties instead of using a `persistence.xml` file, but this guide uses the `persistence.xml` file.
In Red Hat build of Keycloak, most of the configuration is automatic, and you just need to provide fundamental configuration details - the datasource name and transaction type.
Red Hat build of Keycloak requires setting the transaction type for the additional datasource to `JTA` in the `persistence.xml` file, as in the following example:
<persistence xmlns="https://jakarta.ee/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://jakarta.ee/xml/ns/persistence https://jakarta.ee/xml/ns/persistence/persistence_3_0.xsd"
version="3.0">
<persistence-unit name="user-store-pu" transaction-type="JTA">
<class>org.your.extension.UserEntity</class>
<properties>
<property name="jakarta.persistence.jtaDataSource" value="user-store" />
</properties>
</persistence-unit>
</persistence>
To properly set the datasource name, set the `jakarta.persistence.jtaDataSource` property to the name of the datasource. In this example, the persistence unit `user-store-pu` is backed by the datasource `user-store`.
In order to use your own JPA entities, you need to list them via `<class>` elements, such as `org.your.extension.UserEntity` in the example above.
9.13.1.2. 2. Required properties
Once you have set up your `persistence.xml` file, you need to set the `db-kind-<name>` option, where `<name>` is the datasource name used in the `persistence.xml` file.
For example, you can enable the additional datasource `user-store`, backed by a `postgres` database, as follows:
bin/kc.[sh|bat] start --db-kind-user-store=postgres
After specifying the db-kind for the datasource, all database-kind–specific defaults (such as the driver and dialect) are automatically applied, just like for the main datasource.
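By analogy with the main datasource options, a hypothetical `conf/keycloak.conf` fragment for the `user-store` datasource might look like the following. The option names beyond `db-kind-<name>` and `db-username-<name>` follow the naming convention described above and should be verified against the options reference; the host and credentials are assumptions:

```properties
db-kind-user-store=postgres
db-url-host-user-store=users-db.example.com
db-username-user-store=keycloak
db-password-user-store=change_me
```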
9.13.2. Configuration via environment variables
If you do not want to configure the datasource via the CLI or `keycloak.conf`, you can use environment variables.
For the `user-store` datasource, you can set the DB kind and username via environment variables as follows:
export KC_DB_KIND_USER_STORE=postgres
export KC_DB_USERNAME_USER_STORE=my-username
These map to the `db-kind-user-store` and `db-username-user-store` options, respectively. Every `_` in the environment variable name maps to a `-` in the option key, so this simple mapping cannot express datasource names that contain special characters such as `_`, `$`, or `.`.
In order to have such a datasource properly configured via Red Hat build of Keycloak environment variables, you need to explicitly say what the key for the datasource should look like. You can do this with a pair of unique Red Hat build of Keycloak environment variables using the special `KCKEY_` prefix.
For instance, for a datasource with the name user_store$marketing, you can set environment variables as follows:
export KC_USER_STORE_DB_KIND=mariadb
export KCKEY_USER_STORE_DB_KIND=db-kind-user_store$marketing
You can find more information in the guide Configuring Red Hat build of Keycloak, in subsection Formats for environment variable keys with special characters.
9.13.3. Backward compatibility for the quarkus.properties
In the past, we instructed users to use raw Quarkus properties to configure additional datasources in some places. However, using Quarkus properties in the `conf/quarkus.properties` file is deprecated. Until you are able to migrate to the dedicated options, you can still specify the datasource settings via the Quarkus properties as follows:
quarkus.datasource.user-store.db-kind=h2
quarkus.datasource.user-store.username=sa
quarkus.datasource.user-store.jdbc.url=jdbc:h2:mem:user-store;DB_CLOSE_DELAY=-1
quarkus.datasource.user-store.jdbc.transactions=xa
Use Quarkus properties without quotation marks for the datasource name, as properties with a quoted datasource name clash with the new datasource options mapping. Therefore, use `quarkus.datasource.user-store.db-kind=h2` instead of `quarkus.datasource."user-store".db-kind=h2`.
9.14. Relevant options
The options relevant to this chapter include `db`, `db-url`, `db-url-host`, `db-username`, `db-password`, `db-schema`, `db-driver`, `db-pool-max-lifetime`, and `transaction-xa-enabled`, described in the sections above.
9.14.1. Additional datasources options
Each of the database options above has an additional-datasource counterpart whose key includes the datasource name, for example `db-kind-<datasource>` and `db-username-<datasource>`, as described in the sections above.
Chapter 10. Configuring distributed caches
Configure the caching layer to cluster multiple Red Hat build of Keycloak instances and to increase performance.
Red Hat build of Keycloak is designed for high availability and multi-node clustered setups. The current distributed cache implementation is built on top of Infinispan, a high-performance, distributable in-memory data grid.
10.1. Enable distributed caching
When you start Red Hat build of Keycloak in production mode, by using the `start` command, caching is enabled by default.
By default, caches use the `jdbc-ping` stack, which discovers the other cluster nodes through the database.
To explicitly enable distributed Infinispan caching, enter this command:
bin/kc.[sh|bat] start --cache=ispn
When you start Red Hat build of Keycloak in development mode, by using the `start-dev` command, only local caches are used, as if `--cache=local` had been set. The `local` cache mode is intended only for development and testing purposes.
10.2. Configuring caches
Red Hat build of Keycloak provides a regular {infinispan_configuring_docs}[Infinispan configuration file] located at `conf/cache-ispn.xml`.
The following table gives an overview of the specific caches Red Hat build of Keycloak uses:
| Cache name | Cache Type | Description |
|---|---|---|
| realms | Local | Cache persisted realm data |
| users | Local | Cache persisted user data |
| authorization | Local | Cache persisted authorization data |
| keys | Local | Cache external public keys |
| crl | Local | Cache for X.509 authenticator CRLs |
| work | Replicated | Propagate invalidation messages across nodes |
| authenticationSessions | Distributed | Caches authentication sessions, created/destroyed/expired during the authentication process |
| sessions | Distributed | Cache persisted user session data |
| clientSessions | Distributed | Cache persisted client session data |
| offlineSessions | Distributed | Cache persisted offline user session data |
| offlineClientSessions | Distributed | Cache persisted offline client session data |
| loginFailures | Distributed | Keeps track of failed logins for fraud detection |
| actionTokens | Distributed | Caches action Tokens |
10.2.1. Cache types and defaults
Local caches
Red Hat build of Keycloak caches persistent data locally to avoid unnecessary round-trips to the database.
The following data is kept local to each node in the cluster using local caches:
- realms and related data like clients, roles, and groups.
- users and related data like granted roles and group memberships.
- authorization and related data like resources, permissions, and policies.
- keys
Local caches for realms, users, and authorization are configured to hold up to 10,000 entries by default. The local key cache can hold up to 1,000 entries by default, and its entries expire every hour by default, so that keys are periodically downloaded again from external clients or identity providers.
In order to achieve an optimal runtime and avoid additional round-trips to the database, you should review the configuration for each cache to make sure the maximum number of entries is aligned with the size of your database. The more entries you can cache, the less often the server needs to fetch data from the database. You should evaluate the trade-offs between memory utilization and performance.
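For illustration, increasing the size of a local cache is done in `conf/cache-ispn.xml`. This sketch assumes the Infinispan configuration already present in that file, and the bound of 50000 entries is an illustrative value, not a recommendation:

```xml
<!-- Inside conf/cache-ispn.xml: raise the in-memory bound for the users cache. -->
<local-cache name="users">
    <memory max-count="50000"/>
</local-cache>
```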
Invalidation of local caches
Local caching improves performance, but adds a challenge in multi-node setups.
When one Red Hat build of Keycloak node updates data in the shared database, all other nodes need to be aware of it, so they invalidate that data from their caches.
The `work` cache is replicated across all nodes and is used to propagate these invalidation messages.
Authentication sessions
Authentication sessions are created whenever a user tries to authenticate. They are automatically destroyed once the authentication process completes or due to reaching their expiration time.
Authentication sessions are stored in the `authenticationSessions` distributed cache.
By relying on a distributable cache, authentication sessions are available to any node in the cluster so that users can be redirected to any node without losing their authentication state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization.
User sessions
Once the user is authenticated, a user session is created. The user session tracks your active users and their state so that they can seamlessly authenticate to any application without being asked for their credentials again. For each application, the user authenticates with a client session, so that the server can track the applications the user is authenticated with and their state on a per-application basis.
User and client sessions are automatically destroyed whenever the user performs a logout, the client performs a token revocation, or due to reaching their expiration time.
The session data are stored in the database by default and loaded on-demand to the following caches:
-
sessions -
clientSessions
By relying on a distributable cache, cached user and client sessions are available to any node in the cluster so that users can be redirected to any node without the need to load session data from the database. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization.
These in-memory caches for user sessions and client sessions are limited, by default, to 10000 entries per node, which reduces the overall memory usage of Red Hat build of Keycloak for larger installations. The internal caches will run with only a single owner for each cache entry.
Offline user sessions
As an OpenID Connect Provider, the server is capable of authenticating users and issuing offline tokens. When issuing an offline token after successful authentication, the server creates an offline user session and offline client session.
The following caches are used to store offline sessions:
- offlineSessions
- offlineClientSessions
Like the user and client session caches, the offline user and client session caches are limited to 10000 entries per node by default. Items evicted from memory are loaded on-demand from the database when needed.
Password brute force detection
The `loginFailures` distributed cache is used to track data about failed login attempts.
Action tokens
Action tokens are used for scenarios when a user needs to confirm an action asynchronously, for example in the emails sent by the forgot password flow. The `actionTokens` distributed cache is used to track the usage of action tokens.
You can see the applied Infinispan configuration in the logs by configuring `--log-level=info,org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory:debug`.
10.2.2. Volatile user sessions
By default, regular user sessions are stored in the database and loaded on-demand to the cache. It is possible to configure Red Hat build of Keycloak to store regular user sessions in the cache only and minimize calls to the database.
Since all the sessions in this setup are stored in-memory, there are two side effects related to this:
- Losing sessions when all Red Hat build of Keycloak nodes restart.
- Increased memory consumption.
When using volatile user sessions, the cache is the source of truth for user and client sessions. Red Hat build of Keycloak automatically adjusts the number of entries that can be stored in memory, and increases the number of copies to prevent data loss.
Follow these steps to enable this setup:
Disable the `persistent-user-sessions` feature using the following command:

bin/kc.sh start --features-disabled=persistent-user-sessions ...

Disabling `persistent-user-sessions` is not possible when the `multi-site` feature is enabled.
10.2.3. Configuring cache maximum size
In order to reduce memory usage, it's possible to place an upper bound on the number of entries which are stored in a given cache. To specify an upper bound on a cache, provide the following command line argument:

--cache-embedded-${CACHE_NAME}-max-count=<value>

Replace `${CACHE_NAME}` with the name of the cache to which the upper bound should apply. For example, to apply an upper bound of `1000` to the `offlineSessions` cache, configure:

--cache-embedded-offline-sessions-max-count=1000

An upper bound cannot be defined on the `actionToken`, `authenticationSessions`, `loginFailures`, and `work` caches.
Setting a maximum cache size for the `sessions`, `clientSessions`, `offlineSessions`, and `offlineClientSessions` caches is effective only when the `persistent-user-sessions` feature is enabled, so that entries evicted from memory can be reloaded from the database when needed.
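The mapping from a camel-case cache name (as listed in this guide) to the dashed CLI option can be sketched as follows. This is an illustrative helper, not part of Red Hat build of Keycloak:

```python
import re

def max_count_option(cache_name: str, max_count: int) -> str:
    """Build the --cache-embedded-...-max-count argument for a cache.

    Camel-case cache names such as offlineSessions map to the dashed
    form used on the command line (offline-sessions).
    """
    # Insert a dash before each uppercase letter, then lowercase.
    dashed = re.sub(r"(?<!^)(?=[A-Z])", "-", cache_name).lower()
    return f"--cache-embedded-{dashed}-max-count={max_count}"

print(max_count_option("offlineSessions", 1000))
# --cache-embedded-offline-sessions-max-count=1000
```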
10.2.4. Specify your own cache configuration file
To specify your own cache configuration file, enter this command:
bin/kc.[sh|bat] start --cache-config-file=my-cache-file.xml
The configuration file path is relative to the `conf/` directory.
10.2.5. Modifying cache configuration defaults
Red Hat build of Keycloak automatically creates all required caches with the expected configurations. You can add additional caches or override the default cache configurations in `conf/cache-ispn.xml`, or in the file specified by the `--cache-config-file` option.
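For illustration, an additional user-defined cache could be declared in the configuration file like this. The cache name and size below are hypothetical, and the fragment assumes it sits inside the `cache-container` element that the default `conf/cache-ispn.xml` already declares:

```xml
<!-- Hypothetical additional cache; the name "myCustomCache" and the
     max-count value are illustrative only. -->
<local-cache name="myCustomCache">
    <memory max-count="1000"/>
</local-cache>
```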
To see the applied Infinispan configuration in the logs, configure `--log-level=info,org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory:debug`.
While overriding the default cache configurations via XML is technically possible, it is not supported. This is only recommended for advanced use-cases where the default cache configurations are proven to be problematic. The only supported way to change the default cache configurations is via the `cache-...` options.
In order to prevent a warning being logged when a modified default cache configuration is detected, add the following option:
bin/kc.[sh|bat] start --cache-config-mutate=true
10.2.6. CLI options for remote server
To configure the Red Hat build of Keycloak server for high availability and a multi-node clustered setup, the following CLI options were introduced:

- `cache-remote-host`
- `cache-remote-port`
- `cache-remote-username`
- `cache-remote-password`
10.2.6.1. Connecting to an insecure Infinispan server
Disabling security is not recommended in production!
In a development or test environment, it is easier to start an unsecured Infinispan server. For these use cases, the CLI option `cache-remote-tls-enabled` disables the encryption (TLS) between Red Hat build of Keycloak and the Infinispan server. The CLI options `cache-remote-username` and `cache-remote-password` can be omitted if the Infinispan server does not require authentication.
10.3. Topology aware data distribution
Configuring Red Hat build of Keycloak to be aware of your network topology increases data availability in the presence of hardware failures, as Infinispan is able to ensure that data is distributed correctly. For example, if `num_owners=2`, Infinispan attempts to store the two copies of each cache entry in different sites, racks, or machines.
By default, user and client sessions are safely stored in the database, and they are not affected by these settings. The remaining distributed caches are affected by this configuration.
The following topology information is available to configure:
- Site name
If your Red Hat build of Keycloak cluster is deployed between different datacenters, use this option to ensure the data replicas are stored in a different datacenter. It prevents data loss if a datacenter goes offline or fails.
Use the SPI option `spi-cache-embedded--default--site-name` (or environment variable `KC_SPI_CACHE_EMBEDDED__DEFAULT__SITE_NAME`). The value itself is not important, but each datacenter must have a unique value. For example:

--spi-cache-embedded--default--site-name=site-1

- Rack name
If your Red Hat build of Keycloak cluster is running in different racks on your datacenter, set this option to ensure the data replicas are stored in a different physical rack. It prevents data loss if a rack is suddenly disconnected or fails.
Use the SPI option `spi-cache-embedded--default--rack-name` (or environment variable `KC_SPI_CACHE_EMBEDDED__DEFAULT__RACK_NAME`). The value itself is not important, but each rack must have a unique value. For example:

--spi-cache-embedded--default--rack-name=rack-1

- Machine name
If you have multiple Red Hat build of Keycloak instances running on the same physical machine (using virtual machines or containers for example), use this option to ensure the data replicas are stored in different physical machines. It prevents data loss against a physical machine failure.
Use the SPI option `spi-cache-embedded--default--machine-name` (or environment variable `KC_SPI_CACHE_EMBEDDED__DEFAULT__MACHINE_NAME`). The value itself is not important, but each machine must have a unique value. For example:

--spi-cache-embedded--default--machine-name=machine-1

Note: The Red Hat build of Keycloak Operator automatically configures the machine name based on the Kubernetes node. This ensures that if multiple pods are scheduled on the same node, data replicas are still replicated across distinct nodes when possible. We recommend setting up anti-affinity rules and/or topology spread constraints to prevent multiple Pods from being scheduled on the same node, further reducing the risk of a single node failure causing data loss.
10.4. Transport stacks
Transport stacks ensure that Red Hat build of Keycloak nodes in a cluster communicate in a reliable fashion. Red Hat build of Keycloak supports a wide range of transport stacks:
- `jdbc-ping`
- `kubernetes` (deprecated)
- `jdbc-ping-udp` (deprecated)
- `tcp` (deprecated)
- `udp` (deprecated)
- `ec2` (deprecated)
- `azure` (deprecated)
- `google` (deprecated)
To apply a specific cache stack, enter this command:
bin/kc.[sh|bat] start --cache-stack=<stack>
The default stack is set to `jdbc-ping`.
10.4.1. Available transport stacks
The following table shows transport stacks that are available without any further configuration than using the `--cache-stack` runtime option:

| Stack name | Transport protocol | Discovery |
|---|---|---|
| `jdbc-ping` | TCP | Database registry using the JGroups `JDBC_PING2` protocol |
| `jdbc-ping-udp` (deprecated) | UDP | Database registry using the JGroups `JDBC_PING2` protocol |
The following table shows transport stacks that are available using the `--cache-stack` runtime option and a minimum of configuration:

| Stack name | Transport protocol | Discovery |
|---|---|---|
| `kubernetes` (deprecated) | TCP | DNS resolution using the JGroups `DNS_PING` protocol |
| `tcp` (deprecated) | TCP | IP multicast using the JGroups `MPING` protocol |
| `udp` (deprecated) | UDP | IP multicast using the JGroups `PING` protocol |
When using the `tcp`, `udp`, or `jdbc-ping-udp` stack, the nodes use the IP multicast address `239.6.7.8` (system property `jgroups.mcast_addr`) and port `46655` (system property `jgroups.mcast_port`) by default. Use `-D<property>=<value>` in the `JAVA_OPTS_APPEND` environment variable to change these defaults.
Additional Stacks
It is recommended to use one of the stacks listed above. Additional stacks are provided by Infinispan, but configuring them is outside the scope of this guide. Refer to Setting up Infinispan cluster transport and Customizing JGroups stacks for further documentation.
10.5. Securing transport stacks
Encryption using TLS is enabled by default for TCP-based transport stacks, which is also the default configuration. No additional CLI options or modifications of the cache XML are required as long as you are using a TCP-based transport stack.
If you are using a transport stack based on `UDP` or `TCP_NIO2`, TLS is not available for the transport. To secure the communication in that case:

- Set the option `cache-embedded-mtls-enabled` to `false`.
- Follow the documentation in JGroups Encryption documentation and Encrypting cluster transport.
With TLS enabled, Red Hat build of Keycloak auto-generates a self-signed RSA 2048 bit certificate to secure the connection and uses TLS 1.3 to secure the communication. The keys and the certificate are stored in the database so they are available to all nodes. By default, the certificate is valid for 60 days and is rotated at runtime every 30 days. Use the option `cache-embedded-mtls-rotation-interval-days` to change the rotation interval.
10.5.1. Running inside a service mesh
When using a service mesh like Istio, you might need to allow a direct mTLS communication between the Red Hat build of Keycloak Pods to allow for the mutual authentication to work. Otherwise, you might see error messages like `JGRP000006: failed accepting connection from peer SSLSocket` in the logs.
You then have the option to allow direct mTLS communication between the Red Hat build of Keycloak Pods, or rely on the service mesh transport security to encrypt the communication and to authenticate peers.
To allow direct mTLS communication for Red Hat build of Keycloak when using Istio:
Apply the following configuration to allow direct communication.

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: infinispan-allow-nomtls
spec:
  selector:
    matchLabels:
      app: keycloak
  portLevelMtls:
    "7800":
      mode: PERMISSIVE
As an alternative, to disable the mTLS communication, and rely on the service mesh to encrypt the traffic:
- Set the option `cache-embedded-mtls-enabled` to `false`.
cache-embedded-mtls-enabled.false - Configure your service mesh to authorize only traffic from other Red Hat build of Keycloak Pods for the data transmission port (default: 7800).
10.5.2. Providing your own keys and certificates
Although not recommended for standard setups, if it is essential in a specific setup, you can configure the keystore with the certificate for the transport stack manually, using the following options:

- `cache-embedded-mtls-key-store-file`
- `cache-embedded-mtls-key-store-password`
- `cache-embedded-mtls-trust-store-file`
- `cache-embedded-mtls-trust-store-password`
10.6. Network Ports
To ensure a healthy Red Hat build of Keycloak clustering, some network ports need to be open. The table below shows the TCP ports that need to be open for the `jdbc-ping` stack:
| Port | Option | Property | Description |
|---|---|---|---|
| `7800` | `cache-embedded-network-port` | `jgroups.bind.port` | Unicast data transmission. |
| `57800` | - | `jgroups.fd.port-offset` | Failure detection by protocol `FD_SOCK2`. |
If an option is not available for the port you require, configure it using a system property (`-D<property>=<value>`) in the `JAVA_OPTS_APPEND` environment variable.
10.7. Network bind address
To ensure a healthy Red Hat build of Keycloak clustering, the network port must be bound on an interface that is accessible from all other nodes of the cluster.
By default, it picks a site local (non-routable) IP address, for example, from the 192.168.0.0/16 or 10.0.0.0/8 address range.
To override the address, set the option `cache-embedded-network-bind-address=<IP>`.
The following special values are also recognized:
| Value | Description |
|---|---|
| `GLOBAL` | Picks a global IP address if available. If not available, it falls back to `SITE_LOCAL`. |
| `SITE_LOCAL` | Picks a site-local (non-routable) IP address (for example, from the 192.168.0.0 or 10.0.0.0 address ranges). This is the default value. |
| `LINK_LOCAL` | Picks a link-local IP address from 169.254.1.0 through 169.254.254.255. |
| `NON_LOOPBACK` | Picks any non-loopback address. |
| `LOOPBACK` | Picks a loopback address (for example, 127.0.0.1). |
| `match-interface:<regexp>` | Picks an address that matches a pattern against the interface name. For example, `match-interface:eth.*`. |
| `match-address:<regexp>` | Picks an address that matches a pattern against the host address. For example, `match-address:192\.168\..*`. |
| `match-host:<regexp>` | Picks an address that matches a pattern against the host name. For example, `match-host:server.*`. |
To set up for IPv6 only and have Red Hat build of Keycloak pick the bind address automatically, use the following settings:
export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true"
For more details about JGroups transport, check the JGroups documentation page or the Infinispan documentation page.
10.8. Running instances on different networks
If you run Red Hat build of Keycloak instances on different networks, for example behind firewalls or in containers, the different instances will not be able to reach each other by their local IP address. In such a case, set up a port forwarding rule (sometimes called “virtual server”) to their local IP address.
When using port forwarding, use the following options so each node correctly advertises its external address to the other nodes:
| Option | Description |
|---|---|
| `cache-embedded-network-external-port` | Port that other instances in the Red Hat build of Keycloak cluster should use to contact this node. |
| `cache-embedded-network-external-address` | IP address that other instances in the Red Hat build of Keycloak cluster should use to contact this node. |
10.9. Verify cluster and network health
This section provides methods to verify that your Red Hat build of Keycloak cluster has formed correctly and that network communication between instances is functioning as expected. It is crucial to perform these checks after deployment to ensure high availability and data consistency.
To verify if the cluster is formed properly, check one of these locations:
Admin UI
Access the Red Hat build of Keycloak Web UI, typically available at `https://<your-host>/admin/master/console/#/master/providers`. Under the Provider Info section, locate the `connectionsInfinispan` entry. Click Show more to expand its details. You should find information about the cluster status and the health of individual caches.
Logs
Infinispan logs a cluster view every time a new instance joins or leaves the cluster. Search for log entries with the ID `ISPN000094`. A healthy cluster view will show all expected nodes. For example:

ISPN000094: Received new cluster view for channel ISPN: [node1-26186|1] (2) [node1-26186, node2-37007]

This log entry indicates that the cluster named "ISPN" currently has 2 nodes: `node1-26186` and `node2-37007`. The `(2)` confirms the total number of nodes in the cluster.

Metrics
Red Hat build of Keycloak exposes Infinispan metrics via a Prometheus endpoint, which can be visualized in tools like Grafana. The metric `vendor_cluster_size` shows the current number of instances in the cluster. You should verify that this metric matches the expected number of running instances configured in your cluster. Refer to Clustering metrics for more information.
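As a quick sanity check outside the server, the node count can be extracted from an `ISPN000094` log line such as the one shown above. This is an illustrative sketch, not a Red Hat build of Keycloak tool:

```python
import re

def cluster_view_size(log_line: str) -> int:
    """Extract the node count, shown in parentheses, from an
    Infinispan ISPN000094 cluster view log entry."""
    match = re.search(r"\((\d+)\)", log_line)
    if match is None:
        raise ValueError("not a cluster view log line")
    return int(match.group(1))

line = ("ISPN000094: Received new cluster view for channel ISPN: "
        "[node1-26186|1] (2) [node1-26186, node2-37007]")
print(cluster_view_size(line))  # 2
```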
10.10. Exposing metrics from caches
Metrics from caches are automatically exposed when the metrics are enabled.
To enable histograms for the cache metrics, set `cache-metrics-histograms-enabled` to `true`:
bin/kc.[sh|bat] start --metrics-enabled=true --cache-metrics-histograms-enabled=true
For more details about how to enable metrics, see Gaining insights with metrics.
10.11. Relevant options
| Value | |
|---|---|
|
|
|
|
| |
|
|
|
|
Available only when metrics are enabled |
|
|
Available only when 'cache' type is set to 'ispn'
Use 'jdbc-ping' instead by leaving it unset Deprecated values: |
|
10.11.1. Embedded Cache
| Value | |
|---|---|
|
| |
|
Available only when embedded Infinispan clusters configured | |
|
| |
|
| |
|
Available only when a TCP based cache-stack is used |
|
|
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
Available only when property 'cache-embedded-mtls-enabled' is enabled | (default) |
|
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
Available only when Infinispan clustered embedded is enabled | |
|
Available only when Infinispan clustered embedded is enabled | |
|
Available only when Infinispan clustered embedded is enabled | |
|
Available only when Infinispan clustered embedded is enabled | |
|
Available only when embedded Infinispan clusters configured | |
|
Available only when embedded Infinispan clusters configured | |
|
| |
|
Available only when embedded Infinispan clusters configured | |
|
|
10.11.2. Remote Cache
| Value | |
|---|---|
|
Available only when remote host is set | |
|
| |
|
Available only when remote host is set | |
|
Available only when remote host is set | (default) |
|
Available only when remote host is set |
|
|
Available only when remote host is set |
Chapter 11. Configuring outgoing HTTP requests
Configure the client used for outgoing HTTP requests.
Red Hat build of Keycloak often needs to make requests to the applications and services that it secures. Red Hat build of Keycloak manages these outgoing connections using an HTTP client. This chapter shows how to configure the client, connection pool, proxy environment settings, timeouts, and more.
11.1. Configuring trusted certificates for TLS connections
See Configuring trusted certificates for how to configure a Red Hat build of Keycloak Truststore so that Red Hat build of Keycloak is able to perform outgoing requests using TLS.
11.2. Client Configuration Command
The HTTP client that Red Hat build of Keycloak uses for outgoing communication is highly configurable. To configure the Red Hat build of Keycloak outgoing HTTP client, enter this command:
bin/kc.[sh|bat] start --spi-connections-http-client--default--<configurationoption>=<value>
The following are the command options:
- establish-connection-timeout-millis
- Maximum time in milliseconds until establishing a connection times out. Default: Not set.
- socket-timeout-millis
- Maximum time of inactivity between two data packets until a socket connection times out, in milliseconds. Default: 5000ms
- connection-pool-size
- Size of the connection pool for outgoing connections. Default: 128.
- max-pooled-per-route
- How many connections can be pooled per host. Default: 64.
- connection-ttl-millis
- Maximum connection time to live in milliseconds. Default: Not set.
- max-connection-idle-time-millis
- Maximum time an idle connection stays in the connection pool, in milliseconds. Idle connections will be removed from the pool by a background cleaner thread. Set this option to -1 to disable this check. Default: 900000.
- disable-cookies
- Enable or disable caching of cookies. Default: true.
- client-keystore
- File path to a Java keystore file. This keystore contains client certificates for mTLS.
- client-keystore-password
- Password for the client keystore. REQUIRED, when `client-keystore` is set.
client-keystoreis set. - client-key-password
- Password for the private key of the client. REQUIRED, when client-keystore is set.
- proxy-mappings
- Specify proxy configurations for outgoing HTTP requests. For more details, see Section 11.3, “Proxy mappings for outgoing HTTP requests”.
- disable-trust-manager
- If an outgoing request requires HTTPS and this configuration option is set to true, you do not have to specify a truststore. This setting should be used only during development and never in production because it will disable verification of SSL certificates. Default: false.
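For example, the pool size and socket timeout described above could be tuned together in one command. The values shown here are illustrative only, not recommendations:

```
bin/kc.[sh|bat] start --spi-connections-http-client--default--connection-pool-size=256 --spi-connections-http-client--default--socket-timeout-millis=10000
```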
11.3. Proxy mappings for outgoing HTTP requests
To configure outgoing requests to use a proxy, you can use the following standard proxy environment variables to configure the proxy mappings:
- `HTTP_PROXY`
- `HTTPS_PROXY`
- `NO_PROXY`
- The `HTTP_PROXY` and `HTTPS_PROXY` variables represent the proxy server that is used for outgoing HTTP requests. Red Hat build of Keycloak does not differentiate between the two variables. If you define both variables, `HTTPS_PROXY` takes precedence regardless of the actual scheme that the proxy server uses.
- The `NO_PROXY` variable defines a comma separated list of hostnames that should not use the proxy. For each hostname that you specify, all of its subdomains are also excluded from using the proxy.
The environment variables can be lowercase or uppercase. Lowercase takes precedence. For example, if you define both `HTTP_PROXY` and `http_proxy`, the value of `http_proxy` is used.
Example of proxy mappings and environment variables
HTTPS_PROXY=https://www-proxy.acme.com:8080
NO_PROXY=google.com,login.facebook.com
In this example, the following results occur:
- All outgoing requests use the proxy `https://www-proxy.acme.com:8080` except for requests to google.com or any subdomain of google.com, such as auth.google.com.
- login.facebook.com and all its subdomains do not use the defined proxy, but groups.facebook.com uses the proxy because it is not a subdomain of login.facebook.com.
11.4. Proxy mappings using regular expressions
An alternative to using environment variables for proxy mappings is to configure a comma-delimited list of proxy-mappings for outgoing requests sent by Red Hat build of Keycloak. A proxy-mapping consists of a regex-based hostname pattern and a proxy-uri, using the format `hostname-pattern;proxy-uri`.
For example, consider the following regex:
.*\.(google|googleapis)\.com
You apply a regex-based hostname pattern by entering this command:
bin/kc.[sh|bat] start --spi-connections-http-client--default--proxy-mappings='.*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080'
Note that each backslash character `\` from the regex is written twice (`\\`) when the pattern is passed on the command line.
To determine the proxy for the outgoing HTTP request, the following occurs:
- The target hostname is matched against all configured hostname patterns.
- The proxy-uri of the first matching pattern is used.
- If no configured pattern matches the hostname, no proxy is used.
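The three resolution steps above can be sketched as follows. This is an illustrative model only, not the server's actual code:

```python
import re

def resolve_proxy(host: str, mappings: str):
    """First-match resolution of comma-delimited 'pattern;proxy-uri'
    mappings. Returns None when no pattern matches the hostname."""
    for mapping in mappings.split(","):
        pattern, proxy_uri = mapping.split(";", 1)
        # The whole hostname must match the regex pattern.
        if re.fullmatch(pattern, host):
            return proxy_uri
    return None

mappings = r".*\.(google|googleapis)\.com;http://www-proxy.acme.com:8080"
print(resolve_proxy("www.googleapis.com", mappings))
# http://www-proxy.acme.com:8080
print(resolve_proxy("example.org", mappings))  # None
```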
When your proxy server requires authentication, include the credentials of the proxy user in the format `username:password@`. For example:

.*\.(google|googleapis)\.com;http://proxyuser:password@www-proxy.acme.com:8080
Example of regular expressions for proxy-mapping:
# All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy
.*\.(google|googleapis)\.com;http://www-proxy.acme.com:8080
# All requests to internal systems use no proxy
.*\.acme\.com;NO_PROXY
# All other requests use http://fallback:8080 as proxy
.*;http://fallback:8080
In this example, the following occurs:
- The special value NO_PROXY for the proxy-uri is used, which means that no proxy is used for hosts matching the associated hostname pattern.
- A catch-all pattern ends the proxy-mappings, providing a default proxy for all outgoing requests.
11.5. Relevant options
| Value | |
|---|---|
|
|
Chapter 12. Configuring trusted certificates
Configure the Red Hat build of Keycloak Truststore to communicate through TLS.
When Red Hat build of Keycloak communicates with external services or has an incoming connection through TLS, it has to validate the remote certificate in order to ensure it is connecting to a trusted server. This is necessary in order to prevent man-in-the-middle attacks.
The certificates of these clients or servers, or the CA that signed these certificates, must be put in a truststore. This truststore is then configured for use by Red Hat build of Keycloak.
12.1. Configuring the System Truststore
The existing Java default truststore certs will always be trusted. If you need additional certificates, which will be the case if you have self-signed or internal certificate authorities that are not recognized by the JRE, they can be included in the `conf/truststores` directory. The certificates may be in PEM files, or in PKCS12 files with the extension `.p12`, `.pfx`, or `.pkcs12`.
If you need an alternative path, use the `--truststore-paths` option.
After all applicable certs are included, the truststore will be used as the system default truststore via the `javax.net.ssl` system properties.
For example:
bin/kc.[sh|bat] start --truststore-paths=/opt/truststore/myTrustStore.pfx,/opt/other-truststore/myOtherTrustStore.pem
It is still possible to directly set your own `javax.net.ssl` truststore system properties, but using the `--truststore-paths` option is recommended instead.
12.2. Hostname Verification Policy
You may refine how hostnames are verified for TLS connections with the `tls-hostname-verifier` option:
- `DEFAULT` (the default) allows wildcards in subdomain names (e.g. *.foo.com) to match names with the same number of levels (e.g. a.foo.com, but not a.b.foo.com) - with rules and exclusions for public suffixes based upon https://publicsuffix.org/list/
- `ANY` means that the hostname is not verified - this mode should not be used in production.
- `WILDCARD` (deprecated) allows wildcards in subdomain names (e.g. *.foo.com) to match anything, including multiple levels (e.g. a.b.foo.com). Use DEFAULT instead.
- `STRICT` (deprecated) allows wildcards in subdomain names (e.g. *.foo.com) to match names with the same number of levels (e.g. a.foo.com, but not a.b.foo.com) - with some limited exclusions. Use DEFAULT instead.

Please note that this setting does not apply to LDAP secure connections, which require strict hostname checking.
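The same-number-of-levels rule used by DEFAULT (and the deprecated STRICT) can be sketched as follows. This illustration omits the public-suffix rules and exclusions:

```python
def wildcard_matches(pattern: str, host: str) -> bool:
    """Sketch of same-level wildcard matching: '*' matches exactly
    one label, so the pattern and host must have the same number of
    dot-separated levels."""
    p_labels = pattern.split(".")
    h_labels = host.split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.foo.com", "a.foo.com"))    # True
print(wildcard_matches("*.foo.com", "a.b.foo.com"))  # False
```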
12.3. Relevant options
| Value | |
|---|---|
|
STRICT and WILDCARD have been deprecated, use DEFAULT instead. Deprecated values: |
|
|
|
Chapter 13. Configuring trusted certificates for mTLS
Configure Mutual TLS to verify clients that are connecting to Red Hat build of Keycloak.
In order to properly validate client certificates and enable certain authentication methods like two-way TLS or mTLS, you can set a trust store with all the certificates (and certificate chain) the server should be trusting. There are a number of capabilities that rely on this trust store to properly authenticate clients using certificates, such as Mutual TLS and X.509 Authentication.
13.1. Enabling mTLS
Authentication using mTLS is disabled by default. To enable mTLS certificate handling when Red Hat build of Keycloak is the server and needs to validate certificates from requests made to Red Hat build of Keycloak endpoints, put the appropriate certificates in a truststore and use the following command to enable mTLS:
bin/kc.[sh|bat] start --https-client-auth=<none|request|required>
Using the value `required`, the server requires clients to present a certificate; with `request`, a certificate is requested but clients are not required to provide one.
The mTLS configuration and the truststore is shared by all Realms. It is not possible to configure different truststores for different Realms.
Management interface properties are inherited from the main HTTP server, including mTLS settings. This means that when mTLS is set, it is also enabled for the management interface. To override this behavior, use the `https-management-client-auth` option.
13.2. Using a dedicated truststore for mTLS
By default, Red Hat build of Keycloak uses the System Truststore to validate certificates. See Configuring trusted certificates for details.
If you need to use a dedicated truststore for mTLS, you can configure the location of this truststore by running the following command:
bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file --https-trust-store-password=<value>
Recognized file extensions for a truststore:
- `.p12`, `.pkcs12`, and `.pfx` for a pkcs12 file
- `.jks` and `.truststore` for a jks file
- `.pem`, `.crt`, and `.ca` for a pem file
If your truststore does not have an extension matching its file type, you will also need to set the `https-trust-store-type` option explicitly.
13.3. Additional resources
13.3.1. Using mTLS for outgoing HTTP requests
Be aware that this is the basic certificate configuration for mTLS use cases where Red Hat build of Keycloak acts as server. When Red Hat build of Keycloak acts as client instead, e.g. when Red Hat build of Keycloak tries to get a token from a token endpoint of a brokered identity provider that is secured by mTLS, you need to set up the HttpClient to provide the right certificates in the keystore for the outgoing request. To configure mTLS in these scenarios, see Configuring outgoing HTTP requests.
13.3.2. Configuring X.509 Authentication
For more information on how to configure X.509 Authentication, see X.509 Client Certificate User Authentication section.
13.4. Relevant options
| Value | |
|---|---|
| 🛠
|
|
|
| |
|
| |
|
| |
| 🛠
|
|
Chapter 14. Enabling and disabling features
Configure Red Hat build of Keycloak to use optional features.
Red Hat build of Keycloak packages some functionality into features, including some disabled features, such as Technology Preview and deprecated features. Other features are enabled by default, but you can disable them if they do not apply to your use of Red Hat build of Keycloak.
14.1. Enabling features
Some supported features, and all preview features, are disabled by default. To enable a feature, enter this command:
bin/kc.[sh|bat] build --features="<name>[,<name>]"
For example, to enable `docker` and `token-exchange`, enter this command:
bin/kc.[sh|bat] build --features="docker,token-exchange"
To enable all preview features, enter this command:
bin/kc.[sh|bat] build --features="preview"
An enabled feature may be versioned, or unversioned. If you use a versioned feature name, e.g. feature:v1, that exact feature version will be enabled as long as it still exists in the runtime. If you instead use an unversioned name, e.g. just feature, the selection of the particular supported feature version may change from release to release according to the following precedence:
- The highest default supported version
- The highest non-default supported version
- The highest deprecated version
- The highest preview version
- The highest experimental version
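The precedence list above can be sketched as follows. This is an illustrative model where each feature version carries one of the five statuses:

```python
def pick_version(versions):
    """Sketch of unversioned-feature selection: pick the highest
    version within the highest-precedence status category present.
    `versions` is a list of (version, status) pairs."""
    precedence = ["default", "supported", "deprecated",
                  "preview", "experimental"]
    for status in precedence:
        candidates = [v for v, s in versions if s == status]
        if candidates:
            return max(candidates)
    return None

print(pick_version([(1, "deprecated"), (2, "supported"), (3, "preview")]))
# 2
```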
14.2. Disabling features
To disable a feature that is enabled by default, enter this command:
bin/kc.[sh|bat] build --features-disabled="<name>[,<name>]"
For example, to disable `impersonation`, enter this command:
bin/kc.[sh|bat] build --features-disabled="impersonation"
It is not allowed to have a feature in both the `features-disabled` and `features` lists.
When a feature is disabled, all versions of that feature are disabled.
14.3. Supported features
The following list contains supported features that are enabled by default, and can be disabled if not needed.
| Feature | Description |
|---|---|
| account-api:v1 | Account Management REST API |
| account:v3 | Account Console version 3 |
| admin-api:v1 | Admin API |
| admin-fine-grained-authz:v2 | Fine-Grained Admin Permissions version 2 |
| admin:v2 | New Admin Console |
| authorization:v1 | Authorization Service |
| ciba:v1 | OpenID Connect Client Initiated Backchannel Authentication (CIBA) |
| client-policies:v1 | Client configuration policies |
| device-flow:v1 | OAuth 2.0 Device Authorization Grant |
| dpop:v1 | OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer |
| hostname:v2 | Hostname Options V2 |
| impersonation:v1 | Ability for admins to impersonate users |
| kerberos:v1 | Kerberos |
| login:v2 | New Login Theme |
| opentelemetry:v1 | OpenTelemetry Tracing |
| organization:v1 | Organization support within realms |
| par:v1 | OAuth 2.0 Pushed Authorization Requests (PAR) |
| passkeys:v1 | Passkeys |
| persistent-user-sessions:v1 | Persistent online user sessions across restarts and upgrades |
| recovery-codes:v1 | Recovery codes |
| rolling-updates:v1 | Rolling Updates |
| step-up-authentication:v1 | Step-up Authentication |
| token-exchange-standard:v2 | Standard Token Exchange version 2 |
| update-email:v1 | Update Email Action |
| user-event-metrics:v1 | Collect metrics based on user events |
| web-authn:v1 | W3C Web Authentication (WebAuthn) |
14.3.1. Disabled by default Copy linkLink copied to clipboard!
The following list contains supported features that are disabled by default, and can be enabled if needed.
| Feature | Description |
|---|---|
| docker:v1 | Docker Registry protocol |
| fips:v1 | FIPS 140-2 mode |
| multi-site:v1 | Multi-site support |
14.4. Preview features Copy linkLink copied to clipboard!
Preview features are disabled by default and are not recommended for use in production. These features may change or be removed at a future release.
| Feature | Description |
|---|---|
| admin-fine-grained-authz:v1 | Fine-Grained Admin Permissions |
| client-auth-federated:v1 | Authenticates client based on assertions issued by identity provider |
| client-secret-rotation:v1 | Client Secret Rotation |
| log-mdc:v1 | Mapped Diagnostic Context (MDC) information in logs |
| rolling-updates:v2 | Rolling Updates for patch releases |
| scripts:v1 | Write custom authenticators using JavaScript |
| spiffe:v1 | SPIFFE trust relationship provider |
| token-exchange:v1 | Token Exchange Service |
14.5. Deprecated features Copy linkLink copied to clipboard!
The following list contains deprecated features that will be removed in a future release. These features are disabled by default.
| Feature | Description |
|---|---|
| instagram-broker:v1 | Instagram Identity Broker |
| login:v1 | Legacy Login Theme |
| logout-all-sessions:v1 | Logout all sessions logs out only regular sessions |
| passkeys-conditional-ui-authenticator:v1 | Passkeys conditional UI authenticator |
14.6. Relevant options Copy linkLink copied to clipboard!
| Option | Description |
|---|---|
| 🛠 features | Enabled features. Accepts a comma-separated list of feature names, or preview to enable all preview features. |
| 🛠 features-disabled | Disabled features. Accepts a comma-separated list of feature names. |
Chapter 15. Configuring providers Copy linkLink copied to clipboard!
Configure providers for Red Hat build of Keycloak.
The server is built with extensibility in mind and provides a number of Service Provider Interfaces (SPIs), each one responsible for a specific server capability. This chapter covers the core concepts around the configuration of SPIs and their respective providers.
After reading this chapter, you should be able to install, uninstall, enable, disable, and configure any provider, including those you have implemented yourself to extend the server capabilities.
15.1. Configuration option format Copy linkLink copied to clipboard!
Providers can be configured by using a specific configuration format. The format consists of:
spi-<spi-id>--<provider-id>--<property>=<value>
Or if there is no possibility of ambiguity between multiple providers:
spi-<spi-id>-<provider-id>-<property>=<value>
The <spi-id> is the name of the SPI you want to configure.
The <provider-id> is the id of the provider you want to configure.
The <property> is the name of the property you want to set, such as enabled.
All those names (for SPI, provider, and property) should be in lower case. If a name is in camel case, such as myKeycloakProvider, it must include a dash (-) before each upper-case letter: my-keycloak-provider.
Taking the HttpClientSpi SPI as an example, the name of the SPI is connectionsHttpClient and one of the provider implementations available is named default. To set the connectionPoolSize property, you would use a configuration option as follows:
spi-connections-http-client--default--connection-pool-size=10
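Following the per-source formats described in Chapter 1, the same option can be expressed in each configuration source. The environment-variable form below is a sketch based on the usual translation (dashes become underscores, letters are uppercased); verify it against your distribution:

```shell
# Command line:
bin/kc.[sh|bat] start --spi-connections-http-client--default--connection-pool-size=10

# Environment variable (assumed mapping: dashes to underscores, uppercased):
export KC_SPI_CONNECTIONS_HTTP_CLIENT__DEFAULT__CONNECTION_POOL_SIZE=10

# conf/keycloak.conf:
# spi-connections-http-client--default--connection-pool-size=10
```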
15.1.1. Setting a provider configuration option Copy linkLink copied to clipboard!
Provider configuration options are provided when starting the server. See all supported configuration sources and formats for options in Configuring Red Hat build of Keycloak. For example, via a command-line option:
Setting the connection-pool-size for the default provider of the connections-http-client SPI
bin/kc.[sh|bat] start --spi-connections-http-client--default--connection-pool-size=10
15.2. Build time options Copy linkLink copied to clipboard!
15.2.1. Configuring a single provider for an SPI Copy linkLink copied to clipboard!
Depending on the SPI, multiple provider implementations can co-exist but only one of them is going to be used at runtime. For these SPIs, a specific provider is the primary implementation that is going to be active and used at runtime. The format consists of:
spi-<spi-id>--provider=<provider-id>
spi-<spi-id>-provider=<provider-id>
To configure a provider as the single provider, run the build command:
Marking the mycustomprovider provider as the single provider for the email-template SPI
bin/kc.[sh|bat] build --spi-email-template--provider=mycustomprovider
15.2.2. Configuring a default provider for an SPI Copy linkLink copied to clipboard!
Depending on the SPI, multiple provider implementations can co-exist and one is used by default. For these SPIs, a specific provider is the default implementation that is going to be selected unless a specific provider is requested. The format consists of:
spi-<spi-id>--provider-default=<provider-id>
spi-<spi-id>-provider-default=<provider-id>
The following logic is used to determine the default provider:
- The explicitly configured default provider
- The provider with the highest order (providers with order <= 0 are ignored)
- The provider with the id set to default
To configure a provider as the default provider, run the build command:
Marking the mycustomprovider provider as the default provider for the password-hashing SPI
bin/kc.[sh|bat] build --spi-password-hashing--provider-default=mycustomprovider
15.2.3. Enabling and disabling a provider Copy linkLink copied to clipboard!
The format consists of:
spi-<spi-id>--<provider-id>--enabled=<boolean>
spi-<spi-id>-<provider-id>-enabled=<boolean>
To enable or disable a provider, run the build command:
Enabling a provider
bin/kc.[sh|bat] build --spi-email-template--mycustomprovider--enabled=true
To disable a provider, use the same command and set the enabled property to false.
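Applying that to the provider enabled above:

```shell
bin/kc.[sh|bat] build --spi-email-template--mycustomprovider--enabled=false
```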
15.3. Installing and uninstalling a provider Copy linkLink copied to clipboard!
Custom providers should be packaged in a Java Archive (JAR) file and copied to the providers directory of the server distribution. After that, run the build command.
This step is needed in order to optimize the server runtime so that all providers are known ahead-of-time rather than discovered only when starting the server or at runtime.
Do not install untrusted provider JARs! There is a single class loader for the entire application, and any JAR placed in the providers directory is loaded into it.
To uninstall a provider, remove the JAR file from the providers directory and run the build command again.
15.4. Using third-party dependencies Copy linkLink copied to clipboard!
When implementing a provider, you might need a third-party dependency that is not available in the server distribution. In this case, copy the additional dependency into the providers directory and run the build command afterwards.
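As a sketch of the installation steps, assuming a custom provider packaged as my-provider.jar with a hypothetical third-party dependency some-library.jar (both names are placeholders), run from the server distribution directory:

```shell
# Copy the provider JAR and its third-party dependency into the providers directory:
cp my-provider.jar providers/
cp some-library.jar providers/

# Re-run the build so all providers are known ahead of time:
bin/kc.[sh|bat] build
```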
Chapter 16. Configuring logging Copy linkLink copied to clipboard!
Configure logging for Red Hat build of Keycloak.
Red Hat build of Keycloak uses the JBoss Logging framework. The following is a high-level overview of the available log handlers, which share the common parent log handler root:
- console
- file
- syslog
16.1. Logging configuration Copy linkLink copied to clipboard!
Logging is done on a per-category basis in Red Hat build of Keycloak. You can configure logging for the root log level or for more specific categories such as org.hibernate or org.keycloak.
This chapter describes how to configure logging.
16.1.1. Log levels Copy linkLink copied to clipboard!
The following table defines the available log levels.
| Level | Description |
|---|---|
| FATAL | Critical failures with complete inability to serve any kind of request. |
| ERROR | A significant error or problem leading to the inability to process requests. |
| WARN | A non-critical error or problem that might not require immediate correction. |
| INFO | Red Hat build of Keycloak lifecycle events or important information. Low frequency. |
| DEBUG | More detailed information for debugging purposes, such as database logs. Higher frequency. |
| TRACE | Most detailed debugging information. Very high frequency. |
| ALL | Special level for all log messages. |
| OFF | Special level to turn logging off entirely (not recommended). |
16.1.2. Configuring the root log level Copy linkLink copied to clipboard!
When no log level configuration exists for a more specific category logger, the enclosing category is used instead. When there is no enclosing category, the root logger level is used.
To set the root log level, enter the following command:
bin/kc.[sh|bat] start --log-level=<root-level>
Use these guidelines for this command:
- For <root-level>, supply a level defined in the preceding table.
- The log level is case-insensitive. For example, you could use either DEBUG or debug.
- If you accidentally set the log level twice, the last occurrence in the list becomes the log level. For example, if you included the syntax --log-level="info,…,DEBUG,…", the root logger would be DEBUG.
16.1.3. Configuring category-specific log levels Copy linkLink copied to clipboard!
You can set different log levels for specific areas in Red Hat build of Keycloak. Use this command to provide a comma-separated list of categories for which you want a different log level:
bin/kc.[sh|bat] start --log-level="<root-level>,<org.category1>:<org.category1-level>"
A configuration that applies to a category also applies to its sub-categories unless you include a more specific matching sub-category.
Example
bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info"
This example sets the following log levels:
- Root log level for all loggers is set to INFO.
- The Hibernate log level in general is set to debug.
- To keep SQL abstract syntax trees from creating verbose log output, the specific subcategory org.hibernate.hql.internal.ast is set to info. As a result, the SQL abstract syntax trees are omitted instead of appearing at the debug level.
16.1.4. Adding context for log messages Copy linkLink copied to clipboard!
Logging with Mapped Diagnostic Context (MDC) is a preview feature and is not fully supported. This feature is disabled by default.
You can enable additional context information for each log line, such as the current realm and the client that is executing the request.
Use the log-mdc-enabled option as follows:
Example configuration
bin/kc.[sh|bat] start --features=log-mdc --log-mdc-enabled=true
Example output
2025-06-20 14:13:01,772 {kc.clientId=security-admin-console, kc.realmName=master} INFO ...
Specify which keys are added by setting the log-mdc-keys configuration option.
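Following the document's option conventions, selecting specific MDC keys looks like the sketch below, where <key> stands for a key name supported by your version:

```shell
bin/kc.[sh|bat] start --features=log-mdc --log-mdc-enabled=true --log-mdc-keys=<key>[,<key>]
```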
16.1.5. Configuring levels as individual options Copy linkLink copied to clipboard!
When configuring category-specific log levels, you can also set the log levels as individual log-level-<category> options instead of including them in the log-level option.
Example
If you start the server as:
bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug"
you can then set the environment variable KC_LOG_LEVEL_ORG_KEYCLOAK=trace to change the log level for the org.keycloak category to trace.
The log-level-<category> options take precedence over the categories specified in the log-level option. In the example, setting KC_LOG_LEVEL_ORG_HIBERNATE=trace would change the level of the org.hibernate category from debug (set via log-level) to trace.
Bear in mind that when using environment variables, the category name must be in uppercase and the dots must be replaced with underscores. When using other configuration sources, the category name must be specified as is, for example:
bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug" --log-level-org.keycloak=trace
16.2. Enabling log handlers Copy linkLink copied to clipboard!
To enable log handlers, enter the following command:
bin/kc.[sh|bat] start --log="<handler1>,<handler2>"
The available handlers are:
- console
- file
- syslog
The more specific handler configuration described below takes effect only when the handler is included in this comma-separated list.
16.2.1. Specify log level for each handler Copy linkLink copied to clipboard!
The log-level option sets the root and category log levels. To set a log level for a particular handler, use a property in the format log-<handler>-level, where <handler> is the name of the handler. The properties for handler log levels are as follows:
- log-console-level - Console log handler
- log-file-level - File log handler
- log-syslog-level - Syslog log handler
The log-<handler>-level properties are available only when the particular log handler is enabled.
Only the log levels listed in Section 16.1.1, “Log levels” are accepted, and they must be in lowercase. There is no support yet for specifying particular categories for log handlers.
16.2.1.1. General principle Copy linkLink copied to clipboard!
Setting a log level for a particular handler does not override the root level specified by the log-level option; it can only restrict it further. A log record appears in a handler's output only if both the handler level and the root log-level permit it. The default handler level is all, which applies no restriction.
16.2.1.2. Examples Copy linkLink copied to clipboard!
Example: debug for file handler, but info for console handler:
bin/kc.[sh|bat] start --log=console,file --log-level=debug --log-console-level=info
The root log level is set to debug, so the file handler outputs debug records, while the console handler is restricted to info.
Example: warn for all handlers, but debug for file handler:
bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug --log-console-level=warn --log-syslog-level=warn
The root level must be set to the most verbose level required (debug in this case). The console and Syslog handlers are restricted to warn, while the file handler inherits the root debug level.
Example: info for all handlers, but debug+org.keycloak.events:trace for Syslog handler:
bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug,org.keycloak.events:trace --log-syslog-level=trace --log-console-level=info --log-file-level=info
To see the org.keycloak.events trace records in the Syslog output, the Syslog handler level must be set to trace, while the console and file handlers are restricted to info.
16.2.2. Use different JSON format for log handlers Copy linkLink copied to clipboard!
Every log handler can produce structured log output in JSON format. Enable it with a property in the format log-<handler>-output=json, where <handler> is the name of the handler.
If you need a different format for the produced JSON, you can use the following JSON output formats:
- default (default)
- ecs
The ecs value stands for Elastic Common Schema (ECS).
ECS is an open-source, community-driven specification that defines a common set of fields to be used with Elastic solutions. The ECS specification is being converged with OpenTelemetry Semantic Conventions with the goal of creating a single standard maintained by OpenTelemetry.
To change the JSON output format, use a property in the format log-<handler>-json-format, where <handler> is the name of the handler:
- log-console-json-format - Console log handler
- log-file-json-format - File log handler
- log-syslog-json-format - Syslog log handler
16.2.2.1. Example Copy linkLink copied to clipboard!
If you want to have JSON logs in ECS (Elastic Common Schema) format for the console log handler, you can enter the following command:
bin/kc.[sh|bat] start --log-console-output=json --log-console-json-format=ecs
Example Log Message
{"@timestamp":"2025-02-03T14:53:22.539484211+01:00","event.sequence":9608,"log.logger":"io.quarkus","log.level":"INFO","message":"Keycloak 999.0.0-SNAPSHOT on JVM (powered by Quarkus 3.17.8) started in 4.615s. Listening on: http://0.0.0.0:8080","process.thread.name":"main","process.thread.id":1,"mdc":{},"ndc":"","host.hostname":"host-name","process.name":"/usr/lib/jvm/jdk-21.0.3+9/bin/java","process.pid":77561,"data_stream.type":"logs","ecs.version":"1.12.2","service.environment":"prod","service.name":"Keycloak","service.version":"999.0.0-SNAPSHOT"}
16.2.3. Asynchronous logging Copy linkLink copied to clipboard!
Red Hat build of Keycloak supports asynchronous logging, which might be useful for deployments requiring high throughput and low latency. Asynchronous logging uses a separate thread to process all log records. The logging handlers are invoked in exactly the same way as with synchronous logging, only in separate threads. You can enable asynchronous logging for all Red Hat build of Keycloak log handlers; a dedicated thread is created for every handler with asynchronous logging enabled.
The underlying mechanism uses a queue for processing log records. Every new log record is added to the queue and then published to the particular log handler. Every log handler has its own queue.
If the queue is full, the logging call blocks the calling thread and waits for free space in the queue.
16.2.3.1. When to use asynchronous logging Copy linkLink copied to clipboard!
- You need lower latencies for incoming requests
- You need higher throughput
- You have a small worker thread pool and want to offload logging to separate threads
- You want to reduce the impact of I/O-heavy log handlers
- You are logging to remote destinations (e.g., network syslog servers) and want to avoid blocking worker threads
Be aware that enabling asynchronous logging might bring some additional memory overhead due to the additional separate thread and the inner queue. In that case, it is not recommended to use it for resource-constrained environments. Additionally, unexpected server shutdowns create a risk of losing log records.
16.2.3.2. Enable asynchronous logging Copy linkLink copied to clipboard!
You can enable asynchronous logging globally for all log handlers by using the log-async option:
bin/kc.[sh|bat] start --log-async=true
Or you can enable asynchronous logging for a specific handler by using a property in the format log-<handler>-async, where <handler> is the name of the handler. If a handler-specific option is not set, the value of the log-async option is used.
You can use these properties as follows:
bin/kc.[sh|bat] start --log-console-async=true --log-file-async=true --log-syslog-async=true
- log-console-async - Console log handler
- log-file-async - File log handler
- log-syslog-async - Syslog log handler
16.2.3.3. Change queue length Copy linkLink copied to clipboard!
You can change the size of the queue used for the asynchronous logging. The default size is 512 log records in the queue.
You can change the queue length as follows:
bin/kc.[sh|bat] start --log-console-async-queue-length=512 --log-file-async-queue-length=512 --log-syslog-async-queue-length=512
These properties are available only when asynchronous logging is enabled for these specific log handlers.
16.2.4. HTTP Access Logging Copy linkLink copied to clipboard!
Red Hat build of Keycloak supports HTTP access logging to record details of incoming HTTP requests. While access logs are often used for debugging and traffic analysis, they are also important for security auditing and compliance monitoring, helping administrators track access patterns, identify suspicious activity, and maintain audit trails.
These logs are written at the INFO level. Make sure that level is enabled for the relevant category, either globally (log-level=info) or for the access log category (log-level=org.keycloak.http.access-log:info); otherwise, the INFO access log records are filtered out.
16.2.4.1. How to enable Copy linkLink copied to clipboard!
You can enable HTTP access logging by using the http-access-log-enabled option:
bin/kc.[sh|bat] start --http-access-log-enabled=true
16.2.4.2. Change log format/pattern Copy linkLink copied to clipboard!
You can change the format/pattern of the access log records by using the http-access-log-pattern option:
bin/kc.[sh|bat] start --http-access-log-pattern=combined
Predefined named patterns:
- common (default) - prints basic information about the request
- combined - prints basic information about the request plus the referer and user agent
- long - prints comprehensive information about the request, including all its headers
You can even specify your own pattern with your required data to be logged, such as:
bin/kc.[sh|bat] start --http-access-log-pattern='%A %{METHOD} %{REQUEST_URL} %{i,User-Agent}'
HTTP access logs may contain sensitive HTTP headers such as Authorization and Cookie, especially when using the long pattern. Take this into account before enabling it.
Consult the Quarkus documentation for the full list of variables that can be used.
16.2.4.3. Exclude specific URL paths Copy linkLink copied to clipboard!
It is possible to exclude specific URL paths from the HTTP access logging, so they will not be recorded.
You can use regular expressions to exclude them, such as:
bin/kc.[sh|bat] start --http-access-log-exclude='/realms/my-internal-realm/.*'
In this case, no calls to paths under /realms/my-internal-realm/ are recorded.
16.3. Console log handler Copy linkLink copied to clipboard!
The console log handler is enabled by default, providing unstructured log messages for the console.
16.3.1. Configuring the console log format Copy linkLink copied to clipboard!
Red Hat build of Keycloak uses a pattern-based logging formatter that generates human-readable text logs by default.
The logging format template for these lines can be applied at the root level. The default format template is:
%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n
The format string supports the symbols in the following table:
| Symbol | Summary | Description |
|---|---|---|
| %% | % | Renders a simple % character. |
| %c | Category | Renders the log category name. |
| %d{xxx} | Date | Renders a date with the given date format string. The string syntax is defined by java.text.SimpleDateFormat. |
| %e | Exception | Renders a thrown exception. |
| %h | Hostname | Renders the simple host name. |
| %H | Qualified host name | Renders the fully qualified hostname, which may be the same as the simple host name, depending on the OS configuration. |
| %i | Process ID | Renders the current process PID. |
| %m | Full Message | Renders the log message and an exception, if thrown. |
| %n | Newline | Renders the platform-specific line separator string. |
| %N | Process name | Renders the name of the current process. |
| %p | Level | Renders the log level of the message. |
| %r | Relative time | Renders the time in milliseconds since the start of the application log. |
| %s | Simple message | Renders only the log message without exception trace. |
| %t | Thread name | Renders the thread name. |
| %t{id} | Thread ID | Renders the thread ID. |
| %z{<zone name>} | Timezone | Sets the time zone of the log output to <zone name>. |
| %L | Line number | Renders the line number of the log message. |
16.3.2. Setting the logging format Copy linkLink copied to clipboard!
To set the logging format for a logged line, perform these steps:
- Build your desired format template using the preceding table.
Enter the following command:
bin/kc.[sh|bat] start --log-console-format="'<format>'"
Note that you need to escape characters when invoking commands containing special shell characters such as the semicolon (;). Therefore, consider setting the format in the configuration file instead.
Example: Abbreviate the fully qualified category name
bin/kc.[sh|bat] start --log-console-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'"
This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c].
16.3.3. Configuring JSON or plain console logging Copy linkLink copied to clipboard!
By default, the console log handler logs plain unstructured data to the console. To use structured JSON log output instead, enter the following command:
bin/kc.[sh|bat] start --log-console-output=json
Example Log Message
{"timestamp":"2025-02-03T14:52:20.290353085+01:00","sequence":9605,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Keycloak 999.0.0-SNAPSHOT on JVM (powered by Quarkus 3.17.8) started in 4.440s. Listening on: http://0.0.0.0:8080","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host-name","processName":"/usr/lib/jvm/jdk-21.0.3+9/bin/java","processId":76944}
When using JSON output, colors are disabled and the format settings set by --log-console-format are ignored.
To use unstructured logging, enter the following command:
bin/kc.[sh|bat] start --log-console-output=default
Example Log Message
2025-02-03 14:53:56,653 INFO [io.quarkus] (main) Keycloak 999.0.0-SNAPSHOT on JVM (powered by Quarkus 3.17.8) started in 4.795s. Listening on: http://0.0.0.0:8080
16.3.4. Colors Copy linkLink copied to clipboard!
Colored console log output for unstructured logs is disabled by default. Colors may improve readability, but they can cause problems when shipping logs to external log aggregation systems. To enable or disable color-coded console log output, enter following command:
bin/kc.[sh|bat] start --log-console-color=<false|true>
16.3.5. Configuring the console log level Copy linkLink copied to clipboard!
The log level for the console log handler can be specified with the --log-console-level option:
bin/kc.[sh|bat] start --log-console-level=warn
For more information, see Section 16.2.1, “Specify log level for each handler”.
16.4. File logging Copy linkLink copied to clipboard!
As an alternative to logging to the console, you can use unstructured logging to a file.
16.4.1. Enable file logging Copy linkLink copied to clipboard!
Logging to a file is disabled by default. To enable it, enter the following command:
bin/kc.[sh|bat] start --log="console,file"
A log file named keycloak.log is created in the data/log directory of the server distribution.
16.4.2. Configuring the location and name of the log file Copy linkLink copied to clipboard!
To change where the log file is created and the file name, perform these steps:
Create a writable directory to store the log file.
If the directory is not writable, Red Hat build of Keycloak still starts, but it issues an error and no log file is created.
Enter this command:
bin/kc.[sh|bat] start --log="console,file" --log-file=<path-to>/<your-file.log>
16.4.3. Configuring the file handler format Copy linkLink copied to clipboard!
To configure a different logging format for the file log handler, enter the following command:
bin/kc.[sh|bat] start --log-file-format="<pattern>"
See Section 16.3.1, “Configuring the console log format” for more information and a table of the available pattern configuration.
16.4.4. Configuring the file log level Copy linkLink copied to clipboard!
The log level for the file log handler can be specified with the --log-file-level option:
bin/kc.[sh|bat] start --log-file-level=warn
For more information, see Section 16.2.1, “Specify log level for each handler”.
16.5. Centralized logging using Syslog Copy linkLink copied to clipboard!
Red Hat build of Keycloak provides the ability to send logs to a remote Syslog server. It utilizes the protocol defined in RFC 5424.
16.5.1. Enable the Syslog handler Copy linkLink copied to clipboard!
To enable logging using Syslog, add it to the list of activated log handlers as follows:
bin/kc.[sh|bat] start --log="console,syslog"
16.5.2. Configuring the Syslog Application Name Copy linkLink copied to clipboard!
To set a different application name, add the --log-syslog-app-name option as follows:
bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-app-name=kc-p-itadmins
If not set, the application name defaults to keycloak.
16.5.3. Configuring the Syslog endpoint Copy linkLink copied to clipboard!
To configure the endpoint (host:port) of your centralized logging system, enter the following command and substitute the values with your specific values:
bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-endpoint=myhost:12345
When the Syslog handler is enabled, the host defaults to localhost and the port to 514.
16.5.4. Configuring the Syslog log level Copy linkLink copied to clipboard!
The log level for the Syslog log handler can be specified with the --log-syslog-level option:
bin/kc.[sh|bat] start --log-syslog-level=warn
For more information, see Section 16.2.1, “Specify log level for each handler”.
16.5.5. Configuring the Syslog protocol Copy linkLink copied to clipboard!
Syslog uses TCP as the default protocol for communication. To use UDP instead of TCP, add the --log-syslog-protocol option as follows:
bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-protocol=udp
The available protocols are:
- tcp
- udp
- ssl-tcp
16.5.6. Configuring the Syslog counting framing Copy linkLink copied to clipboard!
By default, Syslog messages sent over TCP or SSL-TCP are prefixed with the message size, as required by certain Syslog receivers. This behavior is controlled by the --log-syslog-counting-framing option.
To explicitly enable or disable this feature, use the following command:
bin/kc.[sh|bat] start --log-syslog-counting-framing=true
You can set the value to one of the following:
- protocol-dependent (default) - Enable counting framing only when the log-syslog-protocol is tcp or ssl-tcp.
- true - Always enable counting framing by prefixing messages with their size.
- false - Never use counting framing.
Note that the protocol-dependent value applies counting framing automatically only for the protocols that require it.
16.5.7. Configuring the Syslog log format Copy linkLink copied to clipboard!
To set the logging format for a logged line, perform these steps:
- Build your desired format template using the preceding table.
Enter the following command:
bin/kc.[sh|bat] start --log-syslog-format="'<format>'"
Note that you need to escape characters when invoking commands containing special shell characters such as the semicolon (;). Therefore, consider setting the format in the configuration file instead.
Example: Abbreviate the fully qualified category name
bin/kc.[sh|bat] start --log-syslog-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'"
This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c].
16.5.8. Configuring the Syslog type Copy linkLink copied to clipboard!
Syslog uses different message formats based on particular RFC specifications. To change the Syslog type to use a different message format, use the --log-syslog-type option as follows:
bin/kc.[sh|bat] start --log-syslog-type=rfc3164
Possible values for the --log-syslog-type option are:
- rfc5424 (default)
- rfc3164
The preferred Syslog type is RFC 5424, which obsoletes RFC 3164, known as the BSD syslog protocol.
16.5.9. Configuring the Syslog maximum message length Copy linkLink copied to clipboard!
To set the maximum length of the message allowed to be sent (in bytes), use the --log-syslog-max-length option as follows:
bin/kc.[sh|bat] start --log-syslog-max-length=1536
The length can be specified in memory-size format with the appropriate suffix, such as 1k or 1K. If the length is not explicitly set, the default value is based on the --log-syslog-type option:
- 2048B for RFC 5424
- 1024B for RFC 3164
16.5.10. Configuring the Syslog structured output Copy linkLink copied to clipboard!
By default, the Syslog log handler sends plain unstructured data to the Syslog server. To use structured JSON log output instead, enter the following command:
bin/kc.[sh|bat] start --log-syslog-output=json
Example Log Message
2024-04-05T12:32:20.616+02:00 host keycloak 2788276 io.quarkus - {"timestamp":"2024-04-05T12:32:20.616208533+02:00","sequence":9948,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Profile prod activated. ","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host","processName":"QuarkusEntryPoint","processId":2788276}
When using JSON output, colors are disabled and the format settings set by --log-syslog-format are ignored.
To use unstructured logging, enter the following command:
bin/kc.[sh|bat] start --log-syslog-output=default
Example Log Message
2024-04-05T12:31:38.473+02:00 host keycloak 2787568 io.quarkus - 2024-04-05 12:31:38,473 INFO [io.quarkus] (main) Profile prod activated.
As you can see, the timestamp is present twice; you can amend this by changing the --log-syslog-format option accordingly.
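Using the pattern symbols from the console format table, a sketch that drops the duplicate timestamp by omitting %d from the handler format:

```shell
bin/kc.[sh|bat] start --log=console,syslog --log-syslog-output=default --log-syslog-format="'%-5p [%c] (%t) %s%e%n'"
```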
16.6. Relevant options Copy linkLink copied to clipboard!
| Option | Description |
|---|---|
| log | Enabled log handlers. A comma-separated list of console, file, and syslog. |
| log-level | The root log level and optional category-specific log levels. |
| log-level-<category> | The log level for a specific category. Takes precedence over log-level. |
| log-async | Enables asynchronous logging for all log handlers. |
| 🛠 log-mdc-enabled | Includes Mapped Diagnostic Context (MDC) information in log messages. Available only when the log-mdc preview feature is enabled. |
| log-mdc-keys | The MDC keys to include. Available only when MDC logging is enabled. |
16.6.1. Console Copy linkLink copied to clipboard!
| Option | Description |
|---|---|
| log-console-async | Enables asynchronous logging for the console handler. Available only when the console log handler is activated. |
| log-console-async-queue-length | The queue length for asynchronous console logging. Available only when asynchronous logging is enabled. |
| log-console-color | Enables colored console output. Available only when the console log handler is activated. |
| log-console-format | The format of unstructured console log entries. Available only when the console log handler is activated. |
| log-console-json-format | The JSON output format (default or ecs). Available only when output is set to json. |
| log-console-level | The log level for the console handler. Available only when the console log handler is activated. |
| log-console-output | The console output mode (default or json). Available only when the console log handler is activated. |
16.6.2. File
| Value | |
|---|---|
|
Available only when File log handler is activated | (default) |
|
Available only when File log handler is activated |
|
|
Available only when File log handler is activated and asynchronous logging is enabled | (default) |
|
Available only when File log handler is activated | (default) |
|
Available only when File log handler and MDC logging are activated |
|
|
Available only when File log handler and Tracing is activated |
|
|
Available only when File log handler is activated and output is set to 'json' |
|
|
Available only when File log handler is activated |
|
|
Available only when File log handler is activated |
|
16.6.3. Syslog
| Value | |
|---|---|
|
Available only when Syslog is activated | (default) |
|
Available only when Syslog is activated |
|
|
Available only when Syslog is activated and asynchronous logging is enabled | (default) |
|
Available only when Syslog is activated |
|
|
Available only when Syslog is activated | (default) |
|
Available only when Syslog is activated | (default) |
|
Available only when Syslog handler and MDC logging are activated |
|
|
Available only when Syslog handler and Tracing is activated |
|
|
Available only when Syslog is activated and output is set to 'json' |
|
|
Available only when Syslog is activated |
|
|
Available only when Syslog is activated | |
|
Available only when Syslog is activated |
|
|
Available only when Syslog is activated |
|
|
Available only when Syslog is activated |
|
16.6.4. HTTP Access log
| Value | |
|---|---|
|
|
|
|
Available only when HTTP Access log is enabled | |
|
Available only when HTTP Access log is enabled |
|
Chapter 17. FIPS 140-2 support
Configure Red Hat build of Keycloak server for FIPS compliance.
The Federal Information Processing Standard Publication 140-2 (FIPS 140-2) is a U.S. government computer security standard used to approve cryptographic modules. Red Hat build of Keycloak supports running in FIPS 140-2 compliant mode. In this case, Red Hat build of Keycloak uses only FIPS approved cryptography algorithms for its functionality.
To run in FIPS 140-2 compliant mode, Red Hat build of Keycloak should run on a FIPS 140-2 enabled system. This requirement usually assumes RHEL or Fedora where FIPS was enabled during installation. See the RHEL documentation for the details. When the system is in FIPS mode, it makes sure that the underlying OpenJDK is in FIPS mode as well and uses only FIPS enabled security providers.
To check whether the system is in FIPS mode, enter the following command:
fips-mode-setup --check
If the system is not in FIPS mode, you can enable it with the following command. However, it is recommended to enable FIPS mode during installation rather than afterwards:
fips-mode-setup --enable
17.1. BouncyCastle library
Red Hat build of Keycloak internally uses the BouncyCastle library for many cryptography utilities. Note that the default version of the BouncyCastle library shipped with Red Hat build of Keycloak is not FIPS compliant; however, BouncyCastle also provides a FIPS validated version of its library. The FIPS validated BouncyCastle library is not shipped with Red Hat build of Keycloak because Red Hat build of Keycloak cannot provide official support for it. Therefore, to run in FIPS compliant mode, you need to download the BouncyCastle-FIPS bits and add them to the Red Hat build of Keycloak distribution. When Red Hat build of Keycloak executes in FIPS mode, it will use the BCFIPS bits instead of the default BouncyCastle bits, which achieves FIPS compliance.
17.1.1. BouncyCastle FIPS bits
BouncyCastle FIPS can be downloaded from the BouncyCastle official page. Then you can add the bits to the KEYCLOAK_HOME/providers directory:
- bc-fips version 2.1.2.
- bctls-fips version 2.1.22.
- bcpkix-fips version 2.1.10.
- bcutil-fips version 2.1.5.
17.2. Generating keystore
You can create either a pkcs12 or a bcfks keystore.
17.2.1. PKCS12 keystore
The p12 (or pkcs12) keystore type can be used when running in BCFIPS non-approved mode.
PKCS12 keystore can be generated with OpenJDK 21 Java on RHEL 9 in the standard way. For instance, the following command can be used to generate the keystore:
keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword \
-keystore $KEYCLOAK_HOME/conf/server.keystore \
-alias localhost \
-dname CN=localhost -keypass passwordpassword
The pkcs12 keystore works well with the BCFIPS provider in non-approved mode.
When the system is in FIPS mode, the default java.security file is already changed to use FIPS enabled security providers, so no additional configuration is needed.
17.2.2. BCFKS keystore
BCFKS keystore generation requires the use of the BouncyCastle FIPS libraries and a custom security file.
You can start by creating a helper file, such as /tmp/kc.keystore-create.java.security, with the following content:
securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS
Next, enter a command such as the following to generate the keystore:
keytool -keystore $KEYCLOAK_HOME/conf/server.keystore \
-storetype bcfks \
-providername BCFIPS \
-providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \
-provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \
-providerpath $KEYCLOAK_HOME/providers/bc-fips-*.jar \
-alias localhost \
-genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword \
-dname CN=localhost -keypass passwordpassword \
-J-Djava.security.properties=/tmp/kc.keystore-create.java.security
Using self-signed certificates is for demonstration purposes only, so replace these certificates with proper certificates when you move to a production environment.
Similar options are needed when you are doing any other manipulation with a keystore/truststore of bcfks type.
17.3. Running the server
- To run the server with BCFIPS in non-approved mode, enter the following command:
bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE
In non-approved mode, the default keystore type (as well as the default truststore type) is PKCS12. Hence, if you generated a BCFKS keystore as described above, you must also use the option --https-key-store-type=bcfks.
You can disable the TRACE logging in production if everything works as expected.
17.4. Strict mode
There is the fips-mode option, which is automatically set to non-strict when the fips feature is enabled. This means running BCFIPS in non-approved mode. To run BCFIPS in approved mode, start the server with the following parameters:
--features=fips --fips-mode=strict
In strict mode, the default keystore type (as well as the default truststore type) is BCFKS. If you want to use a different keystore type, you must use the --https-key-store-type option.
When starting the server, you can include TRACE level in the startup command. For example:
--log-level=INFO,org.keycloak.common.crypto.CryptoIntegration:TRACE
By using TRACE level, you can check that the startup log contains the KC security provider with a note about BCFIPS in Approved Mode, similar to the following:
KC(BCFIPS version 2.0102 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,
17.4.1. Cryptography restrictions in strict mode
-
As mentioned in the previous section, strict mode may not work with the pkcs12 keystore. It is required to use another keystore (like bcfks) as mentioned earlier. Also, jks and pkcs12 keystores are not supported in Red Hat build of Keycloak when using strict mode. Some examples are importing or generating a keystore of an OIDC or SAML client in the Admin Console or for a java-keystore provider in the realm keys.
User passwords must be 14 characters or longer. Red Hat build of Keycloak uses PBKDF2 based password encoding by default. BCFIPS approved mode requires passwords to be at least 112 bits (effectively 14 characters) with PBKDF2 algorithm. If you want to allow a shorter password, set the property of provider
max-padding-lengthof SPIpbkdf2-sha512to 14 to provide additional padding when verifying a hash created by this algorithm. This setting is also backwards compatible with previously stored passwords. For example, if the user’s database is in a non-FIPS environment and you have shorter passwords and you want to verify them now with Red Hat build of Keycloak using BCFIPS in approved mode, the passwords should work. So effectively, you can use an option such as the following when starting the server:password-hashing
--spi-password-hashing--pbkdf2-sha512--max-padding-length=14
Using the option above does not break FIPS compliance. However, note that longer passwords are good practice anyway. For example, passwords auto-generated by modern browsers match this requirement as they are longer than 14 characters. If you want to omit the option for max-padding-length, you can set the password policy to your realms to have passwords at least 14 characters long.
When you are migrating from Red Hat build of Keycloak older than 24, or if you explicitly set the password policy to override the default hashing algorithm, it is possible that some of your users use an older algorithm like pbkdf2-sha256. In that case, also add the corresponding option for the pbkdf2-sha256 provider:
--spi-password-hashing--pbkdf2-sha256--max-padding-length=14
-
RSA keys of 1024 bits do not work (2048 is the minimum). This applies to keys used by the Red Hat build of Keycloak realm itself (realm keys from the Keys tab in the Admin Console), but also to client keys and IDP keys.
HMAC SHA-XXX keys must be at least 112 bits (or 14 characters long). For example if you use OIDC clients with the client authentication (or
Signed Jwt with Client Secretin the OIDC notation), then your client secrets should be at least 14 characters long. Note that for good security, it is recommended to use client secrets generated by the Red Hat build of Keycloak server, which always fulfils this requirement.client-secret-jwt -
The bc-fips version 1.0.2.4 deals with the end of the transition period for PKCS 1.5 RSA encryption. Therefore JSON Web Encryption (JWE) with algorithm is not allowed in strict mode by default (BC provides the system property
RSA1_5as backward compatibility option for the moment).-Dorg.bouncycastle.rsa.allow_pkcs15_enc=trueandRSA-OAEPare still available as before.RSA-OAEP-256
17.5. Other restrictions
To have SAML working, make sure that a XMLDSig security provider is available in your security providers. To have Kerberos working, make sure that a SunJGSS security provider is available. In a FIPS enabled OpenJDK, the XMLDSig security provider may not be enabled in the java.security file by default.
To have SAML working, you can manually add the provider into JAVA_HOME/conf/security/java.security:
fips.provider.7=XMLDSig
Adding this security provider should work well. In fact, it is FIPS compliant and is already added by default in OpenJDK 21 and newer versions of OpenJDK 17. Details are in the bugzilla.
It is recommended to look at JAVA_HOME/conf/security/java.security and check all the configured providers to make sure the number is unique. In other words, if fips.provider.7 is already taken, use the next available fips.provider.N number.
If you prefer not to edit your java.security file directly, you can create a custom security file, for example kc.java.security, with only the added provider, and supply its location to the server Java options:
-Djava.security.properties=/location/to/your/file/kc.java.security
For Kerberos/SPNEGO, the security provider SunJGSS is not yet fully FIPS compliant. Hence it is not recommended to add it to your list of security providers if you want to be FIPS compliant. The KERBEROS feature is disabled by default in Red Hat build of Keycloak when it is executed on a FIPS platform and the SunJGSS security provider is not available.
17.6. Run the CLI on the FIPS host
If you want to run the Client Registration CLI (kcreg.sh|bat script) or the Admin CLI (kcadm.sh|bat script) on the same FIPS host, copy the BCFIPS jar files into the client library directory:
cp $KEYCLOAK_HOME/providers/bc-fips-*.jar $KEYCLOAK_HOME/bin/client/lib/
cp $KEYCLOAK_HOME/providers/bctls-fips-*.jar $KEYCLOAK_HOME/bin/client/lib/
cp $KEYCLOAK_HOME/providers/bcutil-fips-*.jar $KEYCLOAK_HOME/bin/client/lib/
When trying to use a BCFKS truststore/keystore with the CLI, you may see issues because this truststore is not the default Java keystore type. It can be good to specify it as the default in the Java security properties. For example, run this command on Unix based systems before doing any operation with the kcadm|kcreg clients:
echo "keystore.type=bcfks
fips.keystore.type=bcfks" > /tmp/kcadm.java.security
export KC_OPTS="-Djava.security.properties=/tmp/kcadm.java.security"
17.7. Red Hat build of Keycloak server in FIPS mode in containers
When you want Red Hat build of Keycloak in FIPS mode to be executed inside a container, your "host" must be using FIPS mode as well. The container will then "inherit" FIPS mode from the parent host. See this section in the RHEL documentation for the details.
The Red Hat build of Keycloak container image will automatically be in FIPS mode when executed from a host in FIPS mode. However, make sure that the Red Hat build of Keycloak container also uses the BCFIPS jars (instead of the BC jars) and the proper options when started.
Regarding this, it is best to build your own container image as described in the Running Red Hat build of Keycloak in a container and tweak it to use BCFIPS etc.
For example, in the current directory, you can create a sub-directory named files and add:
- BC FIPS jar files as described above
- A custom keystore file, named for example keycloak-fips.keystore.bcfks
- A security file kc.java.security with the added provider for SAML (not needed with OpenJDK 21 or newer OpenJDK 17)
Then create a Containerfile in the current directory similar to the following:
Containerfile:
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4 as builder
ADD files /tmp/files/
WORKDIR /opt/keycloak
RUN cp /tmp/files/*.jar /opt/keycloak/providers/
RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore
RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/
RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict
FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then build the FIPS-enabled, optimized container image and start it as described in the Running Red Hat build of Keycloak in a container. These steps require that you use the arguments described above when starting the image.
17.8. Migration from non-FIPS environment
If you previously used Red Hat build of Keycloak in a non-FIPS environment, it is possible to migrate it, including its data, to a FIPS environment. However, restrictions and considerations exist as mentioned in previous sections, namely:
-
Starting with Red Hat build of Keycloak 25, the default algorithm for password hashing is argon2. However, this algorithm is not supported for FIPS 140-2. This means that if your users hashed their password with argon2, they will not be able to log in after the switch to the FIPS environment. If you plan to migrate to the FIPS environment, consider setting the Password policy for your realm from the beginning (before any users are created) and override the default algorithm, for example to pbkdf2-sha512, which is FIPS compliant. This strategy helps to make the migration to the FIPS environment smooth. Otherwise, if your users already have argon2 passwords, simply ask users to reset the password after migrating to the FIPS environment. For instance, ask users to use "Forget password" or send the email for reset-password to all users.
argon2, they will not be able to login after switch to the FIPS environment. If you plan to migrate to the FIPS environment, consider setting the Password policy for your realm from the beginning (before any users are created) and override the default algorithm for example toargon2, which is FIPS compliant. This strategy helps to make the migration to the FIPS environment to be smooth. Otherwise, if your users are already onpbkdf2-sha512passwords, simply ask users to reset the password after migrating to the FIPS environment. For instance, ask users to use "Forget password" or send the email for reset-password to all users.argon2 - Make sure all the Red Hat build of Keycloak functionality relying on keystores uses only supported keystore types. This differs based on whether strict or non-strict mode is used.
-
Kerberos authentication may not work. If your authentication flow uses the Kerberos authenticator, this authenticator will be automatically switched to DISABLED when migrated to the FIPS environment. It is recommended to remove any Kerberos user storage providers from your realm and disable Kerberos related functionality in LDAP providers before switching to the FIPS environment.
In addition to the preceding requirements, be sure to double-check the following before switching to FIPS strict mode:
- Make sure that all the Red Hat build of Keycloak functionality relying on keys (for example, realm or client keys) use RSA keys of at least 2048 bits
-
Make sure that clients relying on Signed JWT with Client Secret use secrets of at least 14 characters (ideally generated secrets)
Password length restriction as described earlier. In case your users have shorter passwords, be sure to start the server with the max padding length set to 14 of PBKDF2 provider as mentioned earlier. If you prefer to avoid this option, you can for instance ask all your users to reset their password (for example by the link) during the first authentication in the new environment.
Forgot password
17.9. Red Hat build of Keycloak FIPS mode on the non-FIPS system
Red Hat build of Keycloak is supported and tested on a FIPS enabled RHEL 8 system and the ubi8 image, as well as on RHEL 9 and the ubi9 image. Running Red Hat build of Keycloak in FIPS mode on a non-FIPS system is not supported.
If you are still restricted to running Red Hat build of Keycloak on such a system, you can at least update your security providers configured in the java.security file.
You can check the Red Hat build of Keycloak server log at startup to see if the correct security providers are used. TRACE logging should be enabled for crypto-related Red Hat build of Keycloak packages as described in the Keycloak startup command earlier.
Chapter 18. Configuring the Management Interface
Configure Red Hat build of Keycloak’s management interface for endpoints such as metrics and health checks.
The management interface allows accessing management endpoints via a different HTTP server than the primary one. It provides the possibility to hide endpoints like /metrics or /health from the outside world.
18.1. Management interface configuration
The management interface is turned on when something is exposed on it. Management endpoints such as /metrics and /health are exposed on the default management port 9000 when enabled.
If management interface properties are not explicitly set, their values are automatically inherited from the default HTTP server.
18.1.1. Port
In order to change the port for the management interface, you can use the Red Hat build of Keycloak option http-management-port.
18.1.2. Relative path
You can change the relative path of the management interface, as the prefix path for the management endpoints can be different. You can achieve it via the Red Hat build of Keycloak option http-management-relative-path.
For instance, if you set the CLI option --http-management-relative-path=/management, the metrics and health endpoints are accessible on the /management/metrics and /management/health paths.
Users are automatically redirected to the path where Red Hat build of Keycloak is hosted when the relative path is specified. This means that when the relative path is set to /management and a user accesses localhost:9000/, the request is redirected to localhost:9000/management.
If you do not explicitly set a value for it, the value from the http-relative-path option is used instead. For instance, if you set --http-relative-path=/auth, these endpoints are accessible on the /auth/metrics and /auth/health paths.
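Assuming default ports, the effect of a custom relative path can be checked with plain HTTP requests once the server is up. This is a sketch; the flags and hostname are illustrative:

```shell
# Sketch: start with a custom management relative path and metrics/health
# enabled, then probe the endpoints on the management port (9000 by default).
bin/kc.sh start --health-enabled=true --metrics-enabled=true \
  --http-management-relative-path=/management &

curl http://localhost:9000/management/health
curl http://localhost:9000/management/metrics
```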
18.1.3. TLS support
When TLS is set for the default Red Hat build of Keycloak server, the management interface will by default be accessible through HTTPS as well. The management interface can run only on either HTTP or HTTPS, not both as for the main server.
If you do not want the management interface to use HTTPS, you may set the http-management-scheme option to http.
Specific Red Hat build of Keycloak management interface options with the prefix https-management-* are available for setting TLS properties that apply only to the management interface.
18.1.4. Disable Management interface
The management interface is automatically turned off when nothing is exposed on it. Currently, only health checks and metrics can be exposed on the management interface. If you want to disable exposing them on the management interface, set the Red Hat build of Keycloak property legacy-observability-interface to true.
Exposing health and metrics endpoints on the default server is not recommended for security reasons, and you should always use the management interface instead. Beware, the legacy-observability-interface option is deprecated and will be removed in the future.
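As a sketch, the deprecated legacy behavior would be enabled like this, reverting to exposing health and metrics on the default HTTP server:

```shell
# Sketch: deprecated option; exposes /health and /metrics on the default
# HTTP server instead of the management interface.
bin/kc.sh start --health-enabled=true --metrics-enabled=true \
  --legacy-observability-interface=true
```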
18.2. Relevant options
| Value | |
|---|---|
| 🛠
Available only when health is enabled |
|
|
| (default) |
| 🛠
| (default) |
|
|
|
|
Available only when http-management-scheme is inherited | |
|
Available only when http-management-scheme is inherited | |
|
Available only when http-management-scheme is inherited | (default) |
| 🛠
|
|
|
Available only when http-management-scheme is inherited | |
|
Available only when http-management-scheme is inherited | (default) |
| 🛠
DEPRECATED. |
|
Chapter 19. Importing and exporting realms
Import and export realms as JSON files.
In this chapter, you will learn about the different approaches for importing and exporting realms using JSON files.
19.1. Import / Export Commands
Exporting and importing into single files can produce large files which may run the export / import process out of memory. If your database contains more than 50,000 users, export to a directory and not a single file. The default count of users per file is fifty, but you may use a much larger value if desired.
The import and export commands are essentially server launches that exit before bringing up the full server.
It is recommended that all Red Hat build of Keycloak nodes are stopped prior to using the kc.[sh|bat] export command.
It is required that all Red Hat build of Keycloak nodes are stopped prior to performing a kc.[sh|bat] import.
19.1.1. Providing options for database connection parameters
When using the export and import commands, Red Hat build of Keycloak needs to know how to connect to the database where the realm data is stored. You can provide the database connection options on the command line; see --help for the available options.
Some of the configuration options are build time configuration options. By default, Red Hat build of Keycloak re-builds automatically for the export and import commands if it detects a change of a build time option.
If you have built an optimized version of Red Hat build of Keycloak with the build command, use the --optimized parameter to prevent the server from performing an unnecessary re-build. If you do not use --optimized, the build time options must be provided to the import and export commands again.
19.1.2. Exporting a Realm to a Directory
To export a realm, you can use the export command. To see the available options, enter the following command:
bin/kc.[sh|bat] export --help
To export a realm to a directory, you can use the --dir <dir> option:
bin/kc.[sh|bat] export --dir <dir>
When exporting realms to a directory, the server is going to create separate files for each realm being exported.
19.1.2.1. Configuring how users are exported
You are also able to configure how users are going to be exported by setting the --users <strategy> option. The possible strategies are:
- different_files: Users are exported into different JSON files, depending on the maximum number of users per file set by --users-per-file. This is the default value.
- skip: Skips exporting users.
- realm_file: Users will be exported to the same file as the realm settings. For a realm named "foo", this would be "foo-realm.json" with realm data and users.
- same_file: All users are exported to one explicit file. So you will get two JSON files for a realm, one with realm data and one with users.
If you are exporting users using the different_files strategy, you can set how many users per file you want by using the --users-per-file option. The default value is 50.
bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100
19.1.3. Exporting a Realm to a File
To export a realm to a file, you can use the --file <file> option:
bin/kc.[sh|bat] export --file <file>
When exporting realms to a file, the server is going to use the same file to store the configuration for all the realms being exported.
19.1.4. Exporting a specific realm
If you do not specify a specific realm to export, all realms are exported. To export a single realm, you can use the --realm option:
bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm
19.1.5. Import File Naming Conventions
When you export a realm, specific file name conventions are used, and they must also be used when importing from a directory or importing at startup. The realm file to be imported must be named <realm-name>-realm.json. Regular and federated user files associated with a realm must be named <realm-name>-users-<file-number>.json and <realm-name>-federated-users-<file-number>.json. Failure to use this convention results in errors or user files not being imported.
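The convention can be captured in a small helper. The functions below are illustrative only, not part of the Keycloak tooling:

```shell
# Illustrative helpers (not part of Keycloak): build the expected file names
# for a realm export following the documented naming convention.
realm_file()           { printf '%s-realm.json' "$1"; }
users_file()           { printf '%s-users-%s.json' "$1" "$2"; }
federated_users_file() { printf '%s-federated-users-%s.json' "$1" "$2"; }

realm_file my-realm       # my-realm-realm.json
users_file my-realm 0     # my-realm-users-0.json
```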
19.1.6. Importing a Realm from a Directory
To import a realm, you can use the import command. To see the available options, enter the following command:
After exporting a realm to a directory, you can use the --dir <dir> option to import the realm back into the server:
bin/kc.[sh|bat] import --dir <dir>
When importing realms using the import command, you can use the --override option to control whether realms that already exist in the server should be skipped or overridden:
bin/kc.[sh|bat] import --dir <dir> --override false
By default, the --override option is set to true.
19.1.7. Importing a Realm from a File
To import a realm previously exported in a single file, you can use the --file <file> option:
bin/kc.[sh|bat] import --file <file>
19.1.8. Using Environment Variables within the Realm Configuration Files
You are able to use placeholders to resolve values from environment variables for any realm configuration.
Realm configuration using placeholders
{
"realm": "${MY_REALM_NAME}",
"enabled": true,
...
}
In the example above, the value set to the MY_REALM_NAME environment variable is used to set the realm property.
There are currently no restrictions on which environment variables may be referenced. When environment variables are used to convey sensitive information, take care to ensure that placeholder references do not inappropriately expose sensitive environment variable values.
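A minimal sketch of providing the value at import time; the variable name matches the placeholder in the example above, and the file path is illustrative:

```shell
# Sketch: the placeholder ${MY_REALM_NAME} in the realm JSON is resolved
# from the environment when the file is imported.
export MY_REALM_NAME=my-realm
bin/kc.sh import --file /path/to/realm.json
```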
19.2. Importing a Realm during Startup
You are also able to import realms when the server is starting by using the --import-realm option:
bin/kc.[sh|bat] start --import-realm
When you set the --import-realm option, the server looks at the data/import directory for any .json file with a realm configuration to import.
For the Red Hat build of Keycloak containers, the import directory is /opt/keycloak/data/import.
If a realm already exists in the server, the import operation is skipped. The main reason behind this behavior is to avoid re-creating realms and potentially losing state between server restarts.
To re-create realms, you should explicitly run the import command.
The server will not fully start until the imports are complete.
19.3. Importing and Exporting by using the Admin Console
You can also import and export a realm using the Admin Console. This functionality is different from the other CLI options described in previous sections because the Admin Console requires the cluster to be online. The Admin Console also offers only the capability to partially export a realm. In this case, the current realm settings, along with some resources like clients, roles, and groups, can be exported. The users for that realm cannot be exported using this method.
When using the Admin Console export, the realm and the selected resources are always exported to a file named realm-export.json.
To export a realm using the Admin Console, perform these steps:
- Select a realm.
- Click Realm settings in the menu.
Point to the Action menu in the top right corner of the realm settings screen, and select Partial export.
A list of resources appears along with the realm configuration.
- Select the resources you want to export.
- Click Export.
Realms exported from the Admin Console are not suitable for backups or data transfer between servers. Only CLI exports are suitable for backups or data transfer between servers.
If the realm contains many groups, roles, and clients, the operation may cause the server to be unresponsive to user requests for a while. Use this feature with caution, especially on a production system.
In a similar way, you can import a previously exported realm. Perform these steps:
- Click Realm settings in the menu.
Point to the Action menu in the top right corner of the realm settings screen, and select Partial import.
A prompt appears where you can select the file you want to import. Based on this file, you see the resources you can import along with the realm settings.
- Click Import.
You can also control what Red Hat build of Keycloak should do if the imported resource already exists. These options exist:
- Fail import
- Abort the import.
- Skip
- Skip the duplicate resources without aborting the process
- Overwrite
- Replace the existing resources with the ones being imported.
The Admin Console partial import can also import files created by the CLI export command.
Chapter 20. Using a vault
Configure and use a vault in Red Hat build of Keycloak.
Red Hat build of Keycloak provides two out-of-the-box implementations of the Vault SPI: a plain-text file-based vault and a Java KeyStore-based vault.
The file-based vault implementation is especially useful for Kubernetes/OpenShift secrets. You can mount Kubernetes secrets into the Red Hat build of Keycloak Container, and the data fields will be available in the mounted folder with a flat-file structure.
The Java KeyStore-based vault implementation is useful for storing secrets in bare metal installations. You can use the KeyStore vault, which is encrypted using a password.
20.1. Available integrations
Secrets stored in the vaults can be used at the following places of the Administration Console:
- Obtain the SMTP Mail server Password
- Obtain the LDAP Bind Credential when using LDAP-based User Federation
- Obtain the OIDC identity providers Client Secret when integrating external identity providers
20.2. Enabling a vault
To enable the file-based vault, you first need to build Red Hat build of Keycloak using the following build option:
bin/kc.[sh|bat] build --vault=file
Analogously, for the Java KeyStore-based vault, you need to specify the following build option:
bin/kc.[sh|bat] build --vault=keystore
20.3. Configuring the file-based vault
20.3.1. Setting the base directory to lookup secrets
Kubernetes/OpenShift secrets are basically mounted files. To configure a directory where these files should be mounted, enter this command:
bin/kc.[sh|bat] start --vault-dir=/my/path
20.3.2. Realm-specific secret files
Kubernetes/OpenShift Secrets are used on a per-realm basis in Red Hat build of Keycloak, which requires the following naming convention for the mounted files:
${vault.<realmname>_<secretname>}
20.4. Configuring the Java KeyStore-based vault
In order to use the Java KeyStore-based vault, you need to create a KeyStore file first. You can use the following command to do so:
keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword
and then enter the value you want to store in the vault. Note that the format of the -alias option depends on the key resolver being used. The default key resolver is REALM_UNDERSCORE_KEY.
By default, this results in storing the value in the form of a generic PBEKey (password based encryption) within a SecretKeyEntry.
You can then start Red Hat build of Keycloak using the following runtime options:
bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>
Note that the --vault-type option is optional and defaults to PKCS12.
Secrets stored in the vault can then be accessed in a realm via the following placeholder (assuming the REALM_UNDERSCORE_KEY resolver is used): ${vault.realm-name_alias}
20.5. Using underscores in the secret names
To process the secret correctly when the REALM_UNDERSCORE_KEY resolver is used, you double all underscores in the <realmname> and the <secretname>. The two parts are then joined by a single underscore.
Example

- Realm name: sso_realm
- Desired name: ldap_credential
- Resulting file name: sso__realm_ldap__credential
Note the doubled underscores between sso and realm and also between ldap and credential.
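The doubling rule can be expressed as a small helper. This is an illustration of the convention only, not part of Keycloak:

```shell
# Illustrative helper (not part of Keycloak): derive the secret file name
# for the REALM_UNDERSCORE_KEY resolver by doubling underscores in both
# the realm name and the secret name, then joining with a single underscore.
vault_file_name() {
  local realm="${1//_/__}"   # double every underscore in the realm name
  local key="${2//_/__}"     # double every underscore in the secret name
  printf '%s_%s' "$realm" "$key"
}

vault_file_name sso_realm ldap_credential   # sso__realm_ldap__credential
```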
To learn more about key resolvers, see Key resolvers section in the Server Administration guide.
20.6. Example: Use an LDAP bind credential secret in the Admin Console
Example setup

- A realm named secrettest
- A desired name for the bind credential: ldapBc
- Resulting file name: secrettest_ldapBc
Usage in Admin Console
You can then use this secret from the Admin Console by entering ${vault.ldapBc} in the Bind Credential field when configuring your LDAP provider.
20.7. Relevant options
| Value | |
|---|---|
| 🛠
|
|
|
| |
|
| |
|
| |
|
| (default) |
Chapter 21. All configuration
Review build options and configuration for Red Hat build of Keycloak.
21.1. Cache
| Value | |
|---|---|
|
CLI:
Env:
|
|
|
CLI:
Env:
| |
|
CLI:
Env:
|
|
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when embedded Infinispan clusters configured | |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when a TCP based cache-stack is used |
|
|
CLI:
Env:
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
CLI:
Env:
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
CLI:
Env:
Available only when property 'cache-embedded-mtls-enabled' is enabled | (default) |
|
CLI:
Env:
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
CLI:
Env:
Available only when property 'cache-embedded-mtls-enabled' is enabled | |
|
CLI:
Env:
Available only when Infinispan clustered embedded is enabled | |
|
CLI:
Env:
Available only when Infinispan clustered embedded is enabled | |
|
CLI:
Env:
Available only when Infinispan clustered embedded is enabled | |
|
CLI:
Env:
Available only when Infinispan clustered embedded is enabled | |
|
CLI:
Env:
Available only when embedded Infinispan clusters configured | |
|
CLI:
Env:
Available only when embedded Infinispan clusters configured | |
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when embedded Infinispan clusters configured | |
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when metrics are enabled |
|
|
CLI:
Env:
Available only when remote host is set | |
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when remote host is set | |
|
CLI:
Env:
Available only when remote host is set | (default) |
|
CLI:
Env:
Available only when remote host is set |
|
|
CLI:
Env:
Available only when remote host is set | |
|
CLI:
Env:
Available only when 'cache' type is set to 'ispn'
Use 'jdbc-ping' instead by leaving it unset Deprecated values: |
|
21.2. Config Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
21.3. Database Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
Named key: 🛠 CLI:
Env:
|
|
|
Named key: CLI:
Env:
|
|
| 🛠
Named key: 🛠 CLI:
Env:
| |
|
Named key: CLI:
Env:
| (default) |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
CLI:
Env:
| |
|
Named key: CLI:
Env:
| (default) |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
| |
|
Named key: CLI:
Env:
|
21.4. Database - additional datasources Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
|
|
| 🛠
CLI:
Env:
| |
|
CLI:
Env:
|
|
| 🛠
CLI:
Env:
|
|
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
|
21.5. Transaction Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
Named key: 🛠 CLI:
Env:
|
|
| 🛠
CLI:
Env:
|
|
21.6. Feature Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
|
|
| 🛠
CLI:
Env:
|
|
21.7. Hostname v2 Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
Available only when hostname:v2 feature is enabled | |
|
CLI:
Env:
Available only when hostname:v2 feature is enabled | |
|
CLI:
Env:
Available only when hostname:v2 feature is enabled |
|
|
CLI:
Env:
Available only when hostname:v2 feature is enabled |
|
|
CLI:
Env:
Available only when hostname:v2 feature is enabled |
|
21.8. HTTP(S) Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
DEPRECATED. |
|
|
CLI:
Env:
|
|
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
Available only when metrics are enabled |
|
|
CLI:
Env:
Available only when metrics are enabled | |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
| 🛠
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
| 🛠
CLI:
Env:
|
|
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
|
CLI:
Env:
|
|
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
|
21.9. HTTP Access log Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
|
|
|
CLI:
Env:
Available only when HTTP Access log is enabled | |
|
CLI:
Env:
Available only when HTTP Access log is enabled |
|
21.10. Health Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
|
|
21.11. Management Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
Available only when health is enabled |
|
|
CLI:
Env:
| (default) |
| 🛠
CLI:
Env:
| (default) |
|
CLI:
Env:
|
|
|
CLI:
Env:
Available only when http-management-scheme is inherited | |
|
CLI:
Env:
Available only when http-management-scheme is inherited | |
|
CLI:
Env:
Available only when http-management-scheme is inherited | (default) |
| 🛠
CLI:
Env:
|
|
|
CLI:
Env:
Available only when http-management-scheme is inherited | |
|
CLI:
Env:
Available only when http-management-scheme is inherited | (default) |
| 🛠
CLI:
Env:
DEPRECATED. |
|
21.12. Metrics Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
|
|
21.13. Proxy Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
|
|
|
CLI:
Env:
|
|
|
CLI:
Env:
|
21.14. Vault Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
|
|
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
21.15. Logging Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
|
|
|
CLI:
Env:
|
|
|
CLI:
Env:
Available only when Console log handler is activated |
|
|
CLI:
Env:
Available only when Console log handler is activated and asynchronous logging is enabled | (default) |
|
CLI:
Env:
Available only when Console log handler is activated |
|
|
CLI:
Env:
Available only when Console log handler is activated | (default) |
|
CLI:
Env:
Available only when Console log handler and MDC logging are activated |
|
|
CLI:
Env:
Available only when Console log handler and Tracing is activated |
|
|
CLI:
Env:
Available only when Console log handler is activated and output is set to 'json' |
|
|
CLI:
Env:
Available only when Console log handler is activated |
|
|
CLI:
Env:
Available only when Console log handler is activated |
|
|
CLI:
Env:
Available only when File log handler is activated | (default) |
|
CLI:
Env:
Available only when File log handler is activated |
|
|
CLI:
Env:
Available only when File log handler is activated and asynchronous logging is enabled | (default) |
|
CLI:
Env:
Available only when File log handler is activated | (default) |
|
CLI:
Env:
Available only when File log handler and MDC logging are activated |
|
|
CLI:
Env:
Available only when File log handler and Tracing is activated |
|
|
CLI:
Env:
Available only when File log handler is activated and output is set to 'json' |
|
|
CLI:
Env:
Available only when File log handler is activated |
|
|
CLI:
Env:
Available only when File log handler is activated |
|
|
CLI:
Env:
| (default) |
|
CLI:
Env:
|
|
| 🛠
CLI:
Env:
Available only when log-mdc preview feature is enabled |
|
|
CLI:
Env:
Available only when MDC logging is enabled |
|
|
CLI:
Env:
Available only when Syslog is activated | (default) |
|
CLI:
Env:
Available only when Syslog is activated |
|
|
CLI:
Env:
Available only when Syslog is activated and asynchronous logging is enabled | (default) |
|
CLI:
Env:
Available only when Syslog is activated |
|
|
CLI:
Env:
Available only when Syslog is activated | (default) |
|
CLI:
Env:
Available only when Syslog is activated | (default) |
|
CLI:
Env:
Available only when Syslog handler and MDC logging are activated |
|
|
CLI:
Env:
Available only when Syslog handler and Tracing is activated |
|
|
CLI:
Env:
Available only when Syslog is activated and output is set to 'json' |
|
|
CLI:
Env:
Available only when Syslog is activated |
|
|
CLI:
Env:
Available only when Syslog is activated | |
|
CLI:
Env:
Available only when Syslog is activated |
|
|
CLI:
Env:
Available only when Syslog is activated |
|
|
CLI:
Env:
Available only when Syslog is activated |
|
21.16. Tracing Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
Available only when Tracing is enabled |
|
| 🛠
CLI:
Env:
Available only when 'opentelemetry' feature is enabled |
|
|
CLI:
Env:
Available only when Tracing is enabled | (default) |
|
CLI:
Env:
Available only when tracing and embedded Infinispan is enabled |
|
| 🛠
CLI:
Env:
Available only when Tracing is enabled |
|
|
CLI:
Env:
Available only when Tracing is enabled |
|
|
CLI:
Env:
Available only when Tracing is enabled | |
|
CLI:
Env:
Available only when Tracing is enabled | (default) |
| 🛠
CLI:
Env:
Available only when Tracing is enabled |
|
|
CLI:
Env:
Available only when Tracing is enabled | (default) |
21.17. Events Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
Available only when metrics are enabled and feature user-event-metrics is enabled |
|
|
CLI:
Env:
Available only when user event metrics are enabled Use
remove_totp, update_totp, update_password
|
|
|
CLI:
Env:
Available only when user event metrics are enabled |
|
21.18. Truststore Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
STRICT and WILDCARD have been deprecated, use DEFAULT instead. Deprecated values: |
|
|
CLI:
Env:
|
21.19. Security Copy linkLink copied to clipboard!
| Value | |
|---|---|
| 🛠
CLI:
Env:
|
|
21.20. Export Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
|
|
|
CLI:
Env:
| (default) |
21.21. Import Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
|
|
21.22. Bootstrap Admin Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
CLI:
Env:
| (default) |
|
CLI:
Env:
| |
|
CLI:
Env:
| |
|
CLI:
Env:
| (default) |
Chapter 22. All provider configuration
Review provider configuration options.
22.1. authentication-sessions Copy linkLink copied to clipboard!
22.1.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.1.2. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
22.2. brute-force-protector Copy linkLink copied to clipboard!
22.2.1. default-brute-force-detector Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.3. cache-embedded Copy linkLink copied to clipboard!
22.3.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
|
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
|
| any
|
22.4. cache-remote Copy linkLink copied to clipboard!
22.4.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| any
|
|
|
|
|
|
|
|
| (default) or any
|
|
| any
|
|
| any
|
|
| (default) or any
|
|
| any
|
|
| (default) or any
|
|
|
|
|
| any
|
|
| any
|
22.5. ciba-auth-channel Copy linkLink copied to clipboard!
22.5.1. ciba-http-auth-channel Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
22.6. connections-http-client Copy linkLink copied to clipboard!
22.6.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| any
|
|
| any
|
|
| any
|
|
| (default) or any
|
|
|
|
|
|
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| any
|
|
|
|
|
| (default) or any
|
22.6.2. opentelemetry Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| any
|
|
| any
|
|
| any
|
|
| (default) or any
|
|
|
|
|
|
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| any
|
|
|
|
|
| (default) or any
|
22.7. connections-jpa Copy linkLink copied to clipboard!
22.7.1. quarkus Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| any
|
|
|
|
22.8. credential Copy linkLink copied to clipboard!
22.8.1. keycloak-password Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.9. crl-storage Copy linkLink copied to clipboard!
22.9.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
22.10. datastore Copy linkLink copied to clipboard!
22.10.1. legacy Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.11. dblock Copy linkLink copied to clipboard!
22.11.1. jpa Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
22.12. device-representation Copy linkLink copied to clipboard!
22.12.1. device-representation Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.13. events-listener Copy linkLink copied to clipboard!
22.13.1. email Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
|
|
22.13.2. jboss-logging Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
|
|
|
| (default) or any
|
|
|
|
|
|
|
22.14. export Copy linkLink copied to clipboard!
22.14.1. dir Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
|
| (default) or any
|
|
| (default) or any
|
22.14.2. single-file Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
22.15. group Copy linkLink copied to clipboard!
22.15.1. jpa Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| any
|
22.16. import Copy linkLink copied to clipboard!
22.16.1. dir Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
|
| any
|
22.16.2. single-file Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
|
| any
|
22.17. jgroups-mtls Copy linkLink copied to clipboard!
22.17.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| any
|
|
| any
|
|
| (default) or any
|
|
| any
|
|
| any
|
22.18. load-balancer-check Copy linkLink copied to clipboard!
22.18.1. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.19. login-protocol Copy linkLink copied to clipboard!
22.19.1. openid-connect Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| any
|
22.19.2. saml Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.20. login-failure Copy linkLink copied to clipboard!
22.20.1. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
22.21. mapped-diagnostic-context Copy linkLink copied to clipboard!
22.21.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.22. password-hashing Copy linkLink copied to clipboard!
22.22.1. argon2 Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
|
|
|
|
|
|
22.23. public-key-storage Copy linkLink copied to clipboard!
22.23.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
22.24. required-action Copy linkLink copied to clipboard!
22.24.1. CONFIGURE_RECOVERY_AUTHN_CODES Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
22.24.2. CONFIGURE_TOTP Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| (default) or any
|
22.24.3. TERMS_AND_CONDITIONS Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.4. UPDATE_EMAIL Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
|
|
|
22.24.5. UPDATE_PASSWORD Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.6. UPDATE_PROFILE Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.7. VERIFY_EMAIL Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
22.24.8. VERIFY_PROFILE Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.9. delete_credential Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.10. idp_link Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.11. update_user_locale Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.12. webauthn-register Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.24.13. webauthn-register-passwordless Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.25. resource-encoding Copy linkLink copied to clipboard!
22.25.1. gzip Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
22.26. security-profile Copy linkLink copied to clipboard!
22.26.1. default Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
22.27. single-use-object Copy linkLink copied to clipboard!
22.27.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.27.2. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.28. sticky-session-encoder Copy linkLink copied to clipboard!
22.28.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.28.2. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.29. storage Copy linkLink copied to clipboard!
22.29.1. ldap Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
22.30. truststore Copy linkLink copied to clipboard!
22.30.1. file Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
|
|
|
| any
|
|
| any
|
22.31. user-profile Copy linkLink copied to clipboard!
22.31.1. declarative-user-profile Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| any
|
|
| any
|
|
| any
|
22.32. user-sessions Copy linkLink copied to clipboard!
22.32.1. infinispan Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| any
|
|
| any
|
|
|
|
|
|
|
22.32.2. remote Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
| (default) or any
|
|
| (default) or any
|
|
| (default) or any
|
22.33. well-known Copy linkLink copied to clipboard!
22.33.1. oauth-authorization-server Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| any
|
22.33.2. openid-configuration Copy linkLink copied to clipboard!
| Value | |
|---|---|
|
|
|
|
| any
|
Chapter 23. Checking if rolling updates are possible
Execute the update compatibility command to check if Red Hat build of Keycloak supports a rolling update for a change in your deployment.
Use the update compatibility command to determine whether you can update your deployment with a rolling update strategy when enabling or disabling features, or when changing the Red Hat build of Keycloak version, configuration, providers, or themes. The outcome shows whether a rolling update is possible or whether a recreate update is required.
In its current version, the command reports that a rolling update is possible when the old and new Red Hat build of Keycloak versions are the same. Future versions of Red Hat build of Keycloak might change that behavior to use additional information from the configuration, the image, and the version to determine whether a rolling update is possible.
In the next iteration of this feature, it is also possible to use the rolling update strategy when updating to the next patch release of Red Hat build of Keycloak. Refer to Section 23.4, “Rolling updates for patch releases” for more details.
This is fully scriptable, so your update procedure can use that information to choose a rolling or recreate strategy depending on the change performed. It is also GitOps friendly: you can store the metadata of the previous configuration in a file and use this file in a CI/CD pipeline, together with the new configuration, to determine whether a rolling update is possible or a recreate update is needed.
If you are using the Red Hat build of Keycloak Operator, continue to the Avoiding downtime with rolling updates chapter and the
Auto
update strategy.
23.1. Supported update strategies
- Rolling Update
- In this guide, a rolling update is an update that can be performed with zero downtime for your deployment, which consists of at least two nodes. Update your Red Hat build of Keycloak nodes one by one: shut down one node running the old deployment and start a node with the new deployment. Wait until the new node’s start-up probe returns success before proceeding to the next Red Hat build of Keycloak node. See the chapter Tracking instance status with health checks for details on how to enable and use the start-up probe.
- Recreate Update
- A recreate update is not compatible with zero downtime and requires downtime to be applied: shut down all nodes of the cluster running the old version before starting the nodes with the new version.
23.2. Determining the update strategy for an updated configuration
To determine if a rolling update is possible:
- Run the update compatibility command to generate the required metadata with the old configuration.
- Check the metadata with the new configuration to determine the update strategy.
If you do not use the
--optimized
option, the
update
compatibility commands implicitly trigger a build of the configuration first, which can make them take longer to complete.
Consumers of these commands should not rely on the internal behavior or the structure of the metadata file. Instead, rely only on the exit code of the
check
command.
23.2.1. Generating the Metadata
To generate the metadata, execute the following command using the same Red Hat build of Keycloak version and configuration options:
Generate and save the metadata from the current deployment.
bin/kc.[sh|bat] update-compatibility metadata --file=/path/to/file.json
This command accepts all the configuration options used by the
start
command. The
--file
option specifies where the metadata is stored so that it can later be consumed by the
check
command.
Ensure that all configuration options, whether set via environment variables or CLI arguments, are included when running the above command.
Omitting any configuration option results in incomplete metadata and could lead to a wrong result being reported in the next step.
23.2.2. Checking the Metadata
This command checks the metadata generated by the previous command and compares it with the current configuration and Red Hat build of Keycloak version. If you are updating to a new Red Hat build of Keycloak version, this command must be executed with the new version.
Check the metadata from a previous deployment.
bin/kc.[sh|bat] update-compatibility check --file=/path/to/file.json
- Ensure that all configuration options, whether set via environment variables or CLI arguments, are included when running this command.
- Verify that the correct Red Hat build of Keycloak version is used.
Failure to meet these requirements results in an incorrect outcome.
The command prints the result to the console. For example, if a rolling update is possible, it displays:
Rolling Update possible message
[OK] Rolling Update is available.
If no rolling update is possible, the command provides details about the incompatibility:
Rolling Update not possible message
[keycloak] Rolling Update is not available. 'keycloak.version' is incompatible: 26.2.0 -> 26.2.1
In this example, the Keycloak version 26.2.0 is not compatible with version 26.2.1, and a rolling update is not possible.
Command exit code
Use the command’s exit code to determine the update type in your automation pipeline:
| Exit Code | Description |
|---|---|
|
| Rolling Update is possible. |
|
| Unexpected error occurred (such as the metadata file is missing or corrupted). |
|
| Invalid CLI option. |
|
| Rolling Update is not possible. The deployment must be shut down before applying the new configuration. |
|
| Rolling Update is not possible. The rolling-updates:v2 feature is disabled. |
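In an automation pipeline, the exit code can drive the choice of strategy. The sketch below assumes, as is conventional, that exit code 0 corresponds to "Rolling Update is possible" and conservatively treats every other code as requiring a recreate update (a real pipeline should abort on the error and invalid-option codes instead); the function name and the stubbed commands are illustrative only:

```shell
# Hypothetical wrapper: run the given check command and map its exit
# code to an update strategy. In a real pipeline, invoke it as:
#   decide_strategy bin/kc.sh update-compatibility check --file=meta.json
decide_strategy() {
  "$@"
  case $? in
    0) echo "rolling" ;;   # rolling update is possible
    *) echo "recreate" ;;  # treat any other code as requiring recreate
  esac
}

decide_strategy true   # stub for a successful check → prints "rolling"
decide_strategy false  # stub for a failed check → prints "recreate"
```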
23.3. Rolling incompatible changes
The following configuration changes return a "Rolling Update is not possible" result code.
23.3.1. Features
23.3.1.1. Recreate always
The enabling or disabling of the following features requires a recreate update:
| Feature | Description |
|---|---|
| multi-site:v1 | Multi-site support |
| persistent-user-sessions:v1 | Persistent online user sessions across restarts and upgrades |
23.3.1.2. Recreate on feature version change
Changing the version of the following features triggers a recreate update:
| Feature | Description |
|---|---|
| login:v1 | Legacy Login Theme |
| login:v2 | New Login Theme |
| passkeys-conditional-ui-authenticator:v1 | Passkeys conditional UI authenticator |
23.3.2. Configuration options
Changing the value of one of the following CLI options triggers a recreate update:
| Option | Rationale |
|---|---|
|
| The
|
|
| Changing the configuration file could result in incompatible cache or transport configurations, resulting in clusters not forming as expected. |
|
| Changing the stack will result in the cluster not forming during a rolling update and will lead to data loss. |
|
| Enabling or disabling TLS will result in the cluster not forming during a rolling update and will lead to data loss. |
|
| Connecting to a new remote cache will cause previously cached data to be lost. |
|
| Connecting to a new remote cache will cause previously cached data to be lost. |
Red Hat build of Keycloak does not verify changes to the content of the cache configuration file provided via the
--cache-config-file
option.
| Option | Rationale |
|---|---|
|
| Migration to a new database vendor should be applied to all cluster members to ensure data consistency. |
|
| Migration to a new database schema should be applied to all cluster members to ensure data consistency. |
|
| Migration to a new database name should be applied to all cluster members to ensure data consistency. |
|
| All cluster members should be connecting to the same database to ensure data consistency. |
|
| All cluster members should be connecting to the same database to ensure data consistency. |
Red Hat build of Keycloak allows changes to the
--db-url
option.
23.4. Rolling updates for patch releases
This behavior is currently in preview mode, and it is not recommended for use in production.
It is possible to configure the Red Hat build of Keycloak compatibility command to allow rolling updates when upgrading to a newer patch version in the same
major.minor
release stream.
To enable this behavior for the compatibility check command, enable the
rolling-updates:v2
feature:
bin/kc.[sh|bat] update-compatibility check --file=/path/to/file.json --features=rolling-updates:v2
Note that no change is needed when generating the metadata with the
metadata
command.
Recommended Configuration:
- Enable sticky sessions in your load balancer to avoid users bouncing between different versions of Red Hat build of Keycloak, as this could result in users needing to refresh their Account Console and Admin UI multiple times while the upgrade is progressing.
Supported functionality during rolling updates:
- Users can log in and log out for OpenID Connect clients.
- OpenID Connect clients can perform all operations, for example, refreshing tokens and querying the user info endpoint.
Known limitations:
- If the Account Console or Admin UI changed in the patch release, and a user opened the Account Console or Admin UI before or during the upgrade, the user might see an error message and be asked to reload the application while navigating in the browser during or after the upgrade.
- If the two patch releases of Red Hat build of Keycloak use different versions of the embedded Infinispan, no rolling update of Red Hat build of Keycloak can be performed.
23.5. Further reading
The Red Hat build of Keycloak Operator uses the functionality described above to determine whether a rolling update is possible. See the Avoiding downtime with rolling updates chapter and the
Auto
update strategy.
23.6. Relevant options
| Value | |
|---|---|
| 🛠
|
|
| 🛠
|
|