Data Grid Server Guide
Deploy, secure, and manage Data Grid Server deployments
Abstract
Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure: flexibility to store different objects as key-value pairs.
- Grid-based data storage: designed to distribute and replicate data across clusters.
- Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: store, retrieve, and query data in the grid from different endpoints.
Data Grid documentation
Documentation for Data Grid is available on the Red Hat customer portal.
Data Grid downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Getting started with Data Grid Server
Install the server distribution, create a user, and start your first Data Grid cluster.
Ansible collection
Automate installation of Data Grid clusters with our Ansible collection that optionally includes Keycloak caches and cross-site replication configuration. The Ansible collection also lets you inject Data Grid caches into the static configuration for each server instance during installation.
The Ansible collection for Data Grid is available from the Red Hat Automation Hub.
1.1. Data Grid Server requirements
Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions.
1.2. Downloading Data Grid Server distributions
The Data Grid Server distribution is an archive of Java libraries (JAR files) and configuration files.
Procedure
- Access the Red Hat customer portal.
- Download Red Hat Data Grid 8.5 Server from the software downloads section.
- Run the md5sum or sha256sum command with the server download archive as the argument, for example:

  sha256sum jboss-datagrid-${version}-server.zip

- Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page.
Reference
- Data Grid Server README describes the contents of the server distribution.
1.3. Installing Data Grid Server
Install the Data Grid Server distribution on a host system.
Prerequisites
- Download a Data Grid Server distribution archive.
Procedure
- Use any appropriate tool to extract the Data Grid Server archive to the host filesystem.

  unzip redhat-datagrid-8.5.2-server.zip

The resulting directory is your $RHDG_HOME.
1.4. JVM settings for Data Grid
You can define Java Virtual Machine (JVM) settings for Data Grid either by editing the server.conf configuration file, or by setting the JAVA_OPTS environment variable.
If you are running Data Grid in a container, do not set -Xmx or -Xms because the values are automatically calculated from the container settings to be 50% of the container size.
Editing the configuration file
You can edit the required values in the server.conf configuration file. For example, to set the options to pass to the JVM, edit the following lines:

JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=64M -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"
You can uncomment the existing example settings as well. For example, to configure Java Platform Debugger Architecture (JPDA) settings for remote socket debugging, update the file as follows:
Sample JPDA settings for remote socket debugging
# Sample JPDA settings for remote socket debugging
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"
Additionally, you can add more settings to JAVA_OPTS like this, with each option separated by a space:

JAVA_OPTS="$JAVA_OPTS <key_1>=<value_1> ... <key_N>=<value_N>"
Setting an environment variable
You can override the settings in the server.conf configuration file by setting the JAVA_OPTS environment variable. For example:

Linux

export JAVA_OPTS="-Xmx1024M"

Microsoft Windows

set JAVA_OPTS="-Xmx1024M"
1.5. Starting Data Grid Server
Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host.
Prerequisites
- Download and install the server distribution.
Procedure
- Open a terminal in $RHDG_HOME.
- Start Data Grid Server instances with the server script.

  Linux:

  bin/server.sh

  Microsoft Windows:

  bin\server.bat
Data Grid Server is running successfully when it logs the following messages:
ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222
ISPN080034: Server '...' listening on http://127.0.0.1:11222
ISPN080001: Data Grid Server <version> started in <mm>ms
Verification
- Open 127.0.0.1:11222/console/ in any browser.
- Enter your credentials at the prompt and continue to Data Grid Console.
1.6. Passing Data Grid Server configuration at startup
Specify custom configuration when you start Data Grid Server.
Data Grid Server can parse multiple configuration files that you overlay on startup with the --server-config argument. You can use as many configuration overlay files as required, in any order. Configuration overlay files:
- Must be valid Data Grid configuration and contain the root server element or field.
- Do not need to be full configuration as long as your combination of overlay files results in a full configuration.
Data Grid Server does not detect conflicting configuration between overlay files. Each overlay file overwrites any conflicting configuration in the preceding configuration.
If you pass cache configuration to Data Grid Server on startup it does not dynamically create those caches across the cluster. You must manually propagate caches to each node.

Additionally, cache configuration that you pass to Data Grid Server on startup must include the infinispan and cache-container elements, as in the sketch below.
Prerequisites
- Download and install the server distribution.
- Add custom server configuration to the server/conf directory of your Data Grid Server installation.
Procedure
- Open a terminal in $RHDG_HOME.
- Specify one or more configuration files with the --server-config= or -c argument, for example:

  bin/server.sh -c infinispan.xml -c datasources.yaml -c security-realms.json
1.7. Creating Data Grid users
Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. Before you can access the Data Grid Console or perform cache operations you must create at least one user with the Data Grid command line interface (CLI).
Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment.
Prerequisites
- Download and install Data Grid Server.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user with the user create command.

  bin/cli.sh user create admin -p changeme

  Tip: Run help user from a CLI session to get complete command details.
Verification
Open users.properties and confirm the user exists.

cat server/conf/users.properties
admin=scram-sha-1\:BYGcIAwvf6b...
Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster.
1.7.1. Granting roles to users
Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources.
Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions.
Prerequisites
- Have ADMIN permissions for Data Grid.
- Create Data Grid users.
Procedure
- Create a CLI connection to Data Grid.
- Assign roles to users with the user roles grant command, for example:

  user roles grant --roles=deployer katie
Verification
List roles that you grant to users with the user roles ls command.

user roles ls katie
["deployer"]
1.7.2. Adding users to groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
You use groups as part of a property realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password.
Prerequisites
- Have ADMIN permissions for Data Grid.
- Create Data Grid users.
Procedure
- Create a CLI connection to Data Grid.
- Use the user create command to create a group.
  - Specify a group name with the --groups argument.
  - Set a username and password for the group.

  user create --groups=developers developers -p changeme

- List groups.

  user ls --groups

- Grant a role to the group.

  user roles grant --roles=application developers

- List roles for the group.

  user roles ls developers

- Add users to the group one at a time.

  user groups john --groups=developers
Verification
Open groups.properties and confirm the group exists.

cat server/conf/groups.properties
1.7.3. Data Grid user roles and permissions
Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources.
Role | Permissions | Description |
---|---|---|
admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. |
observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
monitor | MONITOR | Can view statistics via JMX and the metrics endpoint. |
1.8. Verifying cluster views
Data Grid Server instances on the same network automatically discover each other and form clusters.
Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters.
This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production.
Prerequisites
Have one instance of Data Grid Server running.
Procedure
- Open a terminal in $RHDG_HOME.
- Copy the root directory to server2.

  cp -r server server2

- Specify a port offset and the server2 directory.

  bin/server.sh -o 100 -s server2
Verification
You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership.

Data Grid also logs messages when nodes join clusters, for example:
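The exact output varies; a representative view-change message, with the bracketed hostnames as placeholders, looks like this:

ISPN000094: Received new cluster view for channel cluster: [<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>]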
1.9. Shutting down Data Grid Server
Stop individually running servers or bring down clusters gracefully.
Procedure
- Create a CLI connection to Data Grid.
- Shut down Data Grid Server in one of the following ways:
  - Stop all nodes in a cluster with the shutdown cluster command, for example:

    shutdown cluster

    This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache.

  - Stop individual server instances with the shutdown server command and the server hostname, for example:

    shutdown server <my_server01>
The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time.

Run help shutdown for more details about using the command.
Verification
Data Grid logs the following messages when you shut down servers:
ISPN080002: Data Grid Server stopping
ISPN000080: Disconnecting JGroups channel cluster
ISPN000390: Persisted state, version=<$version> timestamp=YYYY-MM-DDTHH:MM:SS
ISPN080003: Data Grid Server stopped
1.9.1. Shutdown and restart of Data Grid clusters
Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes.
Cluster shutdown
Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can also use the shutdown cluster command for clusters with a single node.

When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores from purging on startup.
During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster have the view from the time they join. You can monitor availability of caches using the Data Grid Console or REST API.
However, in cases where waiting for all nodes is neither necessary nor desired, you can set a cache available with the current topology through the CLI or the REST API.

Manually installing a topology can lead to data loss; only perform this operation if the initial topology cannot be recreated.
Server shutdown
After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries.

If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache. For example, if you shut down server1 and then shut down server2, you should first start server2 and then start server1. However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries.
1.10. Data Grid Server installation directory structure
Data Grid Server uses the following folders on the host filesystem under $RHDG_HOME:
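A typical distribution contains folders along these lines (the exact listing can vary between versions):

├── bin
├── boot
├── docs
├── lib
├── server
└── static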
See the Data Grid Server README for descriptions of each folder in your $RHDG_HOME directory as well as system properties you can use to customize the filesystem.
1.10.1. Server root directory
Apart from resources in the bin and docs folders, the only folder under $RHDG_HOME that you should interact with is the server root directory, which is named server by default.

You can create multiple nodes under the same $RHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem:
├── server
├── server1
├── server2
├── server3
└── server4
Each server root directory should contain the following folders:
├── server
│ ├── conf
│ ├── data
│ ├── lib
│ └── log
server/conf
Holds infinispan.xml configuration files for a Data Grid Server instance.
Data Grid separates configuration into two layers:
- Dynamic: Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur.
- Static: Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources.
server/data
Provides internal storage that Data Grid Server uses to maintain cluster state.

Never directly delete or modify content in server/data. Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown.
server/lib
Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on.
server/log
Holds Data Grid Server log files.
Chapter 2. Network interfaces and socket bindings
Expose Data Grid Server through a network interface by binding it to an IP address. You can then configure endpoints to use the interface so Data Grid Server can handle requests from remote client applications.
2.1. Network interfaces
Data Grid Server multiplexes endpoints to a single TCP/IP port and automatically detects protocols of inbound client requests. You can configure how Data Grid Server binds to network interfaces to listen for client requests.
Internet Protocol (IP) address
YAML

server:
  interfaces:
    - name: "public"
      inetAddress:
        value: "127.0.0.1"
Loopback address
YAML

server:
  interfaces:
    - name: "public"
      loopback: ~
Non-loopback address
YAML

server:
  interfaces:
    - name: "public"
      nonLoopback: ~
Any address
YAML

server:
  interfaces:
    - name: "public"
      anyAddress: ~
Link local
YAML

server:
  interfaces:
    - name: "public"
      linkLocal: ~
Site local
YAML

server:
  interfaces:
    - name: "public"
      siteLocal: ~
2.1.1. Match and fallback strategies
Data Grid Server can enumerate all network interfaces on the host system and bind to an interface, host, or IP address that matches a value, which can include regular expressions for additional flexibility.
Match host
YAML

server:
  interfaces:
    - name: "public"
      matchHost:
        value: "my_host_name"
Match interface
YAML

server:
  interfaces:
    - name: "public"
      matchInterface:
        value: "eth0"
Match address
YAML

server:
  interfaces:
    - name: "public"
      matchAddress:
        value: "127\\..*"
Fallback
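As an illustrative XML sketch, a fallback configuration lists several strategies on one interface and Data Grid Server uses the first one that matches; the element names here are assumed to follow the preceding examples:

<server xmlns="urn:infinispan:server:15.0">
  <interfaces>
    <interface name="public">
      <match-interface value="eth0"/>
      <match-address value="192\..*"/>
      <any-address/>
    </interface>
  </interfaces>
</server>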
2.2. Socket bindings
Socket bindings map endpoint connectors to network interfaces and ports. By default, Data Grid Server includes a socket binding configuration that listens on the localhost interface, 127.0.0.1, at port 11222 for the REST and Hot Rod endpoints. If you enable the Memcached endpoint, the default socket bindings configure Data Grid Server to bind to port 11221.
Default socket bindings
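An XML sketch of the default declaration, with element and attribute names assumed to match the table below:

<server xmlns="urn:infinispan:server:15.0">
  <socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}">
    <socket-binding name="default" port="${infinispan.bind.port:11222}"/>
    <socket-binding name="memcached" port="11221"/>
  </socket-bindings>
</server>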
Configuration element or attribute | Description |
---|---|
socket-bindings | Root element that contains all network interfaces and ports to which Data Grid Server endpoints can bind and listen for client connections. |
default-interface | Declares the network interface that Data Grid Server listens on by default. |
port-offset | Specifies the offset that Data Grid Server applies to port declarations for socket bindings. |
socket-binding | Configures Data Grid Server to bind to a port on a network interface. |
Custom socket binding declarations
The following example configuration adds an interface declaration named "private" and a socket-binding declaration that binds Data Grid Server to the private IP address:
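An XML sketch, with the IP address and port as illustrative values:

<server xmlns="urn:infinispan:server:15.0">
  <interfaces>
    <interface name="private">
      <inet-address value="10.1.2.3"/>
    </interface>
  </interfaces>
  <socket-bindings default-interface="public" port-offset="0">
    <socket-binding name="private_binding" interface="private" port="49152"/>
  </socket-bindings>
</server>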
2.3. Changing the bind address for Data Grid Server
Data Grid Server binds to a network IP address to listen for inbound client connections on the Hot Rod and REST endpoints. You can specify that IP address directly in your Data Grid Server configuration or when starting server instances.
Prerequisites
- Have at least one Data Grid Server installation.
Procedure
Specify the IP address to which Data Grid Server binds in one of the following ways:
- Open your Data Grid Server configuration and set the value for the inet-address element, for example:
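An XML sketch, assuming the bind interface is named "public" as in the earlier examples:

<server xmlns="urn:infinispan:server:15.0">
  <interfaces>
    <interface name="public">
      <inet-address value="192.0.2.0"/>
    </interface>
  </interfaces>
</server>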
- Use the -b option or the infinispan.bind.address system property.

  Linux:

  bin/server.sh -b 192.0.2.0

  Windows:

  bin\server.bat -b 192.0.2.0
2.3.1. Listening on all addresses
If you specify the 0.0.0.0 meta-address, or INADDR_ANY, as the bind address in your Data Grid Server configuration, Data Grid Server listens for incoming client connections on all available network interfaces.
Client intelligence
Configuring Data Grid to listen on all addresses affects how it provides Hot Rod clients with cluster topology. If there are multiple interfaces to which Data Grid Server binds, then it sends a list of IP addresses for each interface.
For example, a cluster where each server node binds to:

- 10.0.0.0/8 subnet
- 192.168.0.0/16 subnet
- 127.0.0.1 loopback
Hot Rod clients receive IP addresses for server nodes that belong to the interface through which the clients connect. If a client connects to 192.168.0.0, for example, it does not receive any cluster topology details for nodes that listen on 10.0.0.0.
Netmask override
Kubernetes, and some other environments, divide the IP address space into subnets and use those different subnets as a single network. For example, 10.129.2.100/23 and 10.129.4.100/23 are in different subnets but belong to the 10.0.0.0/8 network.
For this reason, Data Grid Server overrides netmasks that the host system provides with netmasks that follow IANA conventions for private and reserved networks:
- IPv4: 10.0.0.0/8, 100.64.0.0/10, 192.168.0.0/16, 172.16.0.0/12, 169.254.0.0/16 and 240.0.0.0/4
- IPv6: fc00::/7 and fe80::/10
See RFC 1112, RFC 1918, RFC 3927, RFC 6598 for IPv4 or RFC 4193, RFC 3513 for IPv6.
You can optionally configure the Hot Rod connector to use the netmask that the host system provides for interfaces with the network-prefix-override attribute in your Data Grid Server configuration.
2.4. Data Grid Server ports and protocols
Data Grid Server provides network endpoints that allow client access with different protocols.
Port | Protocol | Description |
---|---|---|
11222 | TCP | Hot Rod and REST |
11221 | TCP | Memcached (disabled by default) |
Single port
Data Grid Server exposes multiple protocols through a single TCP port, 11222. Handling multiple protocols with a single port simplifies configuration and reduces management complexity when deploying Data Grid clusters. Using a single port also enhances security by minimizing the attack surface on the network.
Data Grid Server handles HTTP/1.1, HTTP/2, and Hot Rod protocol requests from clients via the single port in different ways.
HTTP/1.1 upgrade headers
Client requests can include the HTTP/1.1 upgrade header field to initiate HTTP/1.1 connections with Data Grid Server. Client applications can then send the Upgrade: protocol header field, where protocol is a server endpoint.
Application-Layer Protocol Negotiation (ALPN)/Transport Layer Security (TLS)
Client requests include Server Name Indication (SNI) mappings for Data Grid Server endpoints to negotiate protocols over a TLS connection.
Automatic Hot Rod detection
Client requests that include Hot Rod headers automatically route to Hot Rod endpoints.
2.4.1. Configuring network firewalls for Data Grid traffic
Adjust firewall rules to allow traffic between Data Grid Server and client applications.
Procedure
On Red Hat Enterprise Linux (RHEL) workstations, for example, you can allow traffic to port 11222 with firewalld as follows:

# firewall-cmd --add-port=11222/tcp --permanent
success
# firewall-cmd --list-ports | grep 11222
11222/tcp
To configure firewall rules that apply across a network, you can use the nftables utility.
2.5. Specifying port offsets
Configure port offsets for multiple Data Grid Server instances on the same host. The default port offset is 0.
Procedure
Use the -o switch with the Data Grid CLI or the infinispan.socket.binding.port-offset system property to set port offsets.

For example, start a server instance with an offset of 100 as follows. With the default configuration, this results in the Data Grid server listening on port 11322 (11222 + 100).

- Linux

  bin/server.sh -o 100

- Windows

  bin\server.bat -o 100
Chapter 3. Data Grid Server endpoints
Data Grid Server endpoints provide client access to the cache manager over Hot Rod and REST protocols.
3.1. Data Grid Server endpoints
3.1.1. Hot Rod
Hot Rod is a binary TCP client-server protocol designed to provide faster data access and improved performance in comparison to text-based protocols.
Data Grid provides Hot Rod client libraries in Java, C++, C#, Node.js and other programming languages.
Topology caches
Data Grid uses topology caches to provide clients with cluster views. Topology caches contain entries that map internal JGroups transport addresses to exposed Hot Rod endpoints.
When clients send requests, Data Grid servers compare the topology ID in request headers with the topology ID from the cache. Data Grid servers send new topology views if clients have older topology IDs.
Cluster topology views allow Hot Rod clients to immediately detect when nodes join and leave, which enables dynamic load balancing and failover.
In distributed cache modes, the consistent hashing algorithm also makes it possible to route Hot Rod client requests directly to primary owners.
3.1.2. REST
Data Grid exposes a RESTful interface that allows HTTP clients to access data, monitor and maintain clusters, and perform administrative operations.
You can use standard HTTP load balancers to provide clients with load balancing and failover capabilities. However, HTTP load balancers maintain static cluster views and require manual updates when cluster topology changes occur.
3.1.3. RESP
Data Grid provides an implementation of the RESP3 protocol.
The RESP connector supports a subset of the Redis commands.
3.1.4. Memcached
Data Grid provides an implementation of the Memcached text and binary protocols for remote client access.
The Data Grid Memcached endpoint supports clustering with replicated and distributed cache modes.
Some Memcached client implementations, such as the Cache::Memcached Perl client, offer load balancing and failover detection based on static lists of Data Grid server addresses, which require manual updates when cluster topology changes occur.
3.1.5. Comparison of endpoint protocols
Capability | Hot Rod | HTTP / REST | Memcached | RESP |
---|---|---|---|---|
Topology-aware | Y | N | N | N |
Hash-aware | Y | N | N | N |
Encryption | Y | Y | Y | Y |
Authentication | Y | Y | Y | Y |
Conditional ops | Y | Y | Y | N |
Bulk ops | Y | N | Y | Y |
Transactions | Y | N | N | N |
Listeners | Y | N | N | Y |
Query | Y | Y | N | N |
Execution | Y | N | N | N |
Cross-site failover | Y | N | N | N |
3.1.6. Hot Rod client compatibility with Data Grid Server
Data Grid Server allows you to connect Hot Rod clients with different versions. For instance, during a migration or upgrade of your Data Grid cluster, the Hot Rod client version might be lower than the Data Grid Server version.
Data Grid recommends using the latest Hot Rod client version to benefit from the most recent capabilities and security enhancements.
Data Grid 8 and later
Hot Rod protocol version 3.x automatically negotiates the highest version possible for clients with Data Grid Server.
Data Grid 7.3 and earlier
Clients that use a Hot Rod protocol version that is higher than the Data Grid Server version must set the infinispan.client.hotrod.protocol_version property.
3.2. Configuring Data Grid Server endpoints
Control how the different protocol endpoints bind to sockets and use security realm configuration. You can also configure multiple endpoints and disable administrative capabilities.
Each unique endpoint configuration must include both a Hot Rod connector and a REST connector. Data Grid Server implicitly includes the hotrod-connector and rest-connector elements, or fields, in an endpoint configuration. You should add these elements to custom configuration only to specify authentication mechanisms for endpoints.
Prerequisites
- Add socket bindings and security realms to your Data Grid Server configuration.
Procedure
- Open your Data Grid Server configuration for editing.
- Wrap multiple endpoint configurations with the endpoints element.
- Specify the socket binding that the endpoint uses with the socket-binding attribute.
- Specify the security realm that the endpoint uses with the security-realm attribute.
- Disable administrator access with the admin="false" attribute, if required. With this configuration users cannot access Data Grid Console or the Command Line Interface (CLI) from the endpoint.
- Save the changes to your configuration.
Multiple endpoint configuration
The following Data Grid Server configuration creates endpoints on separate socket bindings with dedicated security realms:
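An XML sketch, assuming socket bindings named "public" and "private" and the security realms configured later in this guide:

<server xmlns="urn:infinispan:server:15.0">
  <endpoints>
    <endpoint socket-binding="public" security-realm="application-realm"/>
    <endpoint socket-binding="private" security-realm="management-realm"/>
  </endpoints>
</server>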
3.3. Endpoint connectors
Connectors configure Hot Rod and REST endpoints to use socket bindings and security realms.
Default endpoint configuration
<endpoints socket-binding="default" security-realm="default"/>
Configuration element or attribute | Description |
---|---|
endpoints | Wraps endpoint connector configuration. |
endpoint | Declares a Data Grid Server endpoint that configures Hot Rod and REST connectors to use a socket binding and security realm. |
hotrod-connector | Includes the Hot Rod endpoint in the endpoint configuration. |
rest-connector | Includes the REST endpoint in the endpoint configuration. |
resp-connector | Includes the RESP endpoint in the endpoint configuration. |
memcached-connector | Includes the Memcached endpoint in the endpoint configuration. |
3.4. Endpoint IP address filtering rules
Data Grid Server endpoints can use filtering rules that control whether clients can connect based on their IP addresses. Data Grid Server applies filtering rules in order until it finds a match for the client IP address.
A CIDR block is a compact representation of an IP address and its associated network mask. CIDR notation specifies an IP address, a slash ('/') character, and a decimal number. The decimal number is the count of leading 1 bits in the network mask. The number can also be thought of as the width, in bits, of the network prefix. The IP address in CIDR notation is always represented according to the standards for IPv4 or IPv6.
The address can denote a specific interface address, including a host identifier, such as 10.0.0.1/8, or it can be the beginning address of an entire network interface range using a host identifier of 0, as in 10.0.0.0/8 or 10/8.
For example:
- 192.168.100.14/24 represents the IPv4 address 192.168.100.14 and its associated network prefix 192.168.100.0, or equivalently, its subnet mask 255.255.255.0, which has 24 leading 1-bits.
- The IPv4 block 192.168.100.0/22 represents the 1024 IPv4 addresses from 192.168.100.0 to 192.168.103.255.
- The IPv6 block 2001:db8::/48 represents the block of IPv6 addresses from 2001:db8:0:0:0:0:0:0 to 2001:db8:0:ffff:ffff:ffff:ffff:ffff.
- ::1/128 represents the IPv6 loopback address. Its prefix length is 128, which is the number of bits in the address.
IP address filter configuration
In the following configuration, Data Grid Server accepts connections only from addresses in the 192.168.0.0/16 and 10.0.0.0/8 CIDR blocks. Data Grid Server rejects all other connections.
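An XML sketch, assuming an ip-filter element with accept and reject rules nested in the endpoints configuration:

<server xmlns="urn:infinispan:server:15.0">
  <endpoints socket-binding="default" security-realm="default">
    <ip-filter>
      <accept from="192.168.0.0/16"/>
      <accept from="10.0.0.0/8"/>
      <reject from="/0"/>
    </ip-filter>
  </endpoints>
</server>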
3.5. Inspecting and modifying rules for filtering IP addresses
Configure IP address filtering rules on Data Grid Server endpoints to accept or reject connections based on client address.
Prerequisites
- Install Data Grid Command Line Interface (CLI).
Procedure
- Create a CLI connection to Data Grid Server.
- Inspect and modify the IP filter rules with the server connector ipfilter command as required.
  - List all IP filtering rules active on a connector across the cluster:

    server connector ipfilter ls endpoint-default

  - Set IP filtering rules across the cluster.
    Note: This command replaces any existing rules.

    server connector ipfilter set endpoint-default --rules=ACCEPT/192.168.0.0/16,REJECT/10.0.0.0/8

  - Remove all IP filtering rules on a connector across the cluster.

    server connector ipfilter clear endpoint-default
Chapter 4. Endpoint authentication mechanisms
Data Grid Server can use custom SASL and HTTP authentication mechanisms for Hot Rod and REST endpoints.
4.1. Data Grid Server authentication
Authentication restricts user access to endpoints as well as the Data Grid Console and Command Line Interface (CLI).
Data Grid Server includes a "default" security realm that enforces user authentication. Default authentication uses a property realm with user credentials stored in the server/conf/users.properties file. Data Grid Server also enables security authorization by default, so you must assign users permissions stored in the server/conf/groups.properties file.
Use the user create command with the Command Line Interface (CLI) to add users and assign permissions. Run user create --help for examples and more information.
4.2. Configuring Data Grid Server authentication mechanisms
You can explicitly configure Hot Rod and REST endpoints to use specific authentication mechanisms. Configuring authentication mechanisms is required only if you need to explicitly override the default mechanisms for a security realm.
Each endpoint section in your configuration must include hotrod-connector and rest-connector elements or fields. For example, if you explicitly declare a hotrod-connector you must also declare a rest-connector even if it does not configure an authentication mechanism.
Prerequisites
- Add security realms to your Data Grid Server configuration as required.
Procedure
- Open your Data Grid Server configuration for editing.
- Add an endpoint element or field and specify the security realm that it uses with the security-realm attribute.
- Add a hotrod-connector element or field to configure the Hot Rod endpoint.
  - Add an authentication element or field.
  - Specify SASL authentication mechanisms for the Hot Rod endpoint to use with the sasl mechanisms attribute.
  - If applicable, specify SASL quality of protection settings with the qop attribute.
  - Specify the Data Grid Server identity with the server-name attribute if necessary.
- Add a rest-connector element or field to configure the REST endpoint.
  - Add an authentication element or field.
  - Specify HTTP authentication mechanisms for the REST endpoint to use with the mechanisms attribute.
- Save the changes to your configuration.
Authentication mechanism configuration
The following configuration specifies SASL mechanisms for the Hot Rod endpoint to use for authentication:
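An XML sketch, with the realm name and the SASL mechanism list as illustrative values:

<server xmlns="urn:infinispan:server:15.0">
  <endpoints>
    <endpoint socket-binding="default" security-realm="default">
      <hotrod-connector>
        <authentication>
          <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384" server-name="infinispan" qop="auth"/>
        </authentication>
      </hotrod-connector>
      <rest-connector/>
    </endpoint>
  </endpoints>
</server>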
4.2.1. Disabling authentication
In local development environments or on isolated networks you can configure Data Grid to allow unauthenticated client requests. When you disable user authentication you should also disable authorization in your Data Grid security configuration.
Procedure
- Open your Data Grid Server configuration for editing.
- Remove the security-realm attribute from the endpoints element or field.
- Remove any authorization elements from the security configuration for the cache-container and each cache configuration.
- Save the changes to your configuration.
XML
<server xmlns="urn:infinispan:server:15.0">
  <endpoints socket-binding="default"/>
</server>
YAML
server:
  endpoints:
    endpoint:
      socketBinding: "default"
4.3. Data Grid Server authentication mechanisms
Data Grid Server automatically configures endpoints with authentication mechanisms that match your security realm configuration. For example, if you add a Kerberos security realm then Data Grid Server enables the GSSAPI and GS2-KRB5 authentication mechanisms for the Hot Rod endpoint.
Currently, you cannot use the Lightweight Directory Access Protocol (LDAP) with the DIGEST or SCRAM authentication mechanisms, because these mechanisms require access to specific hashed passwords.
Hot Rod endpoints
Data Grid Server enables the following SASL authentication mechanisms for Hot Rod endpoints when your configuration includes the corresponding security realm:
Security realm | SASL authentication mechanism |
---|---|
Property realms and LDAP realms | SCRAM-*, DIGEST-*, CRAM-MD5 |
Token realms | OAUTHBEARER |
Trust realms | EXTERNAL |
Kerberos identities | GSSAPI, GS2-KRB5 |
SSL/TLS identities | PLAIN |
REST endpoints
Data Grid Server enables the following HTTP authentication mechanisms for REST endpoints when your configuration includes the corresponding security realm:
Security realm | HTTP authentication mechanism |
---|---|
Property realms and LDAP realms | DIGEST |
Token realms | BEARER_TOKEN |
Trust realms | CLIENT_CERT |
Kerberos identities | SPNEGO |
SSL/TLS identities | BASIC |
Memcached endpoints
Data Grid Server enables the following SASL authentication mechanisms for Memcached binary protocol endpoints when your configuration includes the corresponding security realm:
Security realm | SASL authentication mechanism |
---|---|
Property realms and LDAP realms | SCRAM-*, DIGEST-*, CRAM-MD5 |
Token realms | OAUTHBEARER |
Trust realms | EXTERNAL |
Kerberos identities | GSSAPI, GS2-KRB5 |
SSL/TLS identities | PLAIN |
Data Grid Server enables authentication on Memcached text protocol endpoints only with security realms which support password-based authentication:
Security realm | Memcached text authentication |
---|---|
Property realms and LDAP realms | Yes |
Token realms | No |
Trust realms | No |
Kerberos identities | No |
SSL/TLS identities | No |
RESP endpoints
Data Grid Server enables authentication on RESP endpoints only with security realms which support password-based authentication:
Security realm | RESP authentication |
---|---|
Property realms and LDAP realms | Yes |
Token realms | No |
Trust realms | No |
Kerberos identities | No |
SSL/TLS identities | No |
4.3.1. SASL authentication mechanisms
Data Grid Server supports the following SASL authentication mechanisms with Hot Rod and Memcached binary protocol endpoints:
Authentication mechanism | Description | Security realm type | Related details |
---|---|---|---|
PLAIN | Uses credentials in plain-text format. You should use PLAIN only with encrypted connections. | Property realms and LDAP realms | Similar to the BASIC HTTP mechanism. |
DIGEST-* | Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5, DIGEST-SHA, DIGEST-SHA-256, DIGEST-SHA-384, and DIGEST-SHA-512 hashing algorithms. | Property realms and LDAP realms | Similar to the Digest HTTP mechanism. |
SCRAM-* | Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA, SCRAM-SHA-256, SCRAM-SHA-384, and SCRAM-SHA-512 hashing algorithms. | Property realms and LDAP realms | Similar to the Digest HTTP mechanism. |
GSSAPI | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity to the realm configuration. | Kerberos realms | Similar to the SPNEGO HTTP mechanism. |
GS2-KRB5 | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity to the realm configuration. | Kerberos realms | Similar to the SPNEGO HTTP mechanism. |
EXTERNAL | Uses client certificates. | Trust store realms | Similar to the CLIENT_CERT HTTP mechanism. |
OAUTHBEARER | Uses OAuth tokens and requires a token-realm configuration. | Token realms | Similar to the BEARER_TOKEN HTTP mechanism. |
4.3.2. SASL quality of protection (QoP)
If SASL mechanisms support integrity and privacy protection (QoP) settings, you can add them to your Hot Rod and Memcached endpoint configuration with the qop attribute.
QoP setting | Description |
---|---|
auth | Authentication only. |
auth-int | Authentication with integrity protection. |
auth-conf | Authentication with integrity and privacy protection. |
4.3.3. SASL policies
SASL policies provide fine-grained control over Hot Rod and Memcached authentication mechanisms.

Data Grid cache authorization restricts access to caches based on roles and permissions. Configure cache authorization and then set <no-anonymous value="false" /> to allow anonymous login and delegate access logic to cache authorization.
Policy | Description | Default value |
---|---|---|
forward-secrecy | Use only SASL mechanisms that support forward secrecy between sessions. This means that breaking into one session does not automatically provide information for breaking into future sessions. | false |
pass-credentials | Use only SASL mechanisms that require client credentials. | false |
no-plain-text | Do not use SASL mechanisms that are susceptible to simple plain passive attacks. | false |
no-active | Do not use SASL mechanisms that are susceptible to active, non-dictionary, attacks. | false |
no-dictionary | Do not use SASL mechanisms that are susceptible to passive dictionary attacks. | false |
no-anonymous | Do not use SASL mechanisms that accept anonymous logins. | true |
SASL policy configuration
In the following configuration the Hot Rod endpoint uses the GSSAPI mechanism for authentication because it is the only mechanism that complies with all SASL policies:
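An XML sketch, with the policy list and mechanisms as illustrative assumptions:

<server xmlns="urn:infinispan:server:15.0">
  <endpoints>
    <endpoint socket-binding="default" security-realm="default">
      <hotrod-connector>
        <authentication>
          <sasl mechanisms="PLAIN DIGEST-MD5 GSSAPI EXTERNAL"
                server-name="infinispan"
                qop="auth"
                policy="no-active no-plain-text"/>
        </authentication>
      </hotrod-connector>
      <rest-connector/>
    </endpoint>
  </endpoints>
</server>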
4.3.4. HTTP authentication mechanisms
Data Grid Server supports the following HTTP authentication mechanisms with REST endpoints:
Authentication mechanism | Description | Security realm type | Related details |
---|---|---|---|
BASIC | Uses credentials in plain-text format. You should use BASIC only with encrypted connections. | Property realms and LDAP realms | Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism. |
DIGEST | Uses hashing algorithms and nonce values. REST connectors support SHA-512, SHA-256, and MD5 hashing algorithms. | Property realms and LDAP realms | Corresponds to the Digest HTTP authentication scheme and is similar to the DIGEST-* SASL mechanisms. |
SPNEGO | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity to the realm configuration. | Kerberos realms | Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms. |
BEARER_TOKEN | Uses OAuth tokens and requires a token-realm configuration. | Token realms | Corresponds to the Bearer HTTP authentication scheme and is similar to the OAUTHBEARER SASL mechanism. |
CLIENT_CERT | Uses client certificates. | Trust store realms | Similar to the EXTERNAL SASL mechanism. |
Chapter 5. Security realms
Security realms integrate Data Grid Server deployments with the network protocols and infrastructure in your environment that control access and verify user identities.
5.1. Creating security realms
Add security realms to Data Grid Server configuration to control access to deployments. You can add one or more security realms to your configuration.
When you add security realms to your configuration, Data Grid Server automatically enables the matching authentication mechanisms for the Hot Rod and REST endpoints.
Prerequisites
- Add socket bindings to your Data Grid Server configuration as required.
- Create keystores, or have a PEM file, to configure the security realm with TLS/SSL encryption. Data Grid Server can also generate keystores at startup.
- Provision the resources or services that the security realm configuration relies on. For example, if you add a token realm, you need to provision OAuth services.
This procedure demonstrates how to configure multiple property realms. Before you begin, you need to create properties files that add users and assign permissions with the Command Line Interface (CLI). Use the user create commands as follows:
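A sketch of the commands; the usernames, passwords, and properties file names are illustrative:

bin/cli.sh user create myuser -p changeme -g admin \
  --users-file=application-users.properties \
  --groups-file=application-groups.properties
bin/cli.sh user create theuser -p theuserpassword \
  --users-file=management-users.properties \
  --groups-file=management-groups.properties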
Run user create --help for examples and more information.
Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster.
Procedure
- Open your Data Grid Server configuration for editing.
- Use the security-realms element in the security configuration to contain multiple security realms.
- Add a security realm with the security-realm element and give it a unique name with the name attribute. To follow the example, create one security realm named application-realm and another named management-realm.
- Provide the TLS/SSL identity for Data Grid Server with the server-identities element and configure a keystore as required.
- Specify the type of security realm by adding one of the following elements or fields:
  - properties-realm
  - ldap-realm
  - token-realm
  - truststore-realm
- Specify properties for the type of security realm you are configuring as appropriate. To follow the example, specify the *.properties files you created with the CLI using the path attribute on the user-properties and group-properties elements or fields.
- If you add multiple different types of security realm to your configuration, include the distributed-realm element or field so that Data Grid Server uses the realms in combination with each other.
- Configure Data Grid Server endpoints to use the security realm with the security-realm attribute.
- Save the changes to your configuration.
Multiple property realms
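An XML sketch, with keystore configuration omitted and the properties file names matching the CLI commands above:

<server xmlns="urn:infinispan:server:15.0">
  <security>
    <security-realms>
      <security-realm name="application-realm">
        <properties-realm groups-attribute="Roles">
          <user-properties path="application-users.properties"/>
          <group-properties path="application-groups.properties"/>
        </properties-realm>
      </security-realm>
      <security-realm name="management-realm">
        <properties-realm groups-attribute="Roles">
          <user-properties path="management-users.properties"/>
          <group-properties path="management-groups.properties"/>
        </properties-realm>
      </security-realm>
    </security-realms>
  </security>
</server>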
5.2. Setting up Kerberos identities
Add Kerberos identities to a security realm in your Data Grid Server configuration to use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords.
Prerequisites
- Have Kerberos service account principals.
Keytab files can contain both user and service account principals. However, Data Grid Server uses service account principals only, which means it can provide identity to clients and allow clients to authenticate with Kerberos servers.
In most cases, you create unique principals for the Hot Rod and REST endpoints. For example, if you have a "datagrid" server in the "INFINISPAN.ORG" domain you should create the following service principals:
- hotrod/datagrid@INFINISPAN.ORG identifies the Hot Rod service.
- HTTP/datagrid@INFINISPAN.ORG identifies the REST service.
Procedure
- Create keytab files for the Hot Rod and REST services.

  Linux:

  ktutil
  ktutil:  addent -password -p datagrid@INFINISPAN.ORG -k 1 -e aes256-cts
  Password for datagrid@INFINISPAN.ORG: [enter your password]
  ktutil:  wkt http.keytab
  ktutil:  quit

  Microsoft Windows:

  ktpass -princ HTTP/datagrid@INFINISPAN.ORG -pass * -mapuser INFINISPAN\USER_NAME
  ktab -k http.keytab -a HTTP/datagrid@INFINISPAN.ORG
- Copy the keytab files to the server/conf directory of your Data Grid Server installation.
- Open your Data Grid Server configuration for editing.
- Add a server-identities definition to the Data Grid server security realm.
- Specify the location of keytab files that provide service principals to Hot Rod and REST connectors.
- Name the Kerberos service principals.
- Save the changes to your configuration.
Kerberos identity configuration
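An XML sketch, assuming a kerberos server identity element that references the keytab files created above:

<server xmlns="urn:infinispan:server:15.0">
  <security>
    <security-realms>
      <security-realm name="kerberos-realm">
        <server-identities>
          <kerberos keytab-path="hotrod.keytab" principal="hotrod/datagrid@INFINISPAN.ORG" required="true"/>
          <kerberos keytab-path="http.keytab" principal="HTTP/datagrid@INFINISPAN.ORG" required="true"/>
        </server-identities>
      </security-realm>
    </security-realms>
  </security>
</server>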
5.3. Property realms
Property realms use property files to define users and groups.
- users.properties contains Data Grid user credentials. Passwords can be pre-digested with the DIGEST-MD5 and DIGEST authentication mechanisms.
- groups.properties associates users with roles and permissions.
users.properties
myuser=a_password
user2=another_password
groups.properties
myuser=supervisor,reader,writer
user2=supervisor
Property realm configuration
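An XML sketch, assuming the properties files shown above reside in the server/conf directory:

<server xmlns="urn:infinispan:server:15.0">
  <security>
    <security-realms>
      <security-realm name="default">
        <properties-realm groups-attribute="Roles">
          <user-properties path="users.properties"/>
          <group-properties path="groups.properties"/>
        </properties-realm>
      </security-realm>
    </security-realms>
  </security>
</server>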
5.3.1. Property realm file structure
User properties files are structured as follows:
users.properties structure
#$REALM_NAME=default$
#$ALGORITHM=encrypted$
#Wed Jul 31 08:32:08 CEST 2024
admin=algorithm-1\:hash-1;algorithm-2\:hash-2;...
The first three lines are special comments that define the name of the realm ($REALM_NAME), whether the passwords are stored in clear or encrypted format ($ALGORITHM), and the timestamp of the last update.
User credentials are stored in traditional key/value format: the key corresponds to the username and the value corresponds to the password. Encrypted passwords are represented as semi-colon-separated algorithm/hash pairs, with the hash encoded in Base64.
Credentials are encrypted using the realm name. Changing a realm’s name requires re-encrypting all the passwords. Use the Data Grid CLI to enter the correct security realm name in the file.
5.4. LDAP realms
LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information.
LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations.
5.4.1. LDAP connection properties
Specify the LDAP connection properties in the LDAP realm configuration.
The following properties are required:
url | Specifies the URL of the LDAP server, in the format ldap://hostname:port, or ldaps://hostname:port for TLS connections. |
principal | Specifies a distinguished name (DN) of a valid user in the LDAP server. The DN uniquely identifies the user within the LDAP directory structure. |
credential | Corresponds to the password associated with the principal mentioned above. |
The principal for LDAP connections must have the necessary privileges to perform LDAP queries and access specific attributes.
Enabling connection-pooling significantly improves the performance of authentication to LDAP servers. The connection pooling mechanism is provided by the JDK. For more information see Connection Pooling Configuration and Java Tutorials: Pooling.
5.4.2. LDAP realm user authentication methods
Configure the user authentication method in the LDAP realm.
The LDAP realm can authenticate users in two ways:
Hashed password comparison | By comparing the hashed password stored in a user’s password attribute (usually userPassword). |
Direct verification | By authenticating against the LDAP server using the supplied credentials. Direct verification is the only approach that works with Active Directory, because access to the password attribute is forbidden there. |
You cannot use endpoint authentication mechanisms that perform hashing with the direct-verification attribute, since this method requires having the password in clear text. As a result you must use the BASIC authentication mechanism with the REST endpoint and PLAIN with the Hot Rod endpoint to integrate with Active Directory Server. A more secure alternative is to use Kerberos, which allows the SPNEGO, GSSAPI, and GS2-KRB5 authentication mechanisms.
The LDAP realm searches the directory to find the entry which corresponds to the authenticated user. The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. You can specify a different filter using the filter-name attribute.
5.4.3. Mapping user entries to their associated groups
In the LDAP realm configuration, specify the attribute-mapping element to retrieve and associate all groups that a user is a member of.
The membership information is stored typically in two ways:
- Under group entries that usually have class groupOfNames or groupOfUniqueNames in the member attribute. This is the default behavior in most LDAP installations, except for Active Directory. In this case, you can use an attribute filter. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user’s DN. The filter then extracts the group entry’s CN as specified by from, and adds it to the user’s Roles.
- In the user entry in the memberOf attribute. This is typically the case for Active Directory. In this case you should use an attribute reference such as the following:

  <attribute-reference reference="memberOf" from="cn" to="Roles" />

  This reference gets all memberOf attributes from the user’s entry, extracts the CN as specified by from, and adds them to the user’s groups (Roles is the internal name used to map the groups).
5.4.4. LDAP realm configuration reference
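An XML sketch, with the connection URL, principal, credential, and search DNs as illustrative values that match the mapping example in the following sections:

<server xmlns="urn:infinispan:server:15.0">
  <security>
    <security-realms>
      <security-realm name="ldap-realm">
        <ldap-realm url="ldap://my-ldap-server:10389"
                    principal="uid=admin,ou=People,dc=infinispan,dc=org"
                    credential="strongPassword">
          <identity-mapping rdn-identifier="uid"
                            search-dn="ou=People,dc=infinispan,dc=org"
                            search-recursive="false">
            <attribute-mapping>
              <attribute filter="(&amp;(objectClass=groupOfNames)(member={1}))"
                         filter-dn="ou=Roles,dc=infinispan,dc=org"
                         from="cn" to="Roles"/>
            </attribute-mapping>
          </identity-mapping>
        </ldap-realm>
      </security-realm>
    </security-realms>
  </security>
</server>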
5.4.4.1. LDAP realm principal rewriting
Principals obtained by SASL authentication mechanisms such as GSSAPI
, GS2-KRB5
and Negotiate
usually include the domain name, for example myuser@INFINISPAN.ORG
. Before using these principals in LDAP queries, it is necessary to transform them to ensure their compatibility. This process is called rewriting.
Data Grid includes the following transformers:
Transformer | Description |
---|---|
case-principal-transformer | Rewrites the principal to either all uppercase or all lowercase; for example, MyUser becomes MYUSER or myuser. |
common-name-principal-transformer | Rewrites principals in the LDAP Distinguished Name format (as defined by RFC 4514) by extracting the first attribute of type CN. |
regex-principal-transformer | Rewrites principals using a regular expression with capturing groups, allowing, for example, the extraction of any substring. |
5.4.4.2. LDAP principal rewriting configuration reference
Case principal transformer
XML
JSON
YAML
Common name principal transformer
XML
JSON
YAML
Regex principal transformer
XML
JSON
YAML
5.4.4.3. LDAP user and group mapping process with Data Grid
This example illustrates the process of loading and internally mapping LDAP users and groups to Data Grid subjects. The following is an LDIF (LDAP Data Interchange Format) file, which describes multiple LDAP entries:
LDIF
The root
user is a member of the admin
and monitor
groups.
When a request to authenticate the user root
with the password strongPassword
is made on one of the endpoints, the following operations are performed:
- The username is optionally rewritten using the chosen principal transformer.
-
The realm searches within the
ou=People,dc=infinispan,dc=org
tree for an entry whoseuid
attribute is equal toroot
and finds the entry with DNuid=root,ou=People,dc=infinispan,dc=org
, which becomes the user principal. -
The realm searches within the
ou=Roles,dc=infinispan,dc=org
tree for entries ofobjectClass=groupOfNames
that includeuid=root,ou=People,dc=infinispan,dc=org
in themember
attribute. In this case it finds two entries:cn=admin,ou=Roles,dc=infinispan,dc=org
andcn=monitor,ou=Roles,dc=infinispan,dc=org
. From these entries, it extracts thecn
attributes which become the group principals.
The resulting subject will therefore look like:
-
NamePrincipal:
uid=root,ou=People,dc=infinispan,dc=org
-
RolePrincipal:
admin
-
RolePrincipal:
monitor
At this point, the global authorization mappers are applied on the above subject to convert the principals into roles. The roles are then expanded into a set of permissions, which are validated against the requested cache and operation.
5.5. Token realms
Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO.
Token realm configuration
XML
JSON
YAML
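A minimal hedged sketch of a token realm that delegates validation to an RFC-7662 introspection endpoint; the URLs and client credentials are placeholder assumptions:
<security-realms>
   <security-realm name="default">
      <token-realm name="token" auth-server-url="https://oauth-server/auth/">
         <oauth2-introspection introspection-url="https://oauth-server/auth/realms/myrealm/protocol/openid-connect/token/introspect"
                               client-id="infinispan-server"
                               client-secret="***change-me***"/>
      </token-realm>
   </security-realm>
</security-realms>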
5.6. Trust store realms
Trust store realms use certificates, or certificate chains, that verify Data Grid Server and client identities when they negotiate connections.
- Keystores
- Contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols.
- Trust stores
- Contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication.
Client certificate authentication
You must add the require-ssl-client-auth="true"
attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates.
Trust store realm configuration
XML
JSON
YAML
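A minimal hedged sketch of a trust store realm, assuming PKCS12 keystore and trust store files in the server configuration directory:
<security-realms>
   <security-realm name="trust">
      <server-identities>
         <ssl>
            <!-- Keystore that provides the Data Grid Server identity -->
            <keystore path="server.p12" password="secret" alias="server"/>
            <!-- Trust store with client certificates or the signing CA -->
            <truststore path="trust.p12" password="secret"/>
         </ssl>
      </server-identities>
      <truststore-realm/>
   </security-realm>
</security-realms>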
5.7. Distributed security realms
Distributed realms combine multiple different types of security realms. When users attempt to access the Hot Rod or REST endpoints, Data Grid Server uses each security realm in turn until it finds one that can perform the authentication.
Distributed realm configuration
XML
JSON
YAML
5.8. Aggregate security realms
Aggregate realms combine multiple realms: the first one for the authentication steps and the others for loading the identity for the authorization steps. For example, you can authenticate users with a client certificate and retrieve their identity from a properties or LDAP realm.
Aggregate realm configuration
XML
JSON
YAML
5.8.1. Name rewriters
Principal names may have different forms, depending on the security realm type:
- Properties and Token realms may return simple strings
- Trust and LDAP realms may return X.500-style distinguished names
-
Kerberos realms may return
user@domain
-style names
When you use the aggregate realm, names must be normalized to a common form with one of the following transformers.
5.8.1.1. Case Principal Transformer
The case-principal-transformer
transforms a name to all uppercase or all lowercase letters.
XML
<aggregate-realm authentication-realm="trust" authorization-realms="properties">
<name-rewriter>
<case-principal-transformer uppercase="false"/>
</name-rewriter>
</aggregate-realm>
JSON
YAML
5.8.1.2. Common Name Principal Transformer
The common-name-principal-transformer
extracts the first CN
element from a DN
used by LDAP or Certificates. For example, given a principal in the form CN=app1,CN=serviceA,OU=applications,DC=infinispan,DC=org
, the following configuration will extract app1
as the principal.
XML
<aggregate-realm authentication-realm="trust" authorization-realms="properties">
<name-rewriter>
<common-name-principal-transformer/>
</name-rewriter>
</aggregate-realm>
JSON
YAML
5.8.1.3. Regex Principal Transformer
The regex-principal-transformer
can perform find and replace using a regular expression. The example shows how to extract the local-part from a user@domain.com
identifier.
XML
<aggregate-realm authentication-realm="trust" authorization-realms="properties">
<name-rewriter>
<regex-principal-transformer pattern="([^@]+)@.*" replacement="$1" replace-all="false"/>
</name-rewriter>
</aggregate-realm>
JSON
YAML
5.9. Security realm caching
Security realms implement caching to avoid having to repeatedly retrieve data which usually changes very infrequently. Realm caching is enabled by default.
Realm caching configuration
XML
JSON
YAML
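A hedged sketch of realm cache tuning, assuming the cache-max-size and cache-lifespan attributes on the security-realm element:
<security-realm name="default" cache-max-size="1024" cache-lifespan="120000">
   <!-- Cache up to 1024 realm entries, each for at most 120 seconds -->
</security-realm>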
5.9.1. Flushing realm caches
Use the CLI to flush security realm caches across the whole cluster.
[node-1@mycluster//containers/default]> server aclcache flush
Chapter 6. Configuring TLS/SSL encryption
You can secure Data Grid Server connections using SSL/TLS encryption by configuring a keystore that contains public and private keys for Data Grid. You can also configure client certificate authentication if you require mutual TLS.
6.1. Configuring Data Grid Server keystores
Add keystores to Data Grid Server and configure it to present SSL/TLS certificates that verify its identity to clients. If a security realm contains TLS/SSL identities, it encrypts any connections to Data Grid Server endpoints that use that security realm.
Prerequisites
- Create a keystore that contains certificates, or certificate chains, for Data Grid Server.
Data Grid Server supports the following keystore formats: JKS, JCEKS, PKCS12/PFX and PEM. BKS, BCFKS, and UBER are also supported if the Bouncy Castle library is present.
Certificates should include the subjectAltName
extension of type dNSName
and/or iPAddress
so that clients can correctly perform hostname validation, according to the rules defined by the RFC 2818 specification. The server will issue a warning if it is started with a certificate which does not include such an extension.
In production environments, server certificates should be signed by a trusted Certificate Authority, either Root or Intermediate CA.
You can use PEM files as keystores if they contain both of the following:
- A private key in PKCS#1 or PKCS#8 format.
- One or more certificates.
You should also configure PEM file keystores with an empty password (password=""
).
Procedure
- Open your Data Grid Server configuration for editing.
-
Add the keystore that contains SSL/TLS identities for Data Grid Server to the
$RHDG_HOME/server/conf
directory. -
Add a
server-identities
definition to the Data Grid Server security realm. -
Specify the keystore file name with the
path
attribute. -
Provide the keystore password and certificate alias with the
keystore-password
andalias
attributes. - Save the changes to your configuration.
Next steps
Configure clients with a trust store so they can verify SSL/TLS identities for Data Grid Server.
Keystore configuration
XML
JSON
YAML
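A minimal hedged keystore sketch, assuming a PKCS12 file named server.p12 in $RHDG_HOME/server/conf:
<security-realm name="default">
   <server-identities>
      <ssl>
         <keystore path="server.p12" relative-to="infinispan.server.config.path"
                   password="secret" alias="server"/>
      </ssl>
   </server-identities>
</security-realm>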
6.1.1. SSL/TLS Certificate rotation
SSL/TLS certificates have an expiration date, after which they are no longer valid. The process of renewing a certificate is also known as "rotation". Data Grid monitors the keystore files for changes and automatically reloads them without requiring a server or client restart.
To ensure seamless operations during certificate rotation, use certificates signed by a Certificate Authority (CA) and configure both server and client trust stores with the CA certificate. Using self-signed certificates will cause temporary handshake failures until all clients and servers have been updated.
6.1.2. Generating Data Grid Server keystores
Configure Data Grid Server to automatically generate keystores at startup.
Automatically generated keystores:
- Should not be used in production environments.
- Are generated whenever necessary; for example, while obtaining the first connection from a client.
- Contain certificates that you can use directly in Hot Rod clients.
Procedure
- Open your Data Grid Server configuration for editing.
-
Include the
generate-self-signed-certificate-host
attribute for thekeystore
element in the server configuration. - Specify a hostname for the server certificate as the value.
- Save the changes to your configuration.
Generated keystore configuration
XML
JSON
YAML
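A hedged sketch of a keystore definition that generates a self-signed certificate at startup; the file name and password are placeholders:
<keystore path="server.p12" password="secret" alias="server"
          generate-self-signed-certificate-host="localhost"/>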
6.1.3. Configuring TLS versions and cipher suites
When using SSL/TLS encryption to secure your deployment, you can configure Data Grid Server to use specific versions of the TLS protocol as well as specific cipher suites within the protocol.
Procedure
- Open your Data Grid Server configuration for editing.
-
Add the
engine
element to the SSL configuration for Data Grid Server. Configure Data Grid to use one or more TLS versions with the enabled-protocols attribute.
Data Grid Server supports TLS versions 1.2 and 1.3 by default. If appropriate, you can set TLSv1.3 only to restrict the security protocol for client connections. Data Grid does not recommend enabling TLSv1.1 because it is an older protocol with limited support that provides weak security. You should never enable any version of TLS older than 1.1.
Warning: If you modify the SSL engine configuration for Data Grid Server you must explicitly configure TLS versions with the enabled-protocols attribute. Omitting the enabled-protocols attribute allows any TLS version.
<engine enabled-protocols="TLSv1.3 TLSv1.2" />
Configure Data Grid to use one or more cipher suites with the enabled-ciphersuites attribute (for TLSv1.2 and below) and the enabled-ciphersuites-tls13 attribute (for TLSv1.3).
You must ensure that you set a cipher suite that supports any protocol features you plan to use; for example, HTTP/2 ALPN.
- Save the changes to your configuration.
SSL engine configuration
XML
JSON
YAML
6.2. Configuring Data Grid Server on a system with FIPS 140-2 compliant cryptography
FIPS (Federal Information Processing Standards) are standards and guidelines for US federal computer systems. Although FIPS are developed for use by the US federal government, many in the private sector voluntarily use these standards.
FIPS 140-2 defines security requirements for cryptographic modules. You can configure your Data Grid Server to use encryption ciphers that adhere to the FIPS 140-2 specification by using alternative JDK security providers.
Additional resources
6.2.1. Configuring the PKCS11 cryptographic provider
You can configure the PKCS11 cryptographic provider by specifying the PKCS11 keystore with the SunPKCS11-NSS-FIPS
provider.
Prerequisites
-
Configure your system for FIPS mode. You can check if your system has FIPS mode enabled by running the fips-mode-setup --check command on the host system.
Initialize the system-wide NSS database by using the
certutil
tool. -
Install the JDK with the
java.security
file configured to enable theSunPKCS11
provider. This provider points to the NSS database and the SSL provider. - Install a certificate in the NSS database.
Procedure
- Open your Data Grid Server configuration for editing.
-
Add a
server-identities
definition to the Data Grid Server security realm. -
Specify the PKCS11 keystore with the
SunPKCS11-NSS-FIPS
provider. - Save the changes to your configuration.
Keystore configuration
XML
JSON
YAML
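A hedged sketch of the server identity, assuming the keystore element accepts provider and type attributes for PKCS11 (no path is needed because the NSS database acts as the keystore):
<server-identities>
   <ssl>
      <keystore provider="SunPKCS11-NSS-FIPS" type="PKCS11"/>
   </ssl>
</server-identities>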
6.2.2. Configuring the Bouncy Castle FIPS cryptographic provider
You can configure the Bouncy Castle FIPS (Federal Information Processing Standards) cryptographic provider in your Data Grid server’s configuration.
Prerequisites
-
Configure your system for FIPS mode. You can check if your system has FIPS mode enabled by running the fips-mode-setup --check command on the host system.
command in your Data Grid command-line Interface (CLI). - Create a keystore in BCFKS format that contains a certificate.
Procedure
-
Download the Bouncy Castle FIPS JAR file, and add the file to the
server/lib
directory of your Data Grid Server installation. To install Bouncy Castle, issue the
install
command:
[disconnected]> install org.bouncycastle:bc-fips:1.0.2.3
- Open your Data Grid Server configuration for editing.
-
Add a
server-identities
definition to the Data Grid Server security realm. -
Specify the BCFKS keystore with the
BCFIPS
provider. - Save the changes to your configuration.
Keystore configuration
XML
JSON
YAML
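A hedged sketch, assuming a BCFKS keystore file named server.bcfks in the server configuration directory:
<server-identities>
   <ssl>
      <keystore path="server.bcfks" password="secret" alias="server"
                provider="BCFIPS" type="BCFKS"/>
   </ssl>
</server-identities>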
6.3. Configuring client certificate authentication
Configure Data Grid Server to use mutual TLS to secure client connections.
You can configure Data Grid to verify client identities from certificates in a trust store in two ways:
- Require a trust store that contains only the signing certificate, which is typically a Certificate Authority (CA). Any client that presents a certificate signed by the CA can connect to Data Grid.
- Require a trust store that contains all client certificates in addition to the signing certificate. Only clients that present a signed certificate that is present in the trust store can connect to Data Grid.
As an alternative to providing trust stores, you can use shared system certificates.
Prerequisites
- Create a client trust store that contains either the CA certificate or all public certificates.
- Create a keystore for Data Grid Server and configure an SSL/TLS identity.
PEM files can be used as trust stores provided they contain one or more certificates. These trust stores should be configured with an empty password: password="".
Procedure
- Open your Data Grid Server configuration for editing.
-
Add the
require-ssl-client-auth="true"
parameter to yourendpoints
configuration. -
Add the client trust store to the
$RHDG_HOME/server/conf
directory. -
Specify the
path
andpassword
attributes for thetruststore
element in the Data Grid Server security realm configuration. -
Add the
<truststore-realm/>
element to the security realm if you want Data Grid Server to authenticate each client certificate. - Save the changes to your configuration.
Next steps
- Set up authorization with client certificates in the Data Grid Server configuration if you control access with security roles and permissions.
- Configure clients to negotiate SSL/TLS connections with Data Grid Server.
Client certificate authentication configuration
XML
JSON
YAML
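A hedged sketch combining the steps above; the realm reuses the trust store layout from the trust store realm section, and the endpoint names and SASL/HTTP mechanisms are assumptions:
<security-realm name="trust">
   <server-identities>
      <ssl>
         <keystore path="server.p12" password="secret" alias="server"/>
         <truststore path="trust.p12" password="secret"/>
      </ssl>
   </server-identities>
   <truststore-realm/>
</security-realm>
<!-- ... -->
<endpoints socket-binding="default" security-realm="trust" require-ssl-client-auth="true">
   <hotrod-connector>
      <authentication>
         <sasl mechanisms="EXTERNAL"/>
      </authentication>
   </hotrod-connector>
   <rest-connector>
      <authentication mechanisms="CLIENT_CERT"/>
   </rest-connector>
</endpoints>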
6.4. Configuring authorization with client certificates
Enabling client certificate authentication means you do not need to specify Data Grid user credentials in client configuration; instead, you must associate roles with the Common Name (CN) field in the client certificate(s).
Prerequisites
- Provide clients with a Java keystore that contains either their public certificates or part of the certificate chain, typically a public CA certificate.
- Configure Data Grid Server to perform client certificate authentication.
Procedure
- Open your Data Grid Server configuration for editing.
-
Enable the
common-name-role-mapper
in the security authorization configuration. -
Assign the Common Name (
CN
) from the client certificate a role with the appropriate permissions. - Save the changes to your configuration.
Data Grid creates the identity of the client by extracting the certificate principal. Any other Subject Alternative Names (SANs) which may be present in the certificate are currently ignored. For this reason, the authorization.group-only-mapping
attribute below must be set to false
.
Client certificate authorization configuration
XML
JSON
YAML
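A hedged sketch of the authorization side, assuming a client certificate whose CN is admin-client (a placeholder name) mapped to a role:
<cache-container name="default">
   <security>
      <authorization group-only-mapping="false">
         <common-name-role-mapper/>
         <role name="admin-client" permissions="ALL"/>
      </authorization>
   </security>
</cache-container>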
Chapter 7. Storing Data Grid Server credentials in keystores
External services require credentials to authenticate with Data Grid Server. To protect sensitive text strings such as passwords, add them to a credential keystore rather than directly in Data Grid Server configuration files.
You can then configure Data Grid Server to decrypt passwords for establishing connections with services such as databases or LDAP directories.
Plain-text passwords in $RHDG_HOME/server/conf
are unencrypted. Any user account with read access to the host filesystem can view plain-text passwords.
While credential keystores are password-protected stores for encrypted passwords, any user account with write access to the host filesystem can tamper with the keystore itself.
To completely secure Data Grid Server credentials, you should grant read-write access only to user accounts that can configure and run Data Grid Server.
7.1. Setting up credential keystores
Create keystores that encrypt credentials for Data Grid Server access.
A credential keystore contains at least one alias that is associated with an encrypted password. After you create a keystore, you specify the alias in a connection configuration such as a database connection pool. Data Grid Server then decrypts the password for that alias from the keystore when the service attempts authentication.
You can create as many credential keystores with as many aliases as required.
As a security best practice, keystores should be readable only by the user who runs the process for Data Grid Server.
Procedure
-
Open a terminal in
$RHDG_HOME
. Create a keystore and add credentials to it with the
credentials
command.
Tip: By default, keystores are of type PKCS12. Run help credentials for details on changing keystore defaults.
The following example shows how to create a keystore that contains an alias of "dbpassword" for the password "changeme". When you create a keystore you also specify a password to access the keystore with the -p argument.
- Linux
bin/cli.sh credentials add dbpassword -c changeme -p "secret1234!"
- Microsoft Windows
bin\cli.bat credentials add dbpassword -c changeme -p "secret1234!"
Check that the alias is added to the keystore.
bin/cli.sh credentials ls -p "secret1234!"
dbpassword
- Open your Data Grid Server configuration for editing.
Configure Data Grid to use the credential keystore.
-
Add a
credential-stores
section to thesecurity
configuration. - Specify the name and location of the credential keystore.
Specify the password to access the credential keystore with the
clear-text-credential
configuration.
Note: Instead of adding a clear-text password for the credential keystore to your Data Grid Server configuration you can use an external command or masked password for additional security.
You can also use a password in one credential store as the master password for another credential store.
-
Reference the credential keystore in configuration that Data Grid Server uses to connect with an external system such as a datasource or LDAP server.
-
Add a
credential-reference
section. -
Specify the name of the credential keystore with the
store
attribute. Specify the password alias with the
alias
attribute.
Tip: Attributes in the
credential-reference
configuration are optional.
-
store
is required only if you have multiple keystores. -
alias
is required only if the keystore contains multiple password aliases.
- Save the changes to your configuration.
7.2. Securing passwords for credential keystores
Data Grid Server requires a password to access credential keystores. You can add that password to Data Grid Server configuration in clear text or, as an added layer of security, you can use an external command for the password or you can mask the password.
Prerequisites
- Set up a credential keystore for Data Grid Server.
Procedure
Do one of the following:
Use the
credentials mask
command to obscure the password, for example:bin/cli.sh credentials mask -i 100 -s pepper99 "secret1234!"
Masked passwords use Password Based Encryption (PBE) and must be in the following format in your Data Grid Server configuration: <MASKED_VALUE;SALT;ITERATION>.
Use an external command that provides the password as standard output.
An external command can be any executable, such as a shell script or binary, that uses
java.lang.Runtime#exec(java.lang.String)
.
If the command requires parameters, provide them as a space-separated list of strings.
7.3. Credential keystore configuration
You can add credential keystores to Data Grid Server configuration and use clear-text passwords, masked passwords, or external commands that supply passwords.
Credential keystore with a clear text password
XML
JSON
YAML
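A minimal hedged sketch for the clear-text case, assuming the keystore created with the CLI above:
<security>
   <credential-stores>
      <credential-store name="credentials" path="credentials.pfx">
         <clear-text-credential clear-text="secret1234!"/>
      </credential-store>
   </credential-stores>
</security>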
Credential keystore with a masked password
XML
JSON
YAML
External command passwords
XML
JSON
YAML
7.4. Credential keystore references
After you add credential keystores to Data Grid Server you can reference them in connection configurations.
Datasource connections
XML
JSON
YAML
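A hedged sketch of a datasource that resolves its password from the credential keystore, assuming the dbpassword alias created earlier; the driver and URL are placeholders:
<data-source name="postgres" jndi-name="jdbc/postgres">
   <connection-factory driver="org.postgresql.Driver"
                       url="jdbc:postgresql://localhost:5432/mydb"
                       username="dbuser">
      <!-- Resolve the password from the "credentials" store by alias -->
      <credential-reference store="credentials" alias="dbpassword"/>
   </connection-factory>
   <connection-pool max-size="10" min-size="1"/>
</data-source>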
LDAP connections
XML
JSON
YAML
Chapter 8. Security authorization with role-based access control
Role-based access control (RBAC) capabilities use different permission levels to restrict user interactions with Data Grid.
For information on creating users and configuring authorization specific to remote or embedded caches, see:
8.1. Data Grid user roles and permissions
Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources.
Role | Permissions | Description |
---|---|---|
admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. |
observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
monitor | MONITOR | Can view statistics via JMX and the metrics endpoint. |
8.1.1. Permissions
User roles are sets of permissions with different access levels.
Permission | Function | Description |
---|---|---|
CONFIGURATION | | Defines new cache configurations. |
LISTEN | | Registers listeners against a Cache Manager. |
LIFECYCLE | | Stops the Cache Manager. |
CREATE | | Create and remove container resources such as caches, counters, schemas, and scripts. |
MONITOR | | Allows access to JMX statistics and the metrics endpoint. |
ALL | - | Includes all Cache Manager permissions. |
Permission | Function | Description |
---|---|---|
READ | | Retrieves entries from a cache. |
WRITE | | Writes, replaces, removes, evicts data in a cache. |
EXEC | | Allows code execution against a cache. |
LISTEN | | Registers listeners against a cache. |
BULK_READ | | Executes bulk retrieve operations. |
BULK_WRITE | | Executes bulk write operations. |
LIFECYCLE | | Starts and stops a cache. |
ADMIN | | Allows access to underlying components and internal structures. |
MONITOR | | Allows access to JMX statistics and the metrics endpoint. |
ALL | - | Includes all cache permissions. |
ALL_READ | - | Combines the READ and BULK_READ permissions. |
ALL_WRITE | - | Combines the WRITE and BULK_WRITE permissions. |
8.1.2. Role and permission mappers
Data Grid implements users as a collection of principals. Principals represent either an individual user identity, such as a username, or a group to which the users belong. Internally, these are implemented with the javax.security.auth.Subject
class.
To enable authorization, the principals must be mapped to role names, which are then expanded into a set of permissions.
Data Grid includes the PrincipalRoleMapper
API for associating security principals to roles, and the RolePermissionMapper
API for associating roles with specific permissions.
Data Grid provides the following role and permission mapper implementations:
- Cluster role mapper
- Stores principal to role mappings in the cluster registry.
- Cluster permission mapper
- Stores role to permission mappings in the cluster registry. Allows you to dynamically modify user roles and permissions.
- Identity role mapper
- Uses the principal name as the role name. The type or format of the principal name depends on the source. For example, in an LDAP directory the principal name could be a Distinguished Name (DN).
- Common name role mapper
-
Uses the Common Name (CN) as the role name. You can use this role mapper with an LDAP directory or with client certificates that contain Distinguished Names (DN); for example
cn=managers,ou=people,dc=example,dc=com
maps to themanagers
role.
By default, principal-to-role mapping is only applied to principals which represent groups. It is possible to configure Data Grid to also perform the mapping for user principals by setting the authorization.group-only-mapping
configuration attribute to false
.
8.1.2.1. Mapping users to roles and permissions in Data Grid
Consider the following user retrieved from an LDAP server, as a collection of DNs:
CN=myapplication,OU=applications,DC=mycompany
CN=dataprocessors,OU=groups,DC=mycompany
CN=finance,OU=groups,DC=mycompany
Using the Common name role mapper, the user would be mapped to the following roles:
dataprocessors
finance
Data Grid has the following role definitions:
dataprocessors: ALL_WRITE ALL_READ
finance: LISTEN
The user would have the following permissions:
ALL_WRITE ALL_READ LISTEN
8.1.3. Configuring role mappers
Data Grid enables the cluster role mapper and cluster permission mapper by default. To use a different implementation for role mapping, you must configure the role mappers.
Procedure
- Open your Data Grid configuration for editing.
- Declare the role mapper as part of the security authorization in the Cache Manager configuration.
- Save the changes to your configuration.
Role mapper configuration
XML
JSON
YAML
infinispan:
cacheContainer:
security:
authorization:
commonNameRoleMapper: ~
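A hedged XML equivalent of the YAML above, assuming the same Cache Manager security block:
<cache-container>
   <security>
      <authorization>
         <common-name-role-mapper/>
      </authorization>
   </security>
</cache-container>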
8.1.4. Configuring the cluster role and permission mappers
The cluster role mapper maintains a dynamic mapping between principals and roles. The cluster permission mapper maintains a dynamic set of role definitions. In both cases, the mappings are stored in the cluster registry and can be manipulated at runtime using either the CLI or the REST API.
Prerequisites
-
Have
ADMIN
permissions for Data Grid. - Start the Data Grid CLI.
- Connect to a running Data Grid cluster.
8.1.4.1. Creating new roles
Create new roles and set the permissions.
Procedure
Create roles with the
user roles create
command, for example:
user roles create --permissions=ALL_READ,ALL_WRITE simple
Verification
List roles that you grant to users with the user roles ls
command.
user roles ls
["observer","application","admin","monitor","simple","deployer"]
Describe roles with the user roles describe
command.
user roles describe simple
{
"name" : "simple",
"permissions" : [ "ALL_READ","ALL_WRITE" ]
}
8.1.4.2. Granting roles to users
Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources.
Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions.
Prerequisites
-
Have
ADMIN
permissions for Data Grid. - Create Data Grid users.
Procedure
- Create a CLI connection to Data Grid.
Assign roles to users with the
user roles grant
command, for example:
user roles grant --roles=deployer katie
Verification
List roles that you grant to users with the user roles ls
command.
user roles ls katie
["deployer"]
8.1.4.3. Cluster role mapper name rewriters
By default, the mapping is performed using a strict string equivalence between principal names and roles. It is possible to configure the cluster role mapper to apply a transformation to the principal name before performing a lookup.
Procedure
- Open your Data Grid configuration for editing.
- Specify a name rewriter for the cluster role mapper as part of the security authorization in the Cache Manager configuration.
- Save the changes to your configuration.
Principal names may have different forms, depending on the security realm type:
- Properties and Token realms may return simple strings
- Trust and LDAP realms may return X.500-style distinguished names
-
Kerberos realms may return
user@domain
-style names
Names can be normalized to a common form using one of the following transformers:
8.1.4.3.1. Case Principal Transformer
XML
JSON
YAML
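A hedged sketch, assuming the cluster-role-mapper element accepts a nested name-rewriter like the aggregate realm examples shown earlier:
<cache-container>
   <security>
      <authorization>
         <cluster-role-mapper>
            <name-rewriter>
               <case-principal-transformer uppercase="false"/>
            </name-rewriter>
         </cluster-role-mapper>
      </authorization>
   </security>
</cache-container>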
8.1.4.3.2. Regex Principal Transformer
XML
JSON
YAML
8.2. Configuring caches with security authorization
Add security authorization to caches to enforce role-based access control (RBAC). This requires Data Grid users to have a role with a sufficient level of permission to perform cache operations.
Prerequisites
- Create Data Grid users and either grant them roles or assign them to groups.
Procedure
- Open your Data Grid configuration for editing.
-
Add a
security
section to the configuration. Specify roles that users must have to perform cache operations with the
authorization
element.
You can implicitly add all roles defined in the Cache Manager or explicitly define a subset of roles.
- Save the changes to your configuration.
Implicit role configuration
The following configuration implicitly adds every role defined in the Cache Manager:
XML
<distributed-cache>
<security>
<authorization/>
</security>
</distributed-cache>
JSON
YAML
distributedCache:
security:
authorization:
enabled: true
Explicit role configuration
The following configuration explicitly adds a subset of roles defined in the Cache Manager. In this case Data Grid denies cache operations for any users that do not have one of the configured roles.
XML
<distributed-cache>
<security>
<authorization roles="admin supervisor"/>
</security>
</distributed-cache>
JSON
YAML
distributedCache:
security:
authorization:
enabled: true
roles: ["admin","supervisor"]
Chapter 9. Enabling and configuring Data Grid statistics and JMX monitoring
Data Grid can provide Cache Manager and cache statistics as well as export JMX MBeans.
9.1. Enabling statistics in remote caches
Data Grid Server automatically enables statistics for the default Cache Manager. However, you must explicitly enable statistics for your caches.
Procedure
- Open your Data Grid configuration for editing.
-
Add the
statistics
attribute or field and specifytrue
as the value. - Save and close your Data Grid configuration.
Remote cache statistics
XML
<distributed-cache statistics="true" />
JSON
{
"distributed-cache": {
"statistics": "true"
}
}
YAML
distributedCache:
statistics: true
9.2. Enabling Hot Rod client statistics
Hot Rod Java clients can provide statistics that include remote cache and near-cache hits and misses as well as connection pool usage.
Procedure
- Open your Hot Rod Java client configuration for editing.
-
Set
true
as the value for thestatistics
property or invoke thestatistics().enable()
methods. -
Export JMX MBeans for your Hot Rod client with the
jmx
andjmx_domain
properties or invoke thejmxEnable()
andjmxDomain()
methods. - Save and close your client configuration.
Hot Rod Java client statistics
ConfigurationBuilder
hotrod-client.properties
infinispan.client.hotrod.statistics = true
infinispan.client.hotrod.jmx = true
infinispan.client.hotrod.jmx_domain = my.domain.org
9.3. Configuring Data Grid metrics
Data Grid generates metrics that are compatible with any monitoring system.
- Gauges provide values such as the average number of nanoseconds for write operations or JVM uptime.
- Histograms provide details about operation execution times such as read, write, and remove times.
By default, Data Grid generates gauges when you enable statistics but you can also configure it to generate histograms.
Data Grid metrics are provided at the vendor
scope. Metrics related to the JVM are provided in the base
scope.
Procedure
- Open your Data Grid configuration for editing.
-
Add the
metrics
element or object to the cache container. -
Enable or disable gauges with the
gauges
attribute or field. -
Enable or disable histograms with the
histograms
attribute or field. - Save and close your client configuration.
Metrics configuration
XML
JSON
YAML
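A minimal hedged sketch, assuming statistics are enabled on the cache container:
<cache-container statistics="true">
   <metrics gauges="true" histograms="true"/>
</cache-container>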
Verification
Data Grid Server exposes statistics through the metrics
endpoint that you can collect with monitoring tools such as Prometheus. To verify that statistics are exported to the metrics
endpoint, you can do the following:
Prometheus format
curl -v http://localhost:11222/metrics \
--digest -u username:password
OpenMetrics format
curl -v http://localhost:11222/metrics \
--digest -u username:password \
-H "Accept: application/openmetrics-text"
Data Grid no longer provides metrics in MicroProfile JSON format.
9.4. Registering JMX MBeans
Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics; otherwise, Data Grid provides 0
values for all statistic attributes in JMX MBeans.
Use JMX MBeans for collecting statistics only when Data Grid is embedded in applications and not with a remote Data Grid server.
When you use JMX MBeans for collecting statistics from a remote Data Grid server, the data received from JMX MBeans might differ from the data received from other APIs such as REST. In such cases the data received from the other APIs is more accurate.
Procedure
- Open your Data Grid configuration for editing.
-
Add the
jmx
element or object to the cache container and specifytrue
as the value for theenabled
attribute or field. -
Add the
domain
attribute or field and specify the domain where JMX MBeans are exposed, if required. - Save and close your client configuration.
JMX configuration
XML
JSON
YAML
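A minimal hedged sketch, assuming a placeholder JMX domain:
<cache-container statistics="true">
   <jmx enabled="true" domain="example.com"/>
</cache-container>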
9.4.1. Enabling JMX remote ports
Provide unique remote JMX ports to expose Data Grid MBeans through connections in JMXServiceURL format.
Data Grid Server does not expose JMX remotely via the single port endpoint. If you want to remotely access Data Grid Server via JMX you must enable a remote port.
You can enable remote JMX ports using one of the following approaches:
- Enable remote JMX ports that require authentication to one of the Data Grid Server security realms.
- Enable remote JMX ports manually using the standard Java management configuration options.
Prerequisites
-
For remote JMX with authentication, define JMX specific user roles using the default security realm. Users must have
controlRole
with read/write access or themonitorRole
with read-only access to access any JMX resources. Data Grid automatically maps globalADMIN
andMONITOR
permissions to the JMXcontrolRole
andmonitorRole
roles.
Procedure
Start Data Grid Server with a remote JMX port enabled using one of the following ways:
Enable remote JMX through port
9999
.bin/server.sh --jmx 9999
Warning: Using remote JMX with SSL disabled is not intended for production environments.
Pass the following system properties to Data Grid Server at startup.
bin/server.sh -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Warning: Enabling remote JMX with no authentication or SSL is not secure and not recommended in any environment. Disabling authentication and SSL allows unauthorized users to connect to your server and access the data hosted there.
9.4.2. Data Grid MBeans
Data Grid exposes JMX MBeans that represent manageable resources.
org.infinispan:type=Cache
- Attributes and operations available for cache instances.
org.infinispan:type=CacheManager
- Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics.
For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation.
9.4.3. Registering MBeans in custom MBean servers
Data Grid includes an MBeanServerLookup
interface that you can use to register MBeans in custom MBeanServer instances.
Prerequisites
-
Create an implementation of
MBeanServerLookup
so that thegetMBeanServer()
method returns the custom MBeanServer instance. - Configure Data Grid to register JMX MBeans.
Procedure
- Open your Data Grid configuration for editing.
-
Add the
mbean-server-lookup
attribute or field to the JMX configuration for the Cache Manager. -
Specify fully qualified name (FQN) of your
MBeanServerLookup
implementation. - Save and close your client configuration.
JMX MBean server lookup configuration
XML
JSON
YAML
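A hedged sketch, assuming a hypothetical implementation class com.example.MyMBeanServerLookup:
<cache-container>
   <jmx enabled="true" mbean-server-lookup="com.example.MyMBeanServerLookup"/>
</cache-container>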
9.5. Exporting metrics during a state transfer operation
You can export time metrics for clustered caches that Data Grid redistributes across nodes.
A state transfer operation occurs when a clustered cache topology changes, such as a node joining or leaving a cluster. During a state transfer operation, Data Grid exposes state transfer attributes as properties and exports metrics from each cache, so that you can determine a cache’s status.
You cannot perform a state transfer operation in invalidation mode.
Data Grid generates time metrics that are compatible with the REST API and the JMX API.
Prerequisites
- Configure Data Grid metrics.
- Enable metrics for your cache type, such as embedded cache or remote cache.
- Initiate a state transfer operation by changing your clustered cache topology.
Procedure
Choose one of the following methods:
- Configure Data Grid to use the REST API to collect metrics.
- Configure Data Grid to use the JMX API to collect metrics.
9.6. Monitoring the status of cross-site replication
Monitor the site status of your backup locations to detect interruptions in the communication between the sites. When a remote site status changes to offline
, Data Grid stops replicating your data to the backup location. Your data becomes out of sync and you must fix the inconsistencies before bringing the clusters back online.
Monitoring cross-site events is necessary for early problem detection. Use one of the following monitoring strategies:
- Monitoring cross-site replication with the REST API
- Monitoring cross-site replication with the Prometheus metrics or any other monitoring system
Monitoring cross-site replication with the REST API
Monitor the status of cross-site replication for all caches using the REST endpoint. You can implement a custom script to poll the REST endpoint or use the following example.
Prerequisites
- Enable cross-site replication.
Procedure
Implement a script to poll the REST endpoint.
The following example demonstrates how you can use a Python script to poll the site status every five seconds.
When a site status changes from online
to offline
or vice-versa, the function on_event
is invoked.
If you want to use this script, you must specify the following variables:
-
USERNAME
andPASSWORD
: The username and password of Data Grid user with permission to access the REST endpoint. -
POLL_INTERVAL_SEC
: The number of seconds between polls. -
SERVERS
: The list of Data Grid Servers at this site. The script only requires a single valid response but the list is provided to allow fail over. -
REMOTE_SITES
: The list of remote sites to monitor on these servers. -
CACHES
: The list of cache names to monitor.
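A hedged sketch of such a polling script, assuming Data Grid's REST endpoint GET /rest/v2/caches/<cache>/x-site/backups/ returns a JSON map of site names to status objects; verify the path and response shape against the REST API reference for your server version:
import time

import requests
from requests.auth import HTTPDigestAuth

USERNAME = "admin"            # Data Grid user with permission to access the REST endpoint
PASSWORD = "changeme"
POLL_INTERVAL_SEC = 5
SERVERS = ["127.0.0.1:11222"]  # Data Grid Servers at this site
REMOTE_SITES = ["NYC"]         # remote sites to monitor
CACHES = ["work", "sessions"]  # cache names to monitor


def on_event(cache, site, old_status, new_status):
    # Replace with your alerting logic (email, webhook, and so on)
    print(f"{cache}: site {site} changed from {old_status} to {new_status}")


def poll_site_status(server, cache):
    # Assumed endpoint and response shape: {"NYC": {"status": "online"}, ...}
    url = f"http://{server}/rest/v2/caches/{cache}/x-site/backups/"
    response = requests.get(url, auth=HTTPDigestAuth(USERNAME, PASSWORD), timeout=5)
    response.raise_for_status()
    return response.json()


last_status = {}
while True:
    for cache in CACHES:
        for server in SERVERS:
            try:
                body = poll_site_status(server, cache)
            except requests.RequestException:
                continue  # fail over to the next server in the list
            for site in REMOTE_SITES:
                status = body.get(site, {}).get("status", "unknown")
                key = (cache, site)
                if key in last_status and last_status[key] != status:
                    on_event(cache, site, last_status[key], status)
                last_status[key] = status
            break  # a single valid response is enough
    time.sleep(POLL_INTERVAL_SEC)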
Monitoring cross-site replication with the Prometheus metrics
Prometheus, and other monitoring systems, let you configure alerts to detect when a site status changes to offline
.
Monitoring cross-site latency metrics can help you to discover potential issues.
Prerequisites
- Enable cross-site replication.
Procedure
- Configure Data Grid metrics.
Configure alerting rules using the Prometheus metrics format.
-
For the site status, use
1
foronline
and0
foroffline
. For the
expr
field, use the following format: vendor_cache_manager_default_cache_<cache name>_x_site_admin_<site name>_status.
In the following example, Prometheus alerts you when the NYC site gets
offline
for cache namedwork
orsessions
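A hedged alerting-rule sketch in Prometheus rule format, assuming cache work and site nyc with the metric name format described above; the label casing may differ in your deployment:
groups:
- name: xsite-status
  rules:
  - alert: XSiteOffline
    expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0
    for: 1m
    annotations:
      description: "The NYC site is offline for cache work."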
The following image shows an alert that the NYC site is
offline
for cachework
.Figure 9.1. Prometheus Alert
Chapter 10. Enabling and configuring Data Grid OpenTelemetry tracing
Data Grid generates tracing spans compatible with the OpenTelemetry standard, allowing you to export, visualize, and analyze tracing data related to the most important cache operations.
10.1. Configuring Data Grid tracing
Configure OpenTelemetry tracing to enable monitoring and tracing of cache operations.
Procedure
- Open your Data Grid configuration for editing.
-
Add the
tracing
element or object to the cache container. -
Define the endpoint URL of the OpenTelemetry collector with the
collector_endpoint
attribute or field. This attribute is mandatory to enable tracing. Port 4318 is typically the standard http/protobuf OTLP port.
Enable or disable tracing globally with the
enable
attribute or field. -
Enable or disable security event tracing with the
security
attribute or field. -
Optionally change the tracing exporter protocol with the
exporter_protocol
attribute or field. By default, it isotlp
(OpenTelemetry protocol). -
Optionally change the tracing service name associated with the generated tracing span with the
service_name
attribute or field. By default, it isinfinispan-server
. - Save and close your client configuration.
Next steps
To apply any global tracing configuration changes, stop the server and repeat the procedure.
Tracing configuration
Data Grid applies the tracing configuration globally to all caches.
XML
JSON
YAML
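A hedged XML sketch, assuming the attribute names mirror the fields described in the procedure:
<cache-container>
   <tracing collector-endpoint="http://localhost:4318"
            enabled="true"
            exporter-protocol="OTLP"
            service-name="infinispan-server"
            security="false"/>
</cache-container>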
10.1.1. Further Tracing Options
To configure further tracing options, pass system properties or set environment variables to Data Grid Server at startup. These directly configure the OpenTelemetry SDK Autoconfigure mechanism that Data Grid Server uses to set up OpenTelemetry tracing.
Procedure
Pass the system properties to Data Grid Server at startup.
Use
-D<property-name>=<property-value>
arguments like in the following example:
bin/server.sh -Dotel.exporter.otlp.timeout=10000
Tracing data format
The Data Grid Server, by default, exports tracing data using the OTLP http/protobuf
protocol.
tracing.properties
otel.exporter.otlp.protocol = http/protobuf
To use a different protocol, you must copy JAR files or dependencies to the $RHDG_HOME/server/lib
directory of your Data Grid Server installation.
10.2. Configuring tracing at cache level
Once tracing is configured at server level, it is automatically enabled by default for all caches. Cache-level tracing configuration, on the other hand, allows you to enable or disable tracing for individual caches at runtime.
Tracing categories
Several categories can be traced:
- Container. All the main cache operations, such as replace, put, clear, getForReplace, remove, and size, with the exception of get operations.
- Cluster. Operations that are replicated to another node in the same cluster.
- X-Site. Operations that are replicated to another external site.
- Persistence. All the operations involving persistence via a cache store or cache loader.
Each category can be enabled or disabled at start time or at runtime by listing it in the categories list attribute. By default, only the container category is enabled.
There is also the Security category, which traces security audit events. This category is configured globally rather than only at cache level, because its events can have different scopes (cache, container, or server).
Enable/disable tracing for a given cache
XML
<replicated-cache>
<tracing enabled="true" categories="container cluster x-site persistence" />
</replicated-cache>
JSON
YAML
Enable/disable tracing at runtime
The cache-level tracing attribute enabled
is mutable; you can change it at runtime without restarting the Data Grid cluster.
You can change a mutable attribute with either the Hot Rod or REST API.
HotRod
remoteCacheManager.administration()
.updateConfigurationAttribute(CACHE_A, "tracing.enabled", "false");
REST
restClient.cache(CACHE_A)
.updateConfigurationAttribute("tracing.enabled", "false");
Chapter 11. Adding managed datasources to Data Grid Server
Optimize connection pooling and performance for JDBC database connections by adding managed datasources to your Data Grid Server configuration.
11.1. Configuring managed datasources
Create managed datasources as part of your Data Grid Server configuration to optimize connection pooling and performance for JDBC database connections. You can then specify the JNDI name of the managed datasources in your caches, which centralizes JDBC connection configuration for your deployment.
Prerequisites
Copy database drivers to the
server/lib
directory in your Data Grid Server installation.
Tip: Use the
install
command with the Data Grid Command Line Interface (CLI) to download the required drivers to theserver/lib
directory, for example:
install org.postgresql:postgresql:42.4.3
Procedure
- Open your Data Grid Server configuration for editing.
-
Add a new
data-source
to thedata-sources
section. -
Uniquely identify the datasource with the
name
attribute or field. Specify a JNDI name for the datasource with the
jndi-name
attribute or field.
Tip: You use the JNDI name to specify the datasource in your JDBC cache store configuration.
-
Set
true
as the value of thestatistics
attribute or field to enable statistics for the datasource through the/metrics
endpoint. Provide JDBC driver details that define how to connect to the datasource in the
connection-factory
section.-
Specify the name of the database driver with the
driver
attribute or field. -
Specify the JDBC connection URL with the
url
attribute or field. -
Specify credentials with the
username
andpassword
attributes or fields. - Provide any other configuration as appropriate.
-
Specify the name of the database driver with the
-
Define how Data Grid Server nodes pool and reuse connections with connection pool tuning properties in the
connection-pool
section. - Save the changes to your configuration.
Verification
Use the Data Grid Command Line Interface (CLI) to test the datasource connection, as follows:
Start a CLI session.
bin/cli.sh
List all datasources and confirm the one you created is available.
server datasource ls
Test a datasource connection.
server datasource test my-datasource
Managed datasource configuration
XML
JSON
YAML
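A hedged sketch of a complete managed datasource, assuming a PostgreSQL database; the connection details and pool sizes are placeholders:
<data-sources>
   <data-source name="ds" jndi-name="jdbc/postgres" statistics="true">
      <connection-factory driver="org.postgresql.Driver"
                          url="jdbc:postgresql://localhost:5432/mydb"
                          username="postgres"
                          password="changeme"/>
      <connection-pool initial-size="1" max-size="10" min-size="3"
                       background-validation="1000" idle-removal="1"
                       blocking-timeout="1000" leak-detection="10000"/>
   </data-source>
</data-sources>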
11.2. Configuring caches with JNDI names
When you add a managed datasource to Data Grid Server you can add the JNDI name to a JDBC-based cache store configuration.
Prerequisites
- Configure Data Grid Server with a managed datasource.
Procedure
- Open your cache configuration for editing.
-
Add the
data-source
element or field to the JDBC-based cache store configuration. -
Specify the JNDI name of the managed datasource as the value of the
jndi-url
attribute. - Configure the JDBC-based cache stores as appropriate.
- Save the changes to your configuration.
JNDI name in cache configuration
XML
JSON
YAML
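A hedged sketch of a JDBC string-keyed cache store that resolves its connection through JNDI; the store and table element names are assumptions that vary by store type and schema version:
<distributed-cache>
   <persistence>
      <string-keyed-jdbc-store>
         <!-- Resolve the JDBC connection from the managed datasource by JNDI name -->
         <data-source jndi-url="jdbc/postgres"/>
         <string-keyed-table prefix="ISPN">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BYTEA"/>
         </string-keyed-table>
      </string-keyed-jdbc-store>
   </persistence>
</distributed-cache>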
11.3. Connection pool tuning properties
You can tune JDBC connection pools for managed datasources in your Data Grid Server configuration.
Property | Description |
---|---|
initial-size | Initial number of connections the pool should hold. |
max-size | Maximum number of connections in the pool. |
min-size | Minimum number of connections the pool should hold. |
blocking-timeout | Maximum time in milliseconds to block while waiting for a connection before throwing an exception. This will never throw an exception if creating a new connection takes an inordinately long period of time. Default is 0, meaning a call waits indefinitely. |
background-validation | Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled. |
validate-on-acquisition | Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled. |
idle-removal | Time in minutes a connection has to be idle before it can be removed. |
leak-detection | Time in milliseconds a connection has to be held before a leak warning. |
Chapter 12. Setting up Data Grid cluster transport
Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer.
12.1. Default JGroups stacks
Data Grid provides default JGroups stack files, default-jgroups-*.xml
, in the default-configs
directory inside the infinispan-core-14.0.21.Final-redhat-00001.jar
file.
You can find this JAR file in the $RHDG_HOME/lib
directory.
File name | Stack name | Description |
---|---|---|
default-jgroups-udp.xml | udp | Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets. |
default-jgroups-tcp.xml | tcp | Uses TCP for transport and the MPING protocol for discovery, which uses UDP multicast. |
default-jgroups-kubernetes.xml | kubernetes | Uses TCP for transport and DNS_PING for discovery. |
default-jgroups-ec2.xml | ec2 | Uses TCP for transport and aws.S3_PING for discovery. |
default-jgroups-google.xml | google | Uses TCP for transport and GOOGLE_PING2 for discovery. |
default-jgroups-azure.xml | azure | Uses TCP for transport and AZURE_PING for discovery. |
default-jgroups-tunnel.xml | tunnel | Uses TUNNEL for transport with an external Gossip router. |
12.2. Cluster discovery protocols
Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters.
There are two types of discovery mechanisms that Data Grid can use:
- Generic discovery protocols that work on most networks and do not rely on external services.
-
Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters.
For instance the DNS_PING protocol performs discovery through DNS server records.
Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose.
12.2.1. PING
PING, or UDPPING, is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol.
When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node and its own address (C=coordinator’s address, A=own address). If no nodes respond to the PING request, the joining node becomes the coordinator node in a new cluster.
PING configuration example
<PING num_discovery_runs="3"/>
12.2.2. TCPPING
TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members.
With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically.
TCPPING configuration example
<TCP bind_port="7800" />
<TCPPING timeout="3000"
initial_hosts="${jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}"
port_range="0"
num_initial_members="3"/>
12.2.3. MPING
MPING uses IP multicast to discover the initial membership of Data Grid clusters.
You can use MPING to replace TCPPING discovery with TCP stacks and use multicasting for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks.
MPING configuration example
<MPING mcast_addr="${jgroups.mcast_addr:239.6.7.8}"
mcast_port="${jgroups.mcast_port:46655}"
num_discovery_runs="3"
ip_ttl="${jgroups.udp.ip_ttl:2}"/>
12.2.4. TCPGOSSIP
Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes.
You inject the address (IP:PORT) of the Gossip router into Data Grid nodes as follows:
- Pass the address as a system property to the JVM; for example, -DGossipRouterAddress="10.10.2.4[12001]".
- Reference that system property in the JGroups configuration file.
Gossip router configuration example
<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000"
initial_hosts="${GossipRouterAddress}"
num_initial_members="3" />
12.2.5. JDBC_PING2
JDBC_PING2 uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection.
Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database.
JDBC_PING2 configuration example
<JDBC_PING2 connection_url="jdbc:mysql://localhost:3306/database_name"
            connection_username="user"
            connection_password="password"
            connection_driver="com.mysql.jdbc.Driver"/>
Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING2.
12.2.5.1. Using a server datasource for JDBC_PING2 discovery
Add a managed datasource to a Data Grid Server and use it to provide database connections for the cluster transport JDBC_PING2 discovery protocol.
Prerequisites
- Install a Data Grid Server cluster.
Procedure
- Deploy a JDBC driver JAR to your Data Grid Server server/lib directory.
- Create a datasource for your database.
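For example, a PostgreSQL datasource might look like the following. This is a minimal sketch: the datasource name, JNDI name, connection details, and schema version are illustrative and should match your environment.
<server xmlns="urn:infinispan:server:14.0">
  <data-sources>
    <!-- Defines the datasource name and the JNDI name that other
         configuration, such as JDBC cache stores, can reference. -->
    <data-source name="ds" jndi-name="jdbc/postgres" statistics="true">
      <!-- Specifies how the server creates JDBC connections. -->
      <connection-factory driver="org.postgresql.Driver"
                          url="jdbc:postgresql://localhost:5432/postgres"
                          username="postgres" password="changeme"/>
      <!-- Tunes the connection pool. -->
      <connection-pool initial-size="1" min-size="3" max-size="10"
                       background-validation="1000" idle-removal="1"
                       blocking-timeout="1000" leak-detection="10000"/>
    </data-source>
  </data-sources>
</server>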
- Create a JGroups stack that uses the JDBC_PING2 protocol for discovery.
- Configure cluster transport to use the datasource by specifying the name of the datasource with the server:data-source attribute, as in the sketch below.
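A sketch of both steps, assuming the stack replaces the default MPING discovery of the TCP stack and references the datasource name ds from the previous step:
<infinispan>
  <jgroups>
    <!-- Extends the default TCP stack and swaps MPING for JDBC_PING2. -->
    <stack name="jdbc" extends="tcp">
      <JDBC_PING2 stack.combine="REPLACE" stack.position="MPING"/>
    </stack>
  </jgroups>
  <cache-container>
    <!-- References the managed datasource for discovery connections. -->
    <transport stack="jdbc" server:data-source="ds"/>
  </cache-container>
</infinispan>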
12.2.6. DNS_PING
JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.
DNS_PING configuration example
<dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" />
12.2.7. Cloud discovery protocols
Data Grid includes default JGroups stacks that use discovery protocol implementations that are specific to cloud providers.
Discovery protocol | Default stack file | Artifact | Version
---|---|---|---
aws.S3_PING | default-jgroups-ec2.xml | |
GOOGLE_PING2 | default-jgroups-google.xml | |
azure.AZURE_PING | default-jgroups-azure.xml | |
Providing dependencies for cloud discovery protocols
To use aws.S3_PING
, GOOGLE_PING2
, or azure.AZURE_PING
cloud discovery protocols, you need to provide dependent libraries to Data Grid.
Procedure
- Download the artifact JAR file and all dependencies.
- Add the artifact JAR file and all dependencies to the $RHDG_HOME/server/lib directory of your Data Grid Server installation.
For more details, see Downloading artifacts for JGroups cloud discovery protocols for Data Grid Server (Red Hat knowledgebase article).
You can then configure the cloud discovery protocol as part of a JGroups stack file or with system properties.
12.3. Using the default JGroups stacks
Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels.
Data Grid provides preconfigured JGroups stacks for UDP
and TCP
protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements.
Procedure
Do one of the following to use one of the default JGroups stacks:
- Use the stack attribute in your infinispan.xml file, as in the sketch below.
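A minimal sketch, assuming the default udp stack and a cache container named by the standard infinispan.cluster.name property:
<infinispan>
  <cache-container default-cache="replicatedCache">
    <!-- Uses the default UDP stack for cluster transport. -->
    <transport cluster="${infinispan.cluster.name}" stack="udp"/>
  </cache-container>
</infinispan>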
- Use the cluster-stack argument to set the JGroups stack file when Data Grid Server starts:
bin/server.sh --cluster-stack=udp
Verification
Data Grid logs the following message to indicate which stack it uses:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp
12.4. Customizing JGroups stacks
Adjust and tune properties to create a cluster transport configuration that works for your network requirements.
Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties.
Procedure
- Create a new JGroups stack declaration in your infinispan.xml file.
- Add the extends attribute and specify a JGroups stack to inherit properties from.
- Use the stack.combine attribute to modify properties for protocols configured in the inherited stack.
- Use the stack.position attribute to define the location for your custom stack.
- Specify the stack name as the value for the stack attribute in the transport configuration.
For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack, as in the sketch below.
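A sketch of such a stack. The Gossip router address, keystore details, and the protocol positions (which assume the default TCP stack layout) are illustrative:
<infinispan>
  <jgroups>
    <!-- Creates a custom stack based on the default TCP stack. -->
    <stack name="my-stack" extends="tcp">
      <!-- Uses TCPGOSSIP discovery instead of MPING. -->
      <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                 stack.combine="REPLACE" stack.position="MPING"/>
      <!-- Removes the FD_SOCK2 protocol from the inherited stack. -->
      <FD_SOCK2 stack.combine="REMOVE"/>
      <!-- Adds SYM_ENCRYPT so that messages are encrypted with a shared key. -->
      <SYM_ENCRYPT keystore_name="mykeystore.p12" keystore_type="PKCS12"
                   store_password="changeit" alias="myKey"
                   stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2"/>
    </stack>
  </jgroups>
  <cache-container default-cache="replicatedCache">
    <transport stack="my-stack"/>
  </cache-container>
</infinispan>
Check Data Grid logs to ensure it uses the stack.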
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack
Reference
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
12.4.1. Inheritance attributes
When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending.
- stack.position specifies protocols to modify.
- stack.combine uses the following values to extend JGroups stacks:
Value | Description
---|---
COMBINE | Overrides protocol properties.
REPLACE | Replaces protocols.
INSERT_AFTER | Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point. Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as NAKACK2 after the SYM_ENCRYPT or ASYM_ENCRYPT protocol so that NAKACK2 is secured.
INSERT_BEFORE | Inserts a protocol into the stack before another protocol. Affects the protocol that you specify as the insertion point.
REMOVE | Removes protocols from the stack.
12.5. Using JGroups system properties
Pass system properties to Data Grid at startup to tune cluster transport.
Procedure
- Use -D<property-name>=<property-value> arguments to set JGroups system properties as required.
For example, set a custom bind port and IP address as follows:
bin/server.sh -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0
12.5.1. Cluster transport properties
Use the following properties to customize JGroups cluster transport.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.bind.address | Bind address for cluster transport. | SITE_LOCAL | Optional
jgroups.bind.port | Bind port for the socket. | 7800 | Optional
jgroups.mcast_addr | IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. | 228.6.7.8 | Optional
jgroups.mcast_port | Port for the multicast socket. | 46655 | Optional
jgroups.ip_ttl | Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | 2 | Optional
jgroups.thread_pool.min_threads | Minimum number of threads for the thread pool. | 0 | Optional
jgroups.thread_pool.max_threads | Maximum number of threads for the thread pool. | 200 | Optional
jgroups.join_timeout | Maximum number of milliseconds to wait for join requests to succeed. | 2000 | Optional
jgroups.thread_dumps_threshold | Number of times a thread pool needs to be full before a thread dump is logged. | 10000 | Optional
jgroups.fd.port-offset | Offset from jgroups.bind.port for the FD_SOCK socket. | 50000 | Optional
jgroups.frag_size | Maximum number of bytes in a message. Messages larger than that are fragmented. | 60000 | Optional
jgroups.diag.enabled | Enables JGroups diagnostic probing. | false | Optional
12.5.2. System properties for cloud discovery protocols
Use the following properties to configure JGroups discovery protocols for hosted platforms.
12.5.2.1. Amazon EC2
System properties for configuring aws.S3_PING.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.s3.region_name | Name of the Amazon S3 region. | No default value. | Optional
jgroups.s3.bucket_name | Name of the Amazon S3 bucket. The name must exist and be unique. | No default value. | Optional
12.5.2.2. Google Cloud Platform
System properties for configuring GOOGLE_PING2.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.google.bucket_name | Name of the Google Compute Engine bucket. The name must exist and be unique. | No default value. | Required
12.5.2.3. Azure
System properties for azure.AZURE_PING.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.azure.storage_account_name | Name of the Azure storage account. The name must exist and be unique. | No default value. | Required
jgroups.azure.storage_access_key | Name of the Azure storage access key. | No default value. | Required
jgroups.azure.container | Valid DNS name of the container that stores ping information. | No default value. | Required
12.5.2.4. OpenShift
System properties for DNS_PING.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.dns.query | Sets the DNS record that returns cluster members. | No default value. | Required
jgroups.dns.record | Sets the DNS record type. | A | Optional
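For example, you might start Data Grid Server with the kubernetes stack and the DNS query for your cluster service. This is an illustrative sketch; the service name is an assumption:
bin/server.sh --cluster-stack=kubernetes -Djgroups.dns.query=myservice.myproject.svc.cluster.local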
12.6. Using inline JGroups stacks
You can insert complete JGroups stack definitions into infinispan.xml
files.
Procedure
- Embed a custom JGroups stack declaration in your infinispan.xml file, as in the sketch below.
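A sketch of a complete inline stack. The protocol list and attribute values below are illustrative, modeled on a TCP-based stack; tune them for your network:
<infinispan>
  <jgroups>
    <!-- Contains one or more complete stack definitions. -->
    <stack name="prod">
      <TCP bind_port="7800" port_range="30" recv_buf_size="20000000" send_buf_size="640000"/>
      <RED/>
      <MPING break_on_coord_rsp="true"
             mcast_addr="${jgroups.mping.mcast_addr:239.2.4.6}"
             mcast_port="${jgroups.mping.mcast_port:43366}"
             num_discovery_runs="3"
             ip_ttl="${jgroups.udp.ip_ttl:2}"/>
      <MERGE3/>
      <FD_SOCK2/>
      <FD_ALL3 timeout="3000" interval="1000"/>
      <VERIFY_SUSPECT2 timeout="1000"/>
      <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="200"/>
      <UNICAST3 xmit_interval="200"/>
      <pbcast.STABLE desired_avg_gossip="5000" max_bytes="1M"/>
      <pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:2000}"/>
      <UFC max_credits="4m" min_threshold="0.40"/>
      <MFC max_credits="4m" min_threshold="0.40"/>
      <FRAG4/>
    </stack>
  </jgroups>
  <cache-container default-cache="replicatedCache">
    <!-- References the inline stack by name. -->
    <transport stack="prod"/>
  </cache-container>
</infinispan>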
12.7. Using external JGroups stacks
Reference external files that define custom JGroups stacks in infinispan.xml
files.
Procedure
- Add custom JGroups stack files to the $RHDG_HOME/server/conf directory. Alternatively, you can specify an absolute path when you declare the external stack file.
- Reference the external stack file with the stack-file element, as in the sketch below.
12.8. Encrypting cluster transport
Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join.
12.8.1. Securing cluster transport with TLS identities
Add SSL/TLS identities to a Data Grid Server security realm and use them to secure cluster transport. Nodes in the Data Grid Server cluster then exchange SSL/TLS certificates to encrypt JGroups messages, including RELAY messages if you configure cross-site replication.
Prerequisites
- Install a Data Grid Server cluster.
Procedure
- Create a TLS keystore that contains a single certificate to identify Data Grid Server. You can also use a PEM file if it contains a private key in PKCS#1 or PKCS#8 format, a certificate, and has an empty password: password="".
Note: If the certificate in the keystore is not signed by a public certificate authority (CA), then you must also create a trust store that contains either the signing certificate or the public key.
- Add the keystore to the $RHDG_HOME/server/conf directory.
- Add the keystore to a new security realm in your Data Grid Server configuration, as in the sketch below.
Important: You should create dedicated keystores and security realms so that Data Grid Server endpoints do not use the same security realm as cluster transport.
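A sketch of such a realm. The keystore file name, password, alias, and schema version are illustrative:
<server xmlns="urn:infinispan:server:14.0">
  <security>
    <security-realms>
      <!-- Defines a dedicated security realm for cluster transport. -->
      <security-realm name="cluster-transport">
        <server-identities>
          <ssl>
            <!-- Adds the keystore that contains the TLS identity. -->
            <keystore path="cluster-transport.pfx"
                      relative-to="infinispan.server.config.path"
                      password="secret" alias="cluster-transport"/>
          </ssl>
        </server-identities>
      </security-realm>
    </security-realms>
  </security>
</server>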
- Configure cluster transport to use the security realm by specifying the name of the security realm with the server:security-realm attribute.
<infinispan>
  <cache-container>
    <transport server:security-realm="cluster-transport"/>
  </cache-container>
</infinispan>
Verification
When you start Data Grid Server, the following log message indicates that the cluster is using the security realm for cluster transport:
[org.infinispan.SERVER] ISPN080060: SSL Transport using realm <security_realm_name>
12.8.2. JGroups encryption protocols
To secure cluster traffic, you can configure Data Grid nodes to encrypt JGroups message payloads with secret keys.
Data Grid nodes can obtain secret keys from either:
- The coordinator node (asymmetric encryption).
- A shared keystore (symmetric encryption).
Retrieving secret keys from coordinator nodes
You configure asymmetric encryption by adding the ASYM_ENCRYPT
protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys.
When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks.
Asymmetric encryption secures cluster traffic as follows:
- The first node in the Data Grid cluster, the coordinator node, generates a secret key.
- A joining node performs certificate authentication with the coordinator to mutually verify identity.
- The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
- The coordinator node encrypts the secret key with the public key and returns it to the joining node.
- The joining node decrypts and installs the secret key.
- The node joins the cluster, encrypting and decrypting messages with the secret key.
Retrieving secret keys from shared keystores
You configure symmetric encryption by adding the SYM_ENCRYPT
protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide.
- Nodes install the secret key from a keystore on the Data Grid classpath at startup.
- Nodes join clusters, encrypting and decrypting messages with the secret key.
Comparison of asymmetric and symmetric encryption
ASYM_ENCRYPT
with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT
. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.
SYM_ENCRYPT
, on the other hand, is faster than ASYM_ENCRYPT
because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT
is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.
12.8.3. Securing cluster transport with asymmetric encryption
Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages.
Procedure
- Create a keystore with certificate chains that enables Data Grid to verify node identity.
- Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
- Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example:
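A sketch of such a stack, extending the default TCP stack. The keystore name, password, and protocol positions (which assume the default TCP stack layout) are illustrative:
<infinispan>
  <jgroups>
    <stack name="encrypt-tcp" extends="tcp">
      <!-- Adds a keystore that nodes use for certificate authentication
           and secure key exchange. -->
      <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks"
                        keystore_password="changeit"
                        stack.combine="INSERT_AFTER"
                        stack.position="VERIFY_SUSPECT2"/>
      <!-- Generates and distributes secret keys via the key exchange above. -->
      <ASYM_ENCRYPT asym_keylength="2048"
                    asym_algorithm="RSA"
                    change_key_on_coord_leave="false"
                    change_key_on_leave="false"
                    use_external_key_exchange="true"
                    stack.combine="INSERT_BEFORE"
                    stack.position="pbcast.NAKACK2"/>
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"/>
  </cache-container>
</infinispan>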
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT
and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
12.8.4. Securing cluster transport with symmetric encryption
Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide.
Procedure
- Create a keystore that contains a secret key.
- Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
- Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration, as in the sketch below.
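A sketch of such a stack, extending the default TCP stack. The keystore details and protocol position are illustrative:
<infinispan>
  <jgroups>
    <stack name="encrypt-tcp" extends="tcp">
      <!-- Reads the shared secret key from the keystore on the classpath. -->
      <SYM_ENCRYPT keystore_name="myKeystore.p12"
                   keystore_type="PKCS12"
                   store_password="changeit"
                   alias="myKey"
                   stack.combine="INSERT_AFTER"
                   stack.position="VERIFY_SUSPECT2"/>
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"/>
  </cache-container>
</infinispan>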
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use SYM_ENCRYPT
and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
12.9. TCP and UDP ports for cluster traffic
Data Grid uses the following ports for cluster transport messages:
Default Port | Protocol | Description
---|---|---
7800 | TCP/UDP | JGroups cluster bind port
46655 | UDP | JGroups multicast
Cross-site replication
Data Grid uses the following ports for the JGroups RELAY2 protocol:
7900
- For Data Grid clusters running on OpenShift.
7800
- If using UDP for traffic between nodes and TCP for traffic between clusters.
7801
- If using TCP for traffic between nodes and TCP for traffic between clusters.
Chapter 13. Creating remote caches
When you create remote caches at runtime, Data Grid Server synchronizes your configuration across the cluster so that all nodes have a copy. For this reason you should always create remote caches dynamically with the following mechanisms:
- Data Grid Console
- Data Grid Command Line Interface (CLI)
- Hot Rod or HTTP clients
13.1. Default Cache Manager
Data Grid Server provides a default Cache Manager that controls the lifecycle of remote caches. Starting Data Grid Server automatically instantiates the Cache Manager so you can create and delete remote caches and other resources like Protobuf schema.
After you start Data Grid Server and add user credentials, you can view details about the Cache Manager and get cluster information from Data Grid Console.
- Open 127.0.0.1:11222 in any browser.
You can also get information about the Cache Manager through the Command Line Interface (CLI) or REST API:
- CLI
Run the describe command in the default container:
[//containers/default]> describe
- REST
- Open 127.0.0.1:11222/rest/v2/container/ in any browser.
Default Cache Manager configuration
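The XML form looks broadly like the following. This is a trimmed sketch; see $RHDG_HOME/server/conf/infinispan.xml for the full configuration shipped with your server:
<infinispan>
  <cache-container name="default" statistics="true">
    <!-- Configures cluster transport with the default stack. -->
    <transport cluster="${infinispan.cluster.name:cluster}"
               stack="${infinispan.cluster.stack:tcp}"/>
    <security>
      <!-- Enables authorization for cache operations. -->
      <authorization/>
    </security>
  </cache-container>
</infinispan>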
13.2. Creating caches with Data Grid Console
Use Data Grid Console to create remote caches in an intuitive visual interface from any web browser.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Open 127.0.0.1:11222/console/ in any browser.
in any browser. - Select Create Cache and follow the steps as Data Grid Console guides you through the process.
13.3. Creating remote caches with the Data Grid CLI
Use the Data Grid Command Line Interface (CLI) to add remote caches on Data Grid Server.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Start the CLI.
bin/cli.sh
- Run the connect command and enter your username and password when prompted.
- Use the create cache command to create remote caches. For example, create a cache named "mycache" from a file named mycache.xml as follows:
create cache --file=mycache.xml mycache
Verification
- List all remote caches with the ls command.
ls caches
mycache
- View cache configuration with the describe command.
describe caches/mycache
13.4. Creating remote caches from Hot Rod clients
Use the Data Grid Hot Rod API to create remote caches on Data Grid Server from Java, C++, .NET/C#, JS clients and more.
This procedure shows you how to use Hot Rod Java clients that create remote caches on first access. You can find code examples for other Hot Rod clients in the Data Grid Tutorials.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Invoke the remoteCache() method as part of your ConfigurationBuilder.
- Set the configuration or configuration_uri properties in the hotrod-client.properties file on your classpath.
hotrod-client.properties
infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\"another-cache\"/>
infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml
If the name of your remote cache contains the . character, you must enclose it in square brackets when using hotrod-client.properties files.
13.5. Creating remote caches with the REST API
Use the Data Grid REST API to create remote caches on Data Grid Server from any suitable HTTP client.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Invoke POST requests to /rest/v2/caches/<cache_name> with cache configuration in the payload.
Chapter 14. Running scripts and tasks on Data Grid Server
Add tasks and scripts to Data Grid Server deployments for remote execution from the Command Line Interface (CLI) and Hot Rod or REST clients. You can implement tasks as custom Java classes or define scripts in languages such as JavaScript.
14.1. Adding tasks to Data Grid Server deployments
Add your custom server task classes to Data Grid Server.
Prerequisites
- Stop Data Grid Server if it is running. Data Grid Server does not support runtime deployment of custom classes.
Procedure
- Add a META-INF/services/org.infinispan.tasks.ServerTask file that contains the fully qualified names of server tasks, for example:
example.HelloTask
- Package your server task implementation in a JAR file.
- Copy the JAR file to the $RHDG_HOME/server/lib directory of your Data Grid Server installation.
- Add your classes to the deserialization allow list in your Data Grid configuration. Alternatively set the allow list using system properties.
14.1.1. Data Grid Server tasks
Data Grid Server tasks are classes that extend the org.infinispan.tasks.ServerTask
interface and generally include the following method calls:
setTaskContext()
-
Allows access to execution context information including task parameters, cache references on which tasks are executed, and so on. In most cases, implementations store this information locally and use it when tasks are actually executed. When using
SHARED
instantiation mode, the task should use aThreadLocal
to store theTaskContext
for concurrent invocations. getName()
- Returns unique names for tasks. Clients invoke tasks with these names.
getExecutionMode()
Returns the execution mode for tasks.
- TaskExecutionMode.ONE_NODE: only the node that handles the request executes the task, although the task can still invoke clustered operations. This is the default.
- TaskExecutionMode.ALL_NODES: Data Grid uses clustered executors to run the task across all nodes. Note that tasks that invoke stream processing should use ONE_NODE, because stream processing is itself distributed to all nodes.
-
getInstantiationMode()
Returns the instantiation mode for tasks.
-
TaskInstantiationMode.SHARED
creates a single instance that is reused for every task execution on the same server. This is the default. -
TaskInstantiationMode.ISOLATED
creates a new instance for every invocation.
-
call()
-
Computes a result. This method is defined in the
java.util.concurrent.Callable
interface and is invoked with server tasks.
Server task implementations must adhere to service loader pattern requirements. For example, implementations must have a zero-argument constructor.
The following HelloTask
class implementation provides an example task that has one parameter. It also illustrates the use of a ThreadLocal
to store the TaskContext
for concurrent invocations.
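A minimal sketch of such an implementation. The method usage follows the interface described above, but the exact class shipped with Data Grid may differ; the task name hello-task is an assumption:
package example;

import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;

public class HelloTask implements ServerTask<String> {

   // Stores the context in a ThreadLocal so concurrent invocations of a
   // SHARED task instance each see their own parameters.
   private static final ThreadLocal<TaskContext> taskContext = new ThreadLocal<>();

   @Override
   public void setTaskContext(TaskContext ctx) {
      taskContext.set(ctx);
   }

   @Override
   public String call() throws Exception {
      TaskContext ctx = taskContext.get();
      // Reads the "greetee" parameter supplied by the client.
      String greetee = (String) ctx.getParameters().get().get("greetee");
      return greetee == null ? "Hello world" : "Hello " + greetee;
   }

   @Override
   public String getName() {
      return "hello-task";
   }
}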
14.2. Adding scripts to Data Grid Server deployments
Use the command line interface to add scripts to Data Grid Server.
Prerequisites
Data Grid Server stores scripts in the ___script_cache
cache. If you enable cache authorization, users must have CREATE
permissions to add to ___script_cache
.
Assign users the deployer
role at minimum if you use default authorization settings.
Procedure
Define scripts as required.
For example, create a file named multiplication.js that runs on a single Data Grid server, has two parameters, and uses JavaScript to multiply a given value:
// mode=local,language=javascript
multiplicand * multiplier
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Create a CLI connection to Data Grid.
- Use the task command to upload scripts, as in the following example:
task upload --file=multiplication.js multiplication
- Verify that your scripts are available.
ls tasks
multiplication
14.2.1. Data Grid Server scripts
Data Grid Server scripting is based on the javax.script
API and is compatible with any JVM-based ScriptEngine implementation.
Hello world
The following is a simple example that runs on a single Data Grid server, has one parameter, and uses JavaScript:
// mode=local,language=javascript,parameters=[greetee]
"Hello " + greetee
When you run the preceding script, you pass a value for the greetee
parameter and Data Grid returns "Hello ${value}"
.
14.2.1.1. Script metadata
Metadata provides additional information about scripts that Data Grid Server uses when running scripts.
Script metadata are property=value
pairs that you add to comments in the first lines of scripts, such as the following example:
// name=test, language=javascript
// mode=local, parameters=[a,b,c]
- Use comment styles that match the scripting language (//, ;;, #).
- Separate property=value pairs with commas.
- Separate values with single (') or double (") quote characters.
Property | Description
---|---
mode | Defines the execution mode and has the following values: local, where only the node that handles the request executes the script, and distributed, where Data Grid uses clustered executors to run the script across nodes.
language | Specifies the ScriptEngine that executes the script.
extension | Specifies filename extensions as an alternative method to set the ScriptEngine.
role | Specifies roles that users must have to execute scripts.
parameters | Specifies an array of valid parameter names for this script. Invocations which specify parameters not included in this list cause exceptions.
datatype | Optionally sets the MediaType (MIME type) for storing data as well as parameter and return values. This property is useful for remote clients that support particular data formats only. Currently you can set only text/plain; charset=utf-8 to use the String UTF-8 format for data.
14.2.1.2. Script bindings
Data Grid exposes internal objects as bindings for script execution.
Binding | Description
---|---
cache | Specifies the cache against which the script is run.
marshaller | Specifies the marshaller to use for serializing data to the cache.
cacheManager | Specifies the cacheManager for the cache.
scriptingManager | Specifies the instance of the script manager that runs the script. You can use this binding to run other scripts from a script.
14.2.1.3. Script parameters
Data Grid lets you pass named parameters as bindings for running scripts.
Parameters are name,value
pairs, where name
is a string and value
is any value that the marshaller can interpret.
The following example script has two parameters, multiplicand
and multiplier
. The script takes the value of multiplicand
and multiplies it with the value of multiplier
.
// mode=local,language=javascript
multiplicand * multiplier
When you run the preceding script, Data Grid responds with the result of the expression evaluation.
14.2.2. Programmatically Creating Scripts
Add scripts with the Hot Rod RemoteCache
interface as in the following example:
RemoteCache<String, String> scriptCache = cacheManager.getCache("___script_cache");
scriptCache.put("multiplication.js",
"// mode=local,language=javascript\n" +
"multiplicand * multiplier\n");
14.3. Running scripts and tasks
Use the command line interface to run tasks and scripts on Data Grid Server deployments. Alternatively you can execute scripts and tasks from Hot Rod clients.
Prerequisites
- Add scripts or tasks to Data Grid Server.
Procedure
- Create a CLI connection to Data Grid.
- Use the task command to run tasks and scripts, as in the following examples:
Execute a script named multiplier.js and specify two parameters:
task exec multiplier.js -Pmultiplicand=10 -Pmultiplier=20
200.0
Execute a task named @@cache@names to retrieve a list of all available caches:
task exec @@cache@names
["___protobuf_metadata","mycache","___script_cache"]
Programmatic execution
- Call the execute() method to run scripts with the Hot Rod RemoteCache interface, as in the following examples:
Script execution
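A sketch, assuming a connected RemoteCacheManager named cacheManager, a cache named mycache, and the multiplication.js script from the previous section (java.util.Map and java.util.HashMap imports assumed):
// Retrieves the remote cache against which the script runs.
RemoteCache<String, Integer> cache = cacheManager.getCache("mycache");
// Creates the named parameters for the script.
Map<String, Object> params = new HashMap<>();
params.put("multiplicand", 10);
params.put("multiplier", 20);
// Runs the script and receives the result of the evaluation.
Object result = cache.execute("multiplication.js", params);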
Task execution
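A sketch, assuming the hello-task server task from section 14.1.1 is deployed; the connection URI, credentials, and cache name are illustrative (java.util.Collections import assumed):
// Connects to a locally running Data Grid Server.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.uri("hotrod://admin:changeme@127.0.0.1:11222");
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
// Executes the server task with one named parameter.
RemoteCache<String, String> cache = cacheManager.getCache("mycache");
Object greeting = cache.execute("hello-task", Collections.singletonMap("greetee", "world"));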
Chapter 15. Configuring Data Grid Server logging
Data Grid Server uses Apache Log4j 2 to provide configurable logging mechanisms that capture details about the environment and record cache operations for troubleshooting purposes and root cause analysis.
15.1. Data Grid Server log files
Data Grid writes server logs to the following files in the $RHDG_HOME/server/log
directory:
server.log
-
Messages in human readable format, including boot logs that relate to the server startup.
Data Grid creates this file when you start the server. server.log.json
-
Messages in JSON format that let you parse and analyze Data Grid logs.
Data Grid creates this file when you enable theJSON-FILE
appender.
15.1.1. Configuring Data Grid Server logs
Data Grid uses Apache Log4j technology to write server log messages. You can configure server logs in the log4j2.xml
file.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change server logging as appropriate.
- Save and close log4j2.xml.
15.1.2. Log levels
Log levels indicate the nature and severity of messages.
Log level | Description
---|---
TRACE | Fine-grained debug messages, capturing the flow of individual requests through the application.
DEBUG | Messages for general debugging, not related to an individual request.
INFO | Messages about the overall progress of applications, including lifecycle events.
WARN | Events that can lead to error or degrade performance.
ERROR | Error conditions that might prevent operations or activities from being successful but do not prevent applications from running.
FATAL | Events that could cause critical service failure and application shutdown.
In addition to the levels of individual messages presented above, the configuration allows two more values: ALL
to include all messages, and OFF
to exclude all messages.
15.1.3. Data Grid logging categories
Data Grid provides categories for INFO
, WARN
, ERROR
, FATAL
level messages that organize logs by functional area.
org.infinispan.CLUSTER
- Messages specific to Data Grid clustering that include state transfer operations, rebalancing events, partitioning, and so on.
org.infinispan.CONFIG
- Messages specific to Data Grid configuration.
org.infinispan.CONTAINER
- Messages specific to the data container that include expiration and eviction operations, cache listener notifications, transactions, and so on.
org.infinispan.PERSISTENCE
- Messages specific to cache loaders and stores.
org.infinispan.SECURITY
- Messages specific to Data Grid security.
org.infinispan.SERVER
- Messages specific to Data Grid servers.
org.infinispan.XSITE
- Messages specific to cross-site replication operations.
15.1.4. Log appenders
Log appenders define how Data Grid Server records log messages.
- CONSOLE
-
Write log messages to the host standard out (
stdout
) or standard error (stderr
) stream.
Uses theorg.apache.logging.log4j.core.appender.ConsoleAppender
class by default. - FILE
-
Write log messages to a file.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppender
class by default. - JSON-FILE
-
Write log messages to a file in JSON format.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppender
class by default.
15.1.5. Log pattern formatters
The CONSOLE
and FILE
appenders use a PatternLayout
to format the log messages according to a pattern.
An example is the default pattern in the FILE appender:
%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p (%t) [%c{1}] %m%throwable%n
- %d{yyyy-MM-dd HH:mm:ss,SSS} adds the current time and date.
- %-5p specifies the log level, left-aligned within a five-character field.
- %t adds the name of the current thread.
- %c{1} adds the short name of the logging category.
- %m adds the log message.
- %throwable adds the exception stack trace.
- %n adds a new line.
Patterns are fully described in the PatternLayout documentation.
15.1.6. Enabling the JSON log handler
Data Grid Server provides a log handler to write messages in JSON format.
Prerequisites
- Stop Data Grid Server if it is running. You cannot dynamically enable log handlers.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Uncomment the JSON-FILE appender and comment out the FILE appender:
<!--<AppenderRef ref="FILE"/>-->
<AppenderRef ref="JSON-FILE"/>
- Optionally configure the JSON appender and JSON layout as required.
- Save and close log4j2.xml.
When you start Data Grid, it writes each log message as a JSON map in the following file: $RHDG_HOME/server/log/server.log.json
15.2. Access logs
Access logs record all inbound client requests for Hot Rod and REST endpoints to files in the $RHDG_HOME/server/log
directory.
org.infinispan.HOTROD_ACCESS_LOG
-
Logging category that writes Hot Rod access messages to a
hotrod-access.log
file. org.infinispan.REST_ACCESS_LOG
-
Logging category that writes REST access messages to a
rest-access.log
file.
15.2.1. Enabling access logs
To record Hot Rod, REST and Memcached endpoint access messages, you need to enable the logging categories in log4j2.xml
.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change the level for the org.infinispan.HOTROD_ACCESS_LOG, org.infinispan.REST_ACCESS_LOG, and org.infinispan.MEMCACHED_ACCESS_LOG logging categories to TRACE.
- Save and close log4j2.xml.
<Logger name="org.infinispan.HOTROD_ACCESS_LOG" additivity="false" level="TRACE">
<AppenderRef ref="HR-ACCESS-FILE"/>
</Logger>
15.2.2. Access log properties
The default format for access logs is as follows:
%X{address} %X{user} [%d{dd/MMM/yyyy:HH:mm:ss Z}] "%X{method} %m
%X{protocol}" %X{status} %X{requestSize} %X{responseSize} %X{duration}%n
The preceding format creates log entries such as the following:
127.0.0.1 - [DD/MM/YYYY:HH:MM:SS +0000] "PUT /rest/v2/caches/default/key HTTP/1.1" 404 5 77 10
Logging properties use the %X{name}
notation and let you modify the format of access logs. The following are the default logging properties:
Property | Description
---|---
address | Either the X-Forwarded-For header or the client IP address.
user | Principal name, if using authentication.
method | The protocol-specific method used, for example PUT or GET.
protocol | Protocol used, for example HTTP/1.1 or HOTROD.
status | An HTTP status code for the REST endpoint; OK or an exception for the Hot Rod endpoint.
requestSize | Size, in bytes, of the request.
responseSize | Size, in bytes, of the response.
duration | Number of milliseconds that the server took to handle the request.
Use the header name prefixed with h:
to log headers that were included in requests; for example, %X{h:User-Agent}
.
15.3. Audit logs
Audit logs let you track changes to your Data Grid Server deployment so you know when changes occur and which users make them. Enable and configure audit logging to record server configuration events and administrative operations.
org.infinispan.AUDIT
-
Logging category that writes security audit messages to an
audit.log
file in the$RHDG_HOME/server/log
directory.
15.3.1. Enabling audit logging
To record security audit messages, you need to enable the logging category in log4j2.xml
.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change the level for the org.infinispan.AUDIT logging category to INFO.
- Save and close log4j2.xml.
<!-- Set to INFO to enable audit logging -->
<Logger name="org.infinispan.AUDIT" additivity="false" level="INFO">
<AppenderRef ref="AUDIT-FILE"/>
</Logger>
15.3.2. Configuring audit logging appenders
Apache Log4j provides different appenders that you can use to send audit messages to a destination other than the default log file. For instance, if you want to send audit logs to a syslog daemon, JDBC database, or Apache Kafka server, you can configure an appender in log4j2.xml
.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Comment out or remove the default AUDIT-FILE rolling file appender.
<!--RollingFile name="AUDIT-FILE" ... </RollingFile-->
- Add the desired logging appender for audit messages. For example, you could add a logging appender for a Kafka server as follows:
<Kafka name="AUDIT-KAFKA" topic="audit">
  <PatternLayout pattern="%date %message"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
</Kafka>
- Save and close log4j2.xml.
15.3.3. Using custom audit logging implementations
You can create custom implementations of the org.infinispan.security.AuditLogger
API if configuring Log4j appenders does not meet your needs.
Prerequisites
- Implement org.infinispan.security.AuditLogger as required and package it in a JAR file.
Procedure
- Add your JAR to the server/lib directory in your Data Grid Server installation.
- Specify the fully qualified class name of your custom audit logger as the value for the audit-logger attribute on the authorization element in your cache container security configuration. For example, the following configuration defines my.package.CustomAuditLogger as the class for logging audit messages:
Chapter 16. Performing rolling upgrades for Data Grid Server clusters
Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss and migrate data over the Hot Rod protocol.
16.1. Setting up target Data Grid clusters
Create a cluster that uses the Data Grid version to which you plan to upgrade and then connect the source cluster to the target cluster using a remote cache store.
Prerequisites
- Install Data Grid Server nodes with the desired version for your target cluster.
Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and port offsets to separate the target and source clusters.
Procedure
- Create a remote cache store configuration, in JSON format, that allows the target cluster to connect to the source cluster, as in the sketch below. Remote cache stores on the target cluster use the Hot Rod protocol to retrieve data from the source cluster.
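A sketch of such a configuration, for example in a file named remote-store.json. The host, port, and credentials are illustrative and must point to the source cluster:
{
  "remote-store": {
    "cache": "myCache",
    "shared": true,
    "raw-values": true,
    "security": {
      "authentication": {
        "digest": {
          "username": "username",
          "password": "changeme",
          "realm": "default"
        }
      }
    },
    "remote-server": [
      {
        "host": "127.0.0.1",
        "port": 12222
      }
    ]
  }
}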
- Use the Data Grid Command Line Interface (CLI) or REST API to add the remote cache store configuration to the target cluster so it can connect to the source cluster.
- CLI: Use the migrate cluster connect command on the target cluster.
[//containers/default]> migrate cluster connect -c myCache --file=remote-store.json
Copy to Clipboard Copied! Toggle word wrap Toggle overflow REST API: Invoke a POST request that includes the remote store configuration in the payload with the
rolling-upgrade/source-connection
method.POST /rest/v2/caches/myCache/rolling-upgrade/source-connection
POST /rest/v2/caches/myCache/rolling-upgrade/source-connection
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Repeat the preceding step for each cache that you want to migrate.
Switch clients over to the target cluster, so it starts handling all requests.
- Update client configuration with the location of the target cluster.
- Restart clients.
If you need to migrate indexed caches, you must first migrate the internal ___protobuf_metadata cache so that the .proto schemas defined on the source cluster are also present on the target cluster.
16.2. Synchronizing data to target clusters
When you set up a target Data Grid cluster and connect it to a source cluster, the target cluster can handle client requests using a remote cache store and load data on demand. To completely migrate data to the target cluster, so you can decommission the source cluster, you can synchronize data. This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache that you want to migrate to the target cluster.
Prerequisites
- Set up a target cluster with the appropriate Data Grid version.
Procedure
Start synchronizing each cache that you want to migrate to the target cluster with the Data Grid Command Line Interface (CLI) or REST API.
- CLI: Use the migrate cluster synchronize command.
migrate cluster synchronize -c myCache
Copy to Clipboard Copied! Toggle word wrap Toggle overflow REST API: Use the
?action=sync-data
parameter with a POST request.POST /rest/v2/caches/myCache?action=sync-data
POST /rest/v2/caches/myCache?action=sync-data
Copy to Clipboard Copied! Toggle word wrap Toggle overflow When the operation completes, Data Grid responds with the total number of entries copied to the target cluster.
Disconnect each node in the target cluster from the source cluster.
- CLI: Use the migrate cluster disconnect command.
migrate cluster disconnect -c myCache
- REST API: Invoke a DELETE request.
DELETE /rest/v2/caches/myCache/rolling-upgrade/source-connection
Next steps
After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
Chapter 17. Troubleshooting Data Grid Server deployments
Gather diagnostic information about Data Grid Server deployments and perform troubleshooting steps to resolve issues.
17.1. Getting diagnostic reports from Data Grid Server
Data Grid Server provides aggregated reports in tar.gz
archives that contain diagnostic information about server instances and host systems. The report provides details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files.
Procedure
- Create a CLI connection to Data Grid Server.
- Use the server report command to download a tar.gz archive:
server report
The command responds with the name of the report, as in the following example:
Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'
Move the
tar.gz
file to a suitable location on your filesystem. -
Extract the
tar.gz
file with any archiving tool.
17.2. Changing Data Grid Server logging configuration at runtime
Modify the logging configuration for Data Grid Server at runtime to temporarily adjust logging to troubleshoot issues and perform root cause analysis.
Modifying the logging configuration through the CLI is a runtime-only operation, which means that changes:
- Are not saved to the log4j2.xml file. Restarting server nodes or the entire cluster resets the logging configuration to the default properties in the log4j2.xml file.
- Apply only to the nodes in the cluster when you invoke the CLI. Nodes that join the cluster after you change the logging configuration use the default properties.
Procedure
- Create a CLI connection to Data Grid Server.
- Use the logging command to make the required adjustments.
List all appenders defined on the server:
logging list-appenders
The command provides a JSON response such as the following:
logging list-loggers
logging list-loggers
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The command provides a JSON response such as the following:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add and modify logger configurations with the
set
subcommandFor example, the following command sets the logging level for the
org.infinispan
package toDEBUG
:logging set --level=DEBUG org.infinispan
logging set --level=DEBUG org.infinispan
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove existing logger configurations with the
remove
subcommand.For example, the following command removes the
org.infinispan
logger configuration, which means the root configuration is used instead:logging remove org.infinispan
logging remove org.infinispan
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
17.3. Gathering resource statistics from the CLI
You can inspect server-collected statistics for some Data Grid Server resources with the stats
command.
Use the stats command either from the context of a resource that provides statistics (containers, caches) or with a path to such a resource:
stats
stats /containers/default/caches/mycache
17.4. Accessing cluster health via REST
Get Data Grid cluster health via the REST API.
Procedure
- Invoke a GET request to retrieve cluster health.
GET /rest/v2/container/health
Data Grid responds with a JSON document such as the following:
Get Cache Manager status as follows:
GET /rest/v2/container/health/status
Reference
See the REST v2 (version 2) API documentation for more information.
17.5. Accessing cluster health via JMX
Retrieve Data Grid cluster health statistics via JMX.
Procedure
- Connect to Data Grid Server using any JMX-capable tool, such as JConsole, and navigate to the following object:
org.infinispan:type=CacheManager,name="default",component=CacheContainerHealth
- Select available MBeans to retrieve cluster health statistics.
Chapter 18. Reference
18.1. Data Grid Server 8.5.2 Readme
Information about Data Grid Server 14.0.21.Final-redhat-00001 distribution.
18.1.1. Requirements
Data Grid Server requires JDK 11 or later.
18.1.2. Starting servers
Use the server
script to run Data Grid Server instances.
Unix / Linux
$RHDG_HOME/bin/server.sh
Windows
$RHDG_HOME\bin\server.bat
Include the --help
or -h
option to view command arguments.
18.1.3. Stopping servers
Use the shutdown
command with the CLI to perform a graceful shutdown.
Alternatively, enter Ctrl-C from the terminal to interrupt the server process or kill it via the TERM signal.
18.1.4. Configuration
Server configuration extends Data Grid configuration with the following server-specific elements:
cache-container
- Defines cache containers for managing cache lifecycles.
endpoints
- Enables and configures endpoint connectors for client protocols.
security
- Configures endpoint security realms.
socket-bindings
- Maps endpoint connectors to interfaces and ports.
The default configuration file is $RHDG_HOME/server/conf/infinispan.xml
.
infinispan.xml
- Provides configuration to run Data Grid Server using default cache container with statistics and authorization enabled. Demonstrates how to set up authentication and authorization using security realms.
Data Grid provides other ready-to-use configuration files that are primarily for development and testing purposes.
$RHDG_HOME/server/conf/
provides the following configuration files:
infinispan-dev-mode.xml
-
Configures Data Grid Server specifically for cross-site replication with IP multicast discovery. The configuration provides
BASIC
authentication to connect to the Hot Rod and REST endpoints. The configuration is designed for development mode and should not be used in production environments. infinispan-local.xml
- Configures Data Grid Server without clustering capabilities.
infinispan-xsite.xml
- Configures cross-site replication on a single host and uses IP multicast for discovery.
infinispan-memcached.xml
- Configures Data Grid Server to behave like a default Memcached server, listening on port 11221 and without authentication.
infinispan-resp.xml
- Configures Data Grid Server to behave like a default Redis server, listening on port 6379 and without authentication.
log4j2.xml
- Configures Data Grid Server logging.
Use different configuration files with the -c
argument, as in the following example that starts a server without clustering capabilities:
Unix / Linux
$RHDG_HOME/bin/server.sh -c infinispan-local.xml
Windows
$RHDG_HOME\bin\server.bat -c infinispan-local.xml
18.1.5. Bind address
Data Grid Server binds to the loopback IP address localhost
on your network by default.
Use the -b
argument to set a different IP address, as in the following example that binds to all network interfaces:
Unix / Linux
$RHDG_HOME/bin/server.sh -b 0.0.0.0
Windows
$RHDG_HOME\bin\server.bat -b 0.0.0.0
$RHDG_HOME\bin\server.bat -b 0.0.0.0
18.1.6. Bind port
Data Grid Server listens on port `11222` by default.
Use the `-p` argument to set an alternative port:
Unix / Linux
$RHDG_HOME/bin/server.sh -p 30000
Windows
$RHDG_HOME\bin\server.bat -p 30000
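To run several instances on one host without reassigning each port individually, the server script also accepts a port offset. The `-o` option shown here is based on the upstream Infinispan server options; verify it with `--help` on your version.

# Shift every socket binding up by 100 (for example, 11222 becomes 11322).
$RHDG_HOME/bin/server.sh -o 100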
18.1.7. Clustering address
Data Grid Server configuration defines cluster transport so multiple instances on the same network discover each other and automatically form clusters.
Use the `-k` argument to change the IP address for cluster traffic:
Unix / Linux
$RHDG_HOME/bin/server.sh -k 192.168.1.100
Windows
$RHDG_HOME\bin\server.bat -k 192.168.1.100
18.1.8. Cluster stacks Copy linkLink copied to clipboard!
JGroups stacks configure the protocols for cluster transport. Data Grid Server uses the tcp
stack by default.
Use alternative cluster stacks with the -j
argument, as in the following example that uses UDP for cluster transport:
Unix / Linux
$RHDG_HOME/bin/server.sh -j udp
$RHDG_HOME/bin/server.sh -j udp
Windows
$RHDG_HOME\bin\server.bat -j udp
$RHDG_HOME\bin\server.bat -j udp
18.1.9. Authentication
Data Grid Server requires authentication.
Create a username and password with the CLI as follows:
Unix / Linux
$RHDG_HOME/bin/cli.sh user create username -p "qwer1234!"
Windows
$RHDG_HOME\bin\cli.bat user create username -p "qwer1234!"
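The `user create` command can also assign the new user to one or more groups so that authorization roles apply. The group name below is only an example:

# Create a user and add it to the 'admin' group in one step.
$RHDG_HOME/bin/cli.sh user create username -p "qwer1234!" -g admin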
18.1.10. Server home directory
Data Grid Server uses the `infinispan.server.home.path` property to locate the contents of the server distribution on the host filesystem.
The server home directory, referred to as `$RHDG_HOME`, contains the following folders:
Folder | Description |
---|---|
`bin` | Contains scripts to start servers and CLI. |
`boot` | Contains JAR files to boot the server. |
`docs` | Provides configuration examples, schemas, component licenses, and other resources. |
`lib` | Contains JAR files that the server requires. Do not place custom JAR files in this folder. |
`server` | Provides a root folder for Data Grid Server instances. |
`static` | Contains static resources for Data Grid Console. |
18.1.11. Server root directory
Data Grid Server uses the `infinispan.server.root.path` property to locate configuration files and data for Data Grid Server instances.
You can create multiple server root folders in the same directory or in different directories and then specify the locations with the `-s` or `--server-root` argument, as in the following example:
Unix / Linux
$RHDG_HOME/bin/server.sh -s server2
Windows
$RHDG_HOME\bin\server.bat -s server2
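In practice, one way to seed a new server root is to copy the default one and then start a second instance against it with a port offset so the two instances do not collide. The copy step and the offset value below are illustrative only:

# Seed a second server root from the default one, then start it
# with offset ports (example offset of 100).
cp -r $RHDG_HOME/server $RHDG_HOME/server2
$RHDG_HOME/bin/server.sh -s server2 -o 100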
Each server root directory contains the following folders:
├── server
│   ├── conf
│   ├── data
│   ├── lib
│   └── log
Folder | Description | System property override |
---|---|---|
`conf` | Contains server configuration files. | `infinispan.server.config.path` |
`data` | Contains data files organized by container name. | `infinispan.server.data.path` |
`lib` | Contains server extension files. | `infinispan.server.lib.path` |
`log` | Contains server log files. | `infinispan.server.log.path` |
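Because the server script passes `-D` options through to the JVM, you can relocate any of these folders at startup with the system property overrides in the table above. The path below is only an example:

# Store cache data on a dedicated disk instead of the default
# <server-root>/data folder (example path).
$RHDG_HOME/bin/server.sh -Dinfinispan.server.data.path=/mnt/fast-disk/data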
18.1.12. Logging
Configure Data Grid Server logging with the `log4j2.xml` file in the `server/conf` folder.
Use the `--logging-config=<path_to_logfile>` argument to specify a custom path, as follows:
Unix / Linux
$RHDG_HOME/bin/server.sh --logging-config=/path/to/log4j2.xml
Windows
$RHDG_HOME\bin\server.bat --logging-config=path\to\log4j2.xml
To ensure custom paths take effect, do not use the ~ shortcut.
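As an illustration, the fragment below raises the log level for Data Grid categories. It is a sketch, not a complete configuration: the appenders and logger names in the log4j2.xml shipped with your distribution may differ, so adapt the existing file rather than replacing it wholesale.

<!-- Fragment for server/conf/log4j2.xml: enable DEBUG logging for
     Infinispan categories without touching other loggers. -->
<Configuration>
  <Loggers>
    <Logger name="org.infinispan" level="DEBUG"/>
    <Root level="INFO"/>
  </Loggers>
</Configuration>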