Data Grid Server Guide
Deploy, secure, and manage Data Grid Server
Abstract
Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure
- Flexibility to store different objects as key-value pairs.
- Grid-based data storage
- Designed to distribute and replicate data across clusters.
- Elastic scaling
- Dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability
- Store, retrieve, and query data in the grid from different endpoints.
Data Grid documentation
Documentation for Data Grid is available on the Red Hat customer portal.
Data Grid downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Getting Started with Data Grid Server
Quickly set up Data Grid Server and learn the basics.
1.1. Data Grid Server Requirements
Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions.
1.2. Downloading Server Distributions
The Data Grid server distribution is an archive of Java libraries (JAR files), configuration files, and a data directory.
Procedure
- Access the Red Hat customer portal.
- Download Red Hat Data Grid 8.2 Server from the software downloads section.
- Run the md5sum or sha256sum command with the server download archive as the argument, for example:
$ sha256sum redhat-datagrid-${version}-server.zip
- Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page.
Reference
- Data Grid Server README describes the contents of the server distribution.
1.3. Installing Data Grid Server
Install the Data Grid Server distribution on a host system.
Prerequisites
- Download a Data Grid Server distribution archive.
Procedure
- Use any appropriate tool to extract the Data Grid Server archive to the host filesystem.
$ unzip redhat-datagrid-8.2.3-server.zip
The resulting directory is your $RHDG_HOME.
1.4. Starting Data Grid Servers
Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host.
Prerequisites
- Download and install the server distribution.
Procedure
- Open a terminal in $RHDG_HOME.
- Start Data Grid Server instances with the server script.
- Linux
$ bin/server.sh
- Microsoft Windows
bin\server.bat
Data Grid Server is running successfully when it logs the following messages:
ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222
ISPN080034: Server '...' listening on http://127.0.0.1:11222
ISPN080001: Data Grid Server <version> started in <mm>ms
Verification
- Open 127.0.0.1:11222/console/ in any browser.
- Enter your credentials at the prompt and continue to Data Grid Console.
1.5. Creating and Modifying Users
Add Data Grid user credentials and assign permissions to control access to data.
Data Grid server installations use a property realm to authenticate users for the Hot Rod and REST endpoints. This means you need to create at least one user before you can access Data Grid.
By default, users also need roles with permissions to access caches and interact with Data Grid resources. You can assign roles to users individually or add users to groups that have role permissions.
You create users and assign roles with the user command in the Data Grid command line interface (CLI). Run help user from a CLI session to get complete command details.
1.5.1. Adding Credentials
You need an admin user for the Data Grid Console and full control over your Data Grid environment. For this reason you should create a user with admin permissions the first time you add credentials.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user with the user create command in the CLI.
$ bin/cli.sh user create myuser -p changeme -g admin
Alternatively, the username "admin" automatically gets admin permissions.
$ bin/cli.sh user create admin -p changeme
- Open users.properties and groups.properties with any text editor to verify users and groups.
$ cat server/conf/users.properties
#$REALM_NAME=default$
#$ALGORITHM=encrypted$
myuser=scram-sha-1\:BYGcIAwvf6b...
$ cat server/conf/groups.properties
myuser=admin
1.5.2. Assigning Roles to Users
Assign roles to users so they have the correct permissions to access data and modify Data Grid resources.
Procedure
- Start a CLI session with an admin user.
$ bin/cli.sh
- Assign the deployer role to "katie".
[//containers/default]> user roles grant --roles=deployer katie
- List roles for "katie".
[//containers/default]> user roles ls katie
["deployer"]
1.5.3. Adding Users to Groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
Procedure
- Start a CLI session with an admin user.
- Use the user create command to create a group.
  - Specify "developers" as the group name with the --groups argument.
  - Set a username and password for the group.
In a property realm, a group is a special type of user that also requires a username and password.
[//containers/default]> user create --groups=developers developers -p changeme
- List groups.
[//containers/default]> user ls --groups
["developers"]
- Assign the application role to the "developers" group.
[//containers/default]> user roles grant --roles=application developers
- List roles for the "developers" group.
[//containers/default]> user roles ls developers
["application"]
- Add existing users, one at a time, to the group as required.
[//containers/default]> user groups john --groups=developers
1.5.4. User Roles and Permissions
Data Grid includes a default set of roles that grant users permissions to access data and interact with Data Grid resources.
ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals to authorization roles. ClusterRoleMapper matches principal names to role names. A user named admin gets admin permissions automatically, a user named deployer gets deployer permissions, and so on.
Role | Permissions | Description
---|---|---
admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle.
deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions.
application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions.
observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions.
monitor | MONITOR | Can view statistics via JMX and the metrics endpoint.
1.6. Verifying Cluster Views
Data Grid nodes on the same network automatically discover each other and form clusters.
Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters.
This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production.
Prerequisites
- Have one instance of Data Grid Server running.
Procedure
- Open a terminal in $RHDG_HOME.
- Copy the root directory to server2.
$ cp -r server server2
- Specify a port offset and the server2 directory.
$ bin/server.sh -o 100 -s server2
Verification
You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership.
Data Grid also logs the following messages when nodes join clusters:
INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN000094: Received new cluster view for channel cluster: [<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>]
INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN100000: Node <server2_hostname> joined the cluster
1.7. Shutting Down Data Grid Server
Stop individually running servers or bring down clusters gracefully.
Procedure
- Create a CLI connection to Data Grid.
- Shut down Data Grid Server in one of the following ways:
- Stop all nodes in a cluster with the shutdown cluster command, for example:
[//containers/default]> shutdown cluster
This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache.
- Stop individual server instances with the shutdown server command and the server hostname, for example:
[//containers/default]> shutdown server <my_server01>
The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time.
Run help shutdown for more details about using the command.
Verification
Data Grid logs the following messages when you shut down servers:
ISPN080002: Data Grid Server stopping
ISPN000080: Disconnecting JGroups channel cluster
ISPN000390: Persisted state, version=<$version> timestamp=YYYY-MM-DDTHH:MM:SS
ISPN080003: Data Grid Server stopped
1.7.1. Restarting Data Grid Clusters
When you bring Data Grid clusters back online after shutting them down, you should wait for the cluster to be available before adding or removing nodes or modifying cluster state.
If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order. For example, if you shut down server1 and then shut down server2, you should first start server2 and then start server1.
If you shut down a cluster with the shutdown cluster command, clusters become fully operational only after all nodes rejoin. You can restart nodes in any order but the cluster remains in DEGRADED state until all nodes that were joined before shutdown are running.
1.8. Data Grid Server Filesystem
Data Grid Server uses the following folders on the host filesystem under $RHDG_HOME:
├── bin ├── boot ├── docs ├── lib ├── server └── static
See the Data Grid Server README for descriptions of each folder in your $RHDG_HOME directory as well as system properties you can use to customize the filesystem.
1.8.1. Server Root Directory
Apart from resources in the bin and docs folders, the only folder under $RHDG_HOME that you should interact with is the server root directory, which is named server by default.
You can create multiple nodes under the same $RHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem:
├── server ├── server1 ├── server2 ├── server3 └── server4
Each server root directory should contain the following folders:
├── server │ ├── conf │ ├── data │ ├── lib │ └── log
server/conf
Holds infinispan.xml configuration files for a Data Grid Server instance.
Data Grid separates configuration into two layers:
- Dynamic
Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur.
- Static
Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources.
server/data
Provides internal storage that Data Grid Server uses to maintain cluster state.
Never directly delete or modify content in server/data. Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown.
server/lib
Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on.
server/log
Holds Data Grid Server log files.
Reference
- Data Grid Server README
- What is stored in the <server>/data directory used by a RHDG server (Red Hat Knowledgebase)
Chapter 2. Network Interfaces and Endpoints
Expose Data Grid Server through a network interface by binding it to an IP address. You can then configure endpoints to use the interface so Data Grid Server can handle requests from remote client applications.
By default, Data Grid Server exposes a single port that automatically detects the protocol of inbound requests.
2.1. Network Interfaces
Data Grid Server multiplexes endpoints to a single TCP/IP port and automatically detects protocols of inbound client requests. You can configure how Data Grid Server binds to network interfaces to listen for client requests.
Internet Protocol (IP) address
<!-- Selects a specific IPv4 address, which can be public, private, or loopback. This is the default network interface for Data Grid Server. --> <interfaces> <interface name="public"> <inet-address value="${infinispan.bind.address:127.0.0.1}"/> </interface> </interfaces>
Loopback address
<!-- Selects an IP address in an IPv4 or IPv6 loopback address block. --> <interfaces> <interface name="public"> <loopback/> </interface> </interfaces>
Non-loopback address
<!-- Selects an IP address in an IPv4 or IPv6 non-loopback address block. --> <interfaces> <interface name="public"> <non-loopback/> </interface> </interfaces>
Any address
<!-- Uses the `INADDR_ANY` wildcard address which means Data Grid Server listens for inbound client requests on all interfaces. --> <interfaces> <interface name="public"> <any-address/> </interface> </interfaces>
Link local
<!-- Selects a link-local IP address in an IPv4 or IPv6 address block. --> <interfaces> <interface name="public"> <link-local/> </interface> </interfaces>
Site local
<!-- Selects a site-local (private) IP address in an IPv4 or IPv6 address block. --> <interfaces> <interface name="public"> <site-local/> </interface> </interfaces>
Match and fallback strategies
Data Grid Server can enumerate all network interfaces on the host system and bind to an interface, host, or IP address that matches a value, which can include regular expressions for additional flexibility.
Match host
<!-- Selects an IP address that is assigned to a matching host name. --> <interfaces> <interface name="public"> <match-host value="my_host_name"/> </interface> </interfaces>
Match interface
<!--Selects an IP address assigned to a matching network interface. --> <interfaces> <interface name="public"> <match-interface value="eth0"/> </interface> </interfaces>
Match address
<!-- Selects an IP address that matches a regular expression. --> <interfaces> <interface name="public"> <match-address value="132\..*"/> </interface> </interfaces>
Fallback
<!-- Includes multiple strategies that Data Grid Server tries in the declared order until it finds a match. --> <interfaces> <interface name="public"> <match-host value="my_host_name"/> <match-address value="132\..*"/> <any-address/> </interface> </interfaces>
2.2. Socket Bindings
Socket bindings map endpoint connectors to server interfaces and ports.
By default, Data Grid servers provide the following socket bindings:
<socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}"> <socket-binding name="default" port="${infinispan.bind.port:11222}"/> <socket-binding name="memcached" port="11221"/> </socket-bindings>
- socket-bindings declares the default interface and port offset.
- default binds the Hot Rod and REST connectors to the default port 11222.
- memcached binds the memcached connector to port 11221.
Note: The memcached endpoint is disabled by default.
To override the default interface for socket-binding declarations, specify the interface attribute. For example, you can add an interface declaration named "private":
<interfaces> ... <interface name="private"> <inet-address value="10.1.2.3"/> </interface> </interfaces>
You can then specify interface="private" in a socket-binding declaration to bind to the private IP address, as follows:
<socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}"> ... <socket-binding name="private_binding" interface="private" port="1234"/> </socket-bindings>
2.3. Changing the Default Bind Address for Data Grid Servers
You can use the server -b switch or the infinispan.bind.address system property to bind to a different address. For example, bind the public interface to 127.0.0.2 as follows:
- Linux
$ bin/server.sh -b 127.0.0.2
- Windows
bin\server.bat -b 127.0.0.2
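Alternatively, you can set the system property directly; a minimal sketch on Linux, assuming the server script passes -D options through to the JVM as it does by default:
$ bin/server.sh -Dinfinispan.bind.address=127.0.0.2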
2.4. Specifying Port Offsets
Configure port offsets with Data Grid servers when running multiple instances on the same host. The default port offset is 0.
Use the -o switch with the server script or the infinispan.socket.binding.port-offset system property to set port offsets. For example, start a server instance with an offset of 100 as follows. With the default configuration, this results in the Data Grid server listening on port 11322.
- Linux
$ bin/server.sh -o 100
- Windows
bin\server.bat -o 100
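As with the bind address, the system property form is equivalent; a minimal sketch:
$ bin/server.sh -Dinfinispan.socket.binding.port-offset=100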
2.5. Data Grid Endpoints
Data Grid endpoints expose the CacheManager interface over different connector protocols so you can remotely access data and perform operations to manage and maintain Data Grid clusters.
You can define multiple endpoint connectors on different socket bindings.
2.5.1. Hot Rod
Hot Rod is a binary TCP client-server protocol designed to provide faster data access and improved performance in comparison to text-based protocols.
Data Grid provides Hot Rod client libraries in Java, C++, C#, Node.js and other programming languages.
Topology state transfer
Data Grid uses topology caches to provide clients with cluster views. Topology caches contain entries that map internal JGroups transport addresses to exposed Hot Rod endpoints.
When clients send requests, Data Grid servers compare the topology ID in request headers with the topology ID from the cache. Data Grid servers send new topology views if clients have older topology IDs.
Cluster topology views allow Hot Rod clients to immediately detect when nodes join and leave, which enables dynamic load balancing and failover.
In distributed cache modes, the consistent hashing algorithm also makes it possible to route Hot Rod client requests directly to primary owners.
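To make this concrete, the following minimal Java Hot Rod client sketch connects to a locally running server, creates a cache, and performs basic operations; the cache name and credentials are assumptions for illustration:
import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodExample {
    public static void main(String[] args) {
        // Connect to a local server with the credentials created earlier.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222)
               .security().authentication().username("myuser").password("changeme");
        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            // Create the cache from a default template if it does not already exist.
            RemoteCache<String, String> cache =
                    cacheManager.administration().getOrCreateCache("mycache", DefaultTemplate.DIST_SYNC);
            cache.put("hello", "world");
            // In distributed mode the client routes this request to the primary owner.
            System.out.println(cache.get("hello"));
        }
    }
}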
2.5.2. REST
Data Grid exposes a RESTful interface that allows HTTP clients to access data, monitor and maintain clusters, and perform administrative operations.
You can use standard HTTP load balancers to provide clients with load balancing and failover capabilities. However, HTTP load balancers maintain static cluster views and require manual updates when cluster topology changes occur.
2.5.3. Protocol Comparison
Capability | Hot Rod | HTTP / REST
---|---|---
Topology-aware | Y | N
Hash-aware | Y | N
Encryption | Y | Y
Authentication | Y | Y
Conditional ops | Y | Y
Bulk ops | Y | N
Transactions | Y | N
Listeners | Y | N
Query | Y | Y
Execution | Y | N
Cross-site failover | Y | N
2.6. Endpoint Connectors
You configure Data Grid server endpoints with connector declarations that specify socket bindings, authentication mechanisms, and encryption configuration.
The default endpoint connector configuration is as follows:
<endpoints socket-binding="default" security-realm="default"/>
- endpoints contains endpoint connector declarations and defines global configuration for endpoints such as default socket bindings, security realms, and whether clients must present valid TLS certificates.
- <hotrod-connector/> declares a Hot Rod connector.
- <rest-connector/> declares a REST connector.
- <memcached-connector socket-binding="memcached"/> declares a Memcached connector that uses the memcached socket binding.
Declaring an empty <endpoints/> element implicitly enables the Hot Rod and REST connectors.
It is possible to have multiple endpoints bound to different sockets. These can use different security realms and offer different authentication and encryption configurations. The following configuration enables two endpoints on distinct socket bindings, each one with a dedicated security realm. Additionally, the public endpoint disables administrative features, such as the console and CLI.
<endpoints socket-binding="public" security-realm="application-realm" admin="false"> <hotrod-connector/> <rest-connector/> </endpoints> <endpoints socket-binding="private" security-realm="management-realm"> <hotrod-connector/> <rest-connector/> </endpoints>
Reference
urn:infinispan:server schema provides all available endpoint configuration.
2.6.1. Hot Rod Connectors
Hot Rod connector declarations enable Hot Rod servers.
<hotrod-connector name="hotrod"> <topology-state-transfer /> <authentication> <!-- Hot Rod endpoint authentication configuration. --> </authentication> <encryption> <!-- Hot Rod endpoint SSL/TLS encryption configuration. --> </encryption> </hotrod-connector>
- name="hotrod" logically names the Hot Rod connector. By default the name is derived from the socket binding name, for example hotrod-default.
- topology-state-transfer tunes the state transfer operations that provide Hot Rod clients with cluster topology.
- authentication configures SASL authentication mechanisms.
- encryption configures TLS settings for client connections.
Reference
urn:infinispan:server schema provides all available Hot Rod connector configuration.
2.6.2. REST Connectors
REST connector declarations enable REST servers.
<rest-connector name="rest"> <authentication> <!-- REST endpoint authentication configuration. --> </authentication> <cors-rules> <!-- Cross-Origin Resource Sharing (CORS) rules. --> </cors-rules> <encryption> <!-- REST endpoint SSL/TLS encryption configuration. --> </encryption> </rest-connector>
- name="rest" logically names the REST connector. By default the name is derived from the socket binding name, for example rest-default.
- authentication configures authentication mechanisms.
- cors-rules specifies Cross-Origin Resource Sharing (CORS) rules for cross-domain requests.
- encryption configures TLS settings for client connections.
Reference
urn:infinispan:server schema provides all available REST connector configuration.
2.7. Data Grid Server Ports and Protocols
Data Grid Server exposes endpoints on your network for remote client access.
Port | Protocol | Description
---|---|---
11222 | TCP | Hot Rod and REST endpoint
11221 | TCP | Memcached endpoint, which is disabled by default.
2.8. Single Port
Data Grid Server exposes multiple protocols through a single TCP port, which is 11222 by default. Handling multiple protocols with a single port simplifies configuration and reduces management complexity when deploying Data Grid clusters. Using a single port also enhances security by minimizing the attack surface on the network.
Data Grid Server handles HTTP/1.1, HTTP/2, and Hot Rod protocol requests from clients via the single port in different ways.
HTTP/1.1 upgrade headers
Client requests can include the HTTP/1.1 upgrade header field to initiate HTTP/1.1 connections with Data Grid Server. Client applications can then send the Upgrade: protocol header field, where protocol is a server endpoint.
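For example, curl can exercise the cleartext HTTP/2 upgrade path through the single port; this sketch assumes the default port and the credentials created earlier:
$ curl --http2 -u myuser:changeme http://127.0.0.1:11222/rest/v2/caches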
Application-Layer Protocol Negotiation (ALPN)/Transport Layer Security (TLS)
Client requests include Server Name Indication (SNI) mappings for Data Grid Server endpoints to negotiate protocols over a TLS connection.
Applications must use a TLS library that supports the ALPN extension. Data Grid uses WildFly OpenSSL bindings for Java.
Automatic Hot Rod detection
Client requests that include Hot Rod headers automatically route to Hot Rod endpoints.
2.8.1. Configuring Network Firewalls for Remote Connections
Adjust any firewall rules to allow traffic between the server and external clients.
Procedure
On Red Hat Enterprise Linux (RHEL) workstations, for example, you can allow traffic to port 11222 with firewalld as follows:
# firewall-cmd --add-port=11222/tcp --permanent
success
# firewall-cmd --list-ports | grep 11222
11222/tcp
To configure firewall rules that apply across a network, you can use the nftables utility.
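A minimal nftables sketch, assuming your ruleset already defines an inet table named filter with an input chain:
# nft add rule inet filter input tcp dport 11222 accept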
Chapter 3. Security Realms
Security realms define identity, encryption, authentication, and authorization configuration for Data Grid Server endpoints.
3.1. Property Realms
Property realms use property files to define users and groups.
users.properties maps usernames to passwords in plain-text format. Passwords can also be pre-digested if you use the DIGEST-MD5 SASL mechanism or Digest HTTP mechanism.
myuser=a_password
user2=another_password
groups.properties maps users to roles.
myuser=supervisor,reader,writer
user2=supervisor
Endpoint authentication mechanisms
When you configure Data Grid Server to use a property realm, you can configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): PLAIN, DIGEST-*, and SCRAM-*
- REST (HTTP): Basic and Digest
Property realm configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <!-- Defines groups as roles for server authorization. --> <properties-realm groups-attribute="Roles"> <!-- Specifies the properties file that holds usernames and passwords. --> <!-- The plain-text="true" attribute stores passwords in plain text. --> <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/> <!-- Specifies the properties file that defines roles for users. --> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> </security-realm> </security-realms> </security>
3.1.1. Creating and Modifying Users
Add Data Grid user credentials and assign permissions to control access to data.
Data Grid server installations use a property realm to authenticate users for the Hot Rod and REST endpoints. This means you need to create at least one user before you can access Data Grid.
By default, users also need roles with permissions to access caches and interact with Data Grid resources. You can assign roles to users individually or add users to groups that have role permissions.
You create users and assign roles with the user command in the Data Grid command line interface (CLI). Run help user from a CLI session to get complete command details.
3.1.1.1. Adding Credentials
You need an admin user for the Data Grid Console and full control over your Data Grid environment. For this reason you should create a user with admin permissions the first time you add credentials.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user with the user create command in the CLI.
$ bin/cli.sh user create myuser -p changeme -g admin
Alternatively, the username "admin" automatically gets admin permissions.
$ bin/cli.sh user create admin -p changeme
- Open users.properties and groups.properties with any text editor to verify users and groups.
$ cat server/conf/users.properties
#$REALM_NAME=default$
#$ALGORITHM=encrypted$
myuser=scram-sha-1\:BYGcIAwvf6b...
$ cat server/conf/groups.properties
myuser=admin
3.1.1.2. Assigning Roles to Users
Assign roles to users so they have the correct permissions to access data and modify Data Grid resources.
Procedure
- Start a CLI session with an admin user.
$ bin/cli.sh
- Assign the deployer role to "katie".
[//containers/default]> user roles grant --roles=deployer katie
- List roles for "katie".
[//containers/default]> user roles ls katie
["deployer"]
3.1.1.3. Adding Users to Groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
Procedure
- Start a CLI session with an admin user.
- Use the user create command to create a group.
  - Specify "developers" as the group name with the --groups argument.
  - Set a username and password for the group.
In a property realm, a group is a special type of user that also requires a username and password.
[//containers/default]> user create --groups=developers developers -p changeme
- List groups.
[//containers/default]> user ls --groups
["developers"]
- Assign the application role to the "developers" group.
[//containers/default]> user roles grant --roles=application developers
- List roles for the "developers" group.
[//containers/default]> user roles ls developers
["application"]
- Add existing users, one at a time, to the group as required.
[//containers/default]> user groups john --groups=developers
3.2. LDAP Realms
LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information.
LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations.
Endpoint authentication mechanisms
When you configure Data Grid Server to use an LDAP realm, you can configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): PLAIN, DIGEST-*, and SCRAM-*
- REST (HTTP): Basic and Digest
LDAP realm configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <!-- Names an LDAP realm and specifies connection properties. --> <ldap-realm name="ldap" url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword" connection-timeout="3000" read-timeout="30000" connection-pooling="true" referral-mode="ignore" page-size="30" direct-verification="true"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <!-- Retrieves all the groups of which the user is a member. --> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security>
The principal for LDAP connections must have necessary privileges to perform LDAP queries and access specific attributes.
As an alternative to verifying user credentials with the direct-verification attribute, you can specify an LDAP password with the user-password-mapper element.
The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. Specify a different filter with the filter-name attribute.
The attribute-mapping element retrieves all the groups of which the user is a member. There are typically two ways in which membership information is stored:
- Under group entries that usually have class groupOfNames in the member attribute. In this case, you can use an attribute filter as in the preceding example configuration. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user's DN. The filter then extracts the group entry's CN as specified by from, and adds it to the user's Roles.
- In the user entry in the memberOf attribute. In this case you should use an attribute reference such as the following:
<attribute-reference reference="memberOf" from="cn" to="Roles" />
This reference gets all memberOf attributes from the user's entry, extracts the CN as specified by from, and adds them to the user's Roles.
3.2.1. LDAP Realm Principal Rewriting
Some SASL authentication mechanisms, such as GSSAPI, GS2-KRB5, and Negotiate, supply a username that needs to be cleaned up before you can use it to search LDAP servers.
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <ldap-realm name="ldap" url="ldap://${org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. --> <regex-principal-transformer name="domain-remover" pattern="(.*)@INFINISPAN\.ORG" replacement="$1"/> </name-rewriter> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org"> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org" /> </attribute-mapping> <user-password-mapper from="userPassword" /> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security>
3.3. Token Realms
Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO.
Endpoint authentication mechanisms
When you configure Data Grid Server to use a token realm, you must configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): OAUTHBEARER
- REST (HTTP): Bearer
Token realm configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <!-- Specifies the URL of the authentication server. --> <token-realm name="token" auth-server-url="https://oauth-server/auth/"> <!-- Specifies the URL of the token introspection endpoint. --> <oauth2-introspection introspection-url="https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" client-id="infinispan-server" client-secret="1fdca4ec-c416-47e0-867a-3d471af7050f"/> </token-realm> </security-realm> </security-realms> </security>
3.4. Trust Store Realms
Trust store realms use certificates, or certificate chains, that verify Data Grid Server and client identities when they negotiate connections.
- Keystores
- Contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols.
- Trust stores
- Contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication.
Client certificate authentication
You must add the require-ssl-client-auth="true" attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates.
Endpoint authentication mechanisms
If you configure Data Grid Server with a keystore only, you can use encryption in combination with any authentication mechanism.
When you configure Data Grid Server to use a client trust store, you must configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): EXTERNAL
- REST (HTTP): CLIENT_CERT
Trust store realm configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path="trust.p12" relative-to="infinispan.server.config.path" password="secret"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> <!-- Configures Data Grid Server to require client certificates with the "require-ssl-client-auth" attribute. --> <endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default" require-ssl-client-auth="true"> <hotrod-connector> <!-- Configures the Hot Rod endpoint for client certificate authentication. --> <authentication> <sasl mechanisms="EXTERNAL" server-name="infinispan" qop="auth"/> </authentication> </hotrod-connector> <rest-connector> <!-- Configures the REST endpoint for client certificate authentication. --> <authentication mechanisms="CLIENT_CERT"/> </rest-connector> </endpoints>
Chapter 4. Configuring Endpoint Authentication Mechanisms
Configure Hot Rod and REST connectors with SASL or HTTP authentication mechanisms to authenticate with clients.
Data Grid servers require user authentication to access the command line interface (CLI) and console as well as the Hot Rod and REST endpoints. Data Grid servers also automatically configure authentication mechanisms based on the security realms that you define.
4.1. Data Grid Server Authentication
Data Grid servers automatically configure authentication mechanisms based on the security realm that you assign to endpoints.
SASL Authentication Mechanisms
The following SASL authentication mechanisms apply to Hot Rod endpoints:
Security Realm | SASL Authentication Mechanism |
---|---|
Property Realms and LDAP Realms | SCRAM-*, DIGEST-*, CRAM-MD5 |
Token Realms | OAUTHBEARER |
Trust Realms | EXTERNAL |
Kerberos Identities | GSSAPI, GS2-KRB5 |
SSL/TLS Identities | PLAIN |
HTTP Authentication Mechanisms
The following HTTP authentication mechanisms apply to REST endpoints:
Security Realm | HTTP Authentication Mechanism |
---|---|
Property Realms and LDAP Realms | DIGEST |
Token Realms | BEARER_TOKEN |
Trust Realms | CLIENT_CERT |
Kerberos Identities | SPNEGO |
SSL/TLS Identities | BASIC |
Default Configuration
Data Grid servers provide a security realm named "default" that uses a property realm with plain text credentials defined in $RHDG_HOME/server/conf/users.properties, as shown in the following snippet:
<security-realm name="default"> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path" /> </properties-realm> </security-realm>
The endpoints configuration assigns the "default" security realm to the Hot Rod and REST connectors, as follows:
<endpoints socket-binding="default" security-realm="default"> <hotrod-connector name="hotrod"/> <rest-connector name="rest"/> </endpoints>
As a result of the preceding configuration, Data Grid servers require authentication with a mechanism that the property realm supports.
4.2. Manually Configuring Hot Rod Authentication
Explicitly configure Hot Rod connector authentication to override the default SASL authentication mechanisms that Data Grid servers use for security realms.
Procedure
- Add an authentication definition to the Hot Rod connector configuration.
- Specify which Data Grid security realm the Hot Rod connector uses for authentication.
- Specify the SASL authentication mechanisms for the Hot Rod endpoint to use.
- Configure SASL authentication properties as appropriate.
4.2.1. Hot Rod Authentication Configuration
Hot Rod connector with SCRAM, DIGEST, and PLAIN authentication
<endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default"> <hotrod-connector> <authentication> <!-- Specifies SASL mechanisms to use for authentication. --> <!-- Defines the name that the server declares to clients. --> <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN" server-name="infinispan" qop="auth"/> </authentication> </hotrod-connector> </endpoints>
Hot Rod connector with Kerberos authentication
<endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default"> <hotrod-connector> <authentication> <!-- Enables the GSSAPI and GS2-KRB5 mechanisms for Kerberos authentication. --> <!-- Defines the server name, which is equivalent to the Kerberos service name, and specifies the Kerberos identity for the server. --> <sasl mechanisms="GSSAPI GS2-KRB5" server-name="datagrid" server-principal="hotrod/datagrid@INFINISPAN.ORG"/> </authentication> </hotrod-connector> </endpoints>
4.2.2. Hot Rod Endpoint Authentication Mechanisms
Data Grid supports the following SASL authentication mechanisms with the Hot Rod connector:
Authentication mechanism | Description | Related details
---|---|---
PLAIN | Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. | Similar to the Basic HTTP mechanism.
DIGEST-* | Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5, DIGEST-SHA, DIGEST-SHA-256, DIGEST-SHA-384, and DIGEST-SHA-512 hashing algorithms. | Similar to the Digest HTTP mechanism.
SCRAM-* | Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA, SCRAM-SHA-256, SCRAM-SHA-384, and SCRAM-SHA-512 hashing algorithms. | Similar to the Digest HTTP mechanism.
GSSAPI | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Similar to the SPNEGO HTTP mechanism.
GS2-KRB5 | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Similar to the SPNEGO HTTP mechanism.
EXTERNAL | Uses client certificates. | Similar to the CLIENT_CERT HTTP mechanism.
OAUTHBEARER | Uses OAuth tokens and requires a token-realm configuration. | Similar to the BEARER_TOKEN HTTP mechanism.
4.2.3. SASL Quality of Protection (QoP)
If SASL mechanisms support integrity and privacy protection settings, you can add them to your Hot Rod connector configuration with the qop attribute.
QoP setting | Description
---|---
auth | Authentication only.
auth-int | Authentication with integrity protection.
auth-conf | Authentication with integrity and privacy protection.
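For example, the following sketch requires SASL-layer encryption in addition to authentication; the mechanism choice is illustrative:
<hotrod-connector>
  <authentication>
    <!-- auth-conf also encrypts traffic at the SASL layer. -->
    <sasl mechanisms="DIGEST-SHA-512" server-name="infinispan" qop="auth-conf"/>
  </authentication>
</hotrod-connector>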
4.2.4. SASL Policies
SASL policies let you control which authentication mechanisms Hot Rod connectors can use.
Policy | Description | Default value
---|---|---
forward-secrecy | Use only SASL mechanisms that support forward secrecy between sessions. This means that breaking into one session does not automatically provide information for breaking into future sessions. | false
pass-credentials | Use only SASL mechanisms that require client credentials. | false
no-plain-text | Do not use SASL mechanisms that are susceptible to simple plain passive attacks. | false
no-active | Do not use SASL mechanisms that are susceptible to active, non-dictionary, attacks. | false
no-dictionary | Do not use SASL mechanisms that are susceptible to passive dictionary attacks. | false
no-anonymous | Do not use SASL mechanisms that accept anonymous logins. | true
Data Grid cache authorization restricts access to caches based on roles and permissions. If you configure cache authorization, you can then set <no-anonymous value="false" /> to allow anonymous login and delegate access logic to cache authorization.
Hot Rod connector with SASL policy configuration
<hotrod-connector socket-binding="hotrod" cache-container="default"> <authentication security-realm="ApplicationRealm"> <!-- Specifies multiple SASL authentication mechanisms for the Hot Rod connector. --> <sasl server-name="myhotrodserver" mechanisms="PLAIN DIGEST-MD5 GSSAPI EXTERNAL" qop="auth"> <!-- Defines policies for SASL mechanisms. --> <policy> <no-active value="true" /> <no-anonymous value="true" /> <no-plain-text value="true" /> </policy> </sasl> </authentication> </hotrod-connector>
As a result of the preceding configuration, the Hot Rod connector uses the GSSAPI mechanism because it is the only mechanism that complies with all policies.
4.3. Manually Configuring REST Authentication
Explicitly configure REST connector authentication to override the default HTTP authentication mechanisms that Data Grid servers use for security realms.
Procedure
- Add an authentication definition to the REST connector configuration.
- Specify which Data Grid security realm the REST connector uses for authentication.
- Specify the authentication mechanisms for the REST endpoint to use.
4.3.1. REST Authentication Configuration
REST connector with BASIC and DIGEST authentication
<endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default"> <rest-connector> <!-- Specifies HTTP mechanisms to use for authentication. --> <authentication mechanisms="DIGEST BASIC"/> </rest-connector> </endpoints>
REST connector with Kerberos authentication
<endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default"> <rest-connector> <!-- Enables the `SPNEGO` mechanism for Kerberos authentication and specifies an identity for the server. --> <authentication mechanisms="SPNEGO" server-principal="HTTP/localhost@INFINISPAN.ORG"/> </rest-connector> </endpoints>
4.3.2. REST Endpoint Authentication Mechanisms
Data Grid supports the following authentication mechanisms with the REST connector:
Authentication mechanism | Description | Related details
---|---|---
BASIC | Uses credentials in plain-text format. You should use BASIC authentication with encrypted connections only. | Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism.
DIGEST | Uses hashing algorithms and nonce values. REST connectors support SHA-512, SHA-256, and MD5 hashing algorithms. | Corresponds to the Digest HTTP authentication scheme and is similar to DIGEST-* SASL mechanisms.
SPNEGO | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms.
BEARER_TOKEN | Uses OAuth tokens and requires a token-realm configuration. | Corresponds to the Bearer HTTP authentication scheme and is similar to the OAUTHBEARER SASL mechanism.
CLIENT_CERT | Uses client certificates. | Similar to the EXTERNAL SASL mechanism.
4.4. Disabling Authentication
In local development environments or on isolated networks you can configure Data Grid to allow unauthenticated client requests.
When you disable user authentication you should also disable authorization in your Data Grid security configuration.
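When authorization is disabled, a cache container might look like the following sketch; the cache name is illustrative, and the procedure below covers the endpoint side:
<cache-container name="default">
  <!-- No <security><authorization/></security> element, so role-based permissions are not enforced. -->
  <distributed-cache name="mycache"/>
</cache-container>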
Procedure
- Open infinispan.xml for editing.
- Remove any security-realm attributes from the endpoints configuration.
- Ensure that the Hot Rod and REST connectors do not include any authentication configuration.
For example, the following configuration allows unauthenticated access to Data Grid:
<endpoints socket-binding="default"> <hotrod-connector name="hotrod"/> <rest-connector name="rest"/> </endpoints>
- Remove any authorization elements from the security configuration for the cache-container and each cache configuration.
Chapter 5. Encrypting Data Grid Server Connections
You can secure Data Grid Server connections using SSL/TLS encryption by configuring a keystore that contains public and private keys for Data Grid. You can also configure client certificate authentication if you require mutual TLS.
5.1. Configuring Data Grid Server Keystores
Add keystores to Data Grid Server and configure it to present SSL/TLS certificates that verify its identity to clients. If a security realm contains TLS/SSL identities, it encrypts any connections to Data Grid Server endpoints that use that security realm.
Prerequisites
- Create a keystore that contains certificates, or certificate chains, for Data Grid Server.
Data Grid Server supports the following keystore formats: JKS, JCEKS, PKCS12, BKS, BCFKS, and UBER.
In production environments, server certificates should be signed by a trusted Certificate Authority, either Root or Intermediate CA.
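For testing purposes you can generate a self-signed keystore with keytool; a minimal sketch in which the alias, distinguished name, and file name are assumptions:
$ keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 \
    -alias rhdg-server -dname "CN=datagrid.example.com" \
    -keystore server.pfx -storetype PKCS12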
Procedure
- Add the keystore that contains SSL/TLS identities for Data Grid Server to the $RHDG_HOME/server/conf directory.
- Add a server-identities definition to the Data Grid Server security realm.
- Specify the keystore file name with the path attribute.
- Provide the keystore password and certificate alias with the keystore-password and alias attributes.
Data Grid Server keystore configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <server-identities> <ssl> <!-- Adds a keystore that contains server certificates that provide SSL/TLS identities to clients. --> <keystore path="server.pfx" relative-to="infinispan.server.config.path" keystore-password="secret" alias="rhdg-server"/> </ssl> </server-identities> </security-realm> </security-realms> </security>
Next steps
Configure clients with a trust store so they can verify SSL/TLS identities for Data Grid Server.
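A minimal Java Hot Rod client sketch that verifies the server identity against a local trust store; the file name and password are assumptions, and the imports match the earlier Hot Rod example:
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);
// Trust store containing the server certificate or the CA that signed it.
builder.security().ssl()
       .trustStoreFileName("trust.p12")
       .trustStorePassword("secret".toCharArray());
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());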
5.1.1. Automatically Generating Keystores
Configure Data Grid servers to automatically generate keystores at startup.
Automatically generated keystores:
- Should not be used in production environments.
- Are generated whenever necessary; for example, while obtaining the first connection from a client.
- Contain certificates that you can use directly in Hot Rod clients.
Procedure
- Include the generate-self-signed-certificate-host attribute for the keystore element in the server configuration.
- Specify a hostname for the server certificate as the value.
SSL server identity with a generated keystore
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <server-identities> <ssl> <!-- Generates a keystore that includes a self-signed certificate with the specified hostname. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server" generate-self-signed-certificate-host="localhost"/> </ssl> </server-identities> </security-realm> </security-realms> </security>
5.1.2. Configuring TLS versions and cipher suites
When using SSL/TLS encryption to secure your deployment, you can configure Data Grid Server to use specific versions of the TLS protocol as well as specific cipher suites within the protocol.
Procedure
- Add the engine element to the SSL configuration for Data Grid Server.
- Configure Data Grid to use one or more TLS versions with the enabled-protocols attribute.
Data Grid Server supports TLS versions 1.2 and 1.3 by default. If appropriate you can set TLSv1.3 only to restrict the security protocol for client connections. Data Grid does not recommend enabling TLSv1.1 because it is an older protocol with limited support and provides weak security. You should never enable any version of TLS older than 1.1.
Warning: If you modify the SSL engine configuration for Data Grid Server you must explicitly configure TLS versions with the enabled-protocols attribute. Omitting the enabled-protocols attribute allows any TLS version.
<engine enabled-protocols="TLSv1.3 TLSv1.2" />
- Configure Data Grid to use one or more cipher suites with the enabled-ciphersuites attribute.
You must ensure that you set a cipher suite that supports any protocol features you plan to use; for example HTTP/2 ALPN.
SSL engine configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <server-identities> <ssl> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures Data Grid Server to use specific TLS versions and cipher suites. --> <engine enabled-protocols="TLSv1.3" enabled-ciphersuites="TLS_AES_256_GCM_SHA384 TLS_AES_128_GCM_SHA256 TLS_AES_128_CCM_8_SHA256"/> </ssl> </server-identities> </security-realm> </security-realms> </security>
5.2. Configuring Client Certificate Authentication
Configure Data Grid Server to use mutual TLS to secure client connections.
You can configure Data Grid to verify client identities from certificates in a trust store in two ways:
- Require a trust store that contains only the signing certificate, which is typically a Certificate Authority (CA). Any client that presents a certificate signed by the CA can connect to Data Grid.
- Require a trust store that contains all client certificates in addition to the signing certificate. Only clients that present a signed certificate that is present in the trust store can connect to Data Grid.
Alternatively to providing trust stores you can use shared system certificates.
Prerequisites
- Create a client trust store that contains either the CA certificate or all public certificates.
- Create a keystore for Data Grid Server and configure an SSL/TLS identity.
Procedure
- Add the require-ssl-client-auth="true" parameter to your endpoints configuration.
- Add the client trust store to the $RHDG_HOME/server/conf directory.
- Specify the path and password attributes for the truststore element in the Data Grid Server security realm configuration.
- Add the <truststore-realm/> element to the security realm if you want Data Grid Server to authenticate each client certificate.
Data Grid Server trust store realm configuration
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1"> <security-realms> <security-realm name="default"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path="trust.p12" relative-to="infinispan.server.config.path" password="secret"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> <!-- Configures Data Grid Server to require client certificates with the "require-ssl-client-auth" attribute. --> <endpoints xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd" xmlns="urn:infinispan:server:12.1" socket-binding="default" security-realm="default" require-ssl-client-auth="true"> <hotrod-connector> <!-- Configures the Hot Rod endpoint for client certificate authentication. --> <authentication> <sasl mechanisms="EXTERNAL" server-name="infinispan" qop="auth"/> </authentication> </hotrod-connector> <rest-connector> <!-- Configures the REST endpoint for client certificate authentication. --> <authentication mechanisms="CLIENT_CERT"/> </rest-connector> </endpoints>
Next steps
- Set up authorization with client certificates in the Data Grid Server configuration if you control access with security roles and permissions.
- Configure clients to negotiate SSL/TLS connections with Data Grid Server.
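On the client side, mutual TLS looks like the following Java Hot Rod sketch; file names and passwords are assumptions, and the imports match the earlier Hot Rod example:
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);
// EXTERNAL authenticates the client with its TLS certificate instead of a password.
builder.security().authentication().saslMechanism("EXTERNAL");
builder.security().ssl()
       .keyStoreFileName("client.p12")
       .keyStorePassword("secret".toCharArray())
       .trustStoreFileName("trust.p12")
       .trustStorePassword("secret".toCharArray());
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());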
Additional resources
- Configuring Hot Rod client encryption
- Using Shared System Certificates (Red Hat Enterprise Linux 7 Security Guide)
5.3. Configuring Authorization with Client Certificates
Enabling client certificate authentication means you do not need to specify Data Grid user credentials in client configuration. Instead, you must associate roles with the Common Name (CN) field in the client certificate(s).
Prerequisites
- Provide clients with a Java keystore that contains either their public certificates or part of the certificate chain, typically a public CA certificate.
- Configure Data Grid Server to perform client certificate authentication.
Procedure
- Enable the common-name-role-mapper in the security authorization configuration.
- Assign the Common Name (CN) from the client certificate a role with the appropriate permissions.

<cache-container name="certificate-authentication" statistics="true">
  <security>
    <authorization>
      <!-- Declare a role mapper that associates the common name (CN) field in client certificate trust stores with authorization roles. -->
      <common-name-role-mapper/>
      <!-- In this example, if a client certificate contains CN=Client1, then clients with matching certificates get ALL permissions. -->
      <role name="Client1" permissions="ALL"/>
    </authorization>
  </security>
</cache-container>
Chapter 6. Configuring Kerberos Identities for Data Grid Server
Provide Data Grid Server endpoints with Kerberos identities to secure connections with clients.
6.1. Setting Up Kerberos Identities
Kerberos identities use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords.
Keytab files can contain both user and service account principals. However, Data Grid servers use service account principals only. As a result, Data Grid servers can provide identities to clients and allow clients to authenticate with Kerberos servers.
In most cases, you create unique principals for the Hot Rod and REST connectors. For example, you have a "datagrid" server in the "INFINISPAN.ORG" domain. In this case you should create the following service principals:
- hotrod/datagrid@INFINISPAN.ORG identifies the Hot Rod service.
- HTTP/datagrid@INFINISPAN.ORG identifies the REST service.
Procedure
Create keytab files for the Hot Rod and REST services.
- Linux
$ ktutil
ktutil: addent -password -p HTTP/datagrid@INFINISPAN.ORG -k 1 -e aes256-cts
Password for HTTP/datagrid@INFINISPAN.ORG: [enter your password]
ktutil: wkt http.keytab
ktutil: quit
- Microsoft Windows
$ ktpass -princ HTTP/datagrid@INFINISPAN.ORG -pass * -mapuser INFINISPAN\USER_NAME
$ ktab -k http.keytab -a HTTP/datagrid@INFINISPAN.ORG
- Copy the keytab files to the $RHDG_HOME/server/conf directory.
- Add a server-identities definition to the Data Grid Server security realm.
- Specify the location of the keytab files that provide service principals to the Hot Rod and REST connectors.
- Name the Kerberos service principals.
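Before you restart the server, you can optionally verify the contents of a keytab file on Linux with the klist utility from the MIT Kerberos client tools:

$ klist -k http.keytab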
6.2. Kerberos Identity Configuration
The following example configures Kerberos identities for Data Grid Server:
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd"
          xmlns="urn:infinispan:server:12.1">
  <security-realms>
    <security-realm name="default">
      <server-identities>
        <!-- Specifies a keytab file that provides a Kerberos identity for the Hot Rod connector. -->
        <!-- Names the Kerberos service principal for the Hot Rod connector. -->
        <!-- The required="true" attribute specifies that the keytab file must be present when the server starts. -->
        <kerberos keytab-path="hotrod.keytab" principal="hotrod/datagrid@INFINISPAN.ORG" required="true"/>
        <!-- Specifies a keytab file that provides a Kerberos identity for the REST connector. -->
        <!-- Names the Kerberos service principal for the REST connector. -->
        <kerberos keytab-path="http.keytab" principal="HTTP/datagrid@INFINISPAN.ORG" required="true"/>
      </server-identities>
    </security-realm>
  </security-realms>
</security>
Chapter 7. Storing Data Grid Server Credentials in Keystores
External services require credentials to authenticate with Data Grid Server. To protect sensitive text strings such as passwords, add them to a credential keystore rather than directly in Data Grid Server configuration files.
You can then configure Data Grid Server to decrypt passwords for establishing connections with services such as databases or LDAP directories.
Plain-text passwords in $RHDG_HOME/server/conf
are unencrypted. Any user account with read access to the host filesystem can view plain-text passwords.
While credential keystores are password-protected stores for encrypted passwords, any user account with write access to the host filesystem can tamper with the keystore itself.
To completely secure Data Grid Server credentials, you should grant read-write access only to user accounts that can configure and run Data Grid Server.
7.1. Setting Up Credential Keystores
Create keystores that encrypt credentials for Data Grid Server access.
A credential keystore contains at least one alias that is associated with an encrypted password. After you create a keystore, you specify the alias in a connection configuration such as a database connection pool. Data Grid Server then decrypts the password for that alias from the keystore when the service attempts authentication.
You can create as many credential keystores with as many aliases as required.
Procedure
- Open a terminal in $RHDG_HOME.
- Create a keystore and add credentials to it with the credentials command.
  Tip: By default, keystores are of type PKCS12. Run help credentials for details on changing keystore defaults.
  The following example shows how to create a keystore that contains an alias of "dbpassword" for the password "changeme". When you create a keystore you also specify a password for the keystore with the -p argument.
- Linux
$ bin/cli.sh credentials add dbpassword -c changeme -p "secret1234!"
- Microsoft Windows
$ bin\cli.bat credentials add dbpassword -c changeme -p "secret1234!"
Check that the alias is added to the keystore.
$ bin/cli.sh credentials ls -p "secret1234!"
dbpassword
- Configure Data Grid to use the credential keystore.
  - Specify the name and location of the credential keystore in the credential-stores configuration.
  - Provide the credential keystore and alias in the credential-reference configuration.
    Tip: Attributes in the credential-reference configuration are optional.
    - store is required only if you have multiple keystores.
    - alias is required only if the keystore contains multiple aliases.
7.2. Credential Keystore Configuration
Review example configurations for credential keystores in Data Grid Server configuration.
Credential keystore
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd"
          xmlns="urn:infinispan:server:12.1">
  <!-- Uses a keystore to manage server credentials. -->
  <credential-stores>
    <!-- Specifies the name and filesystem location of a keystore. -->
    <credential-store name="credentials" path="credentials.pfx">
      <!-- Specifies the password for the credential keystore. -->
      <clear-text-credential clear-text="secret1234!"/>
    </credential-store>
  </credential-stores>
</security>
Datasource connection
<data-sources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd"
              xmlns="urn:infinispan:server:12.1">
  <data-source name="postgres" jndi-name="jdbc/postgres">
    <!-- Specifies the database username in the connection factory. -->
    <connection-factory driver="org.postgresql.Driver" username="dbuser"
                        url="${org.infinispan.server.test.postgres.jdbcUrl}">
      <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. -->
      <credential-reference store="credentials" alias="dbpassword"/>
    </connection-factory>
    <connection-pool max-size="10" min-size="1" background-validation="1000"
                     idle-removal="1" initial-size="1" leak-detection="10000"/>
  </data-source>
</data-sources>
LDAP connection
<security xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:server:12.1 https://infinispan.org/schemas/infinispan-server-12.1.xsd"
          xmlns="urn:infinispan:server:12.1">
  <credential-stores>
    <credential-store name="credentials" path="credentials.pfx">
      <clear-text-credential clear-text="secret1234!"/>
    </credential-store>
  </credential-stores>
  <security-realms>
    <security-realm name="default">
      <!-- Specifies the LDAP principal in the connection factory. -->
      <ldap-realm name="ldap" url="ldap://my-ldap-server:10389"
                  principal="uid=admin,ou=People,dc=infinispan,dc=org"
                  connection-timeout="3000" read-timeout="30000"
                  connection-pooling="true" referral-mode="ignore" page-size="30">
        <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. -->
        <credential-reference store="credentials" alias="ldappassword"/>
      </ldap-realm>
    </security-realm>
  </security-realms>
</security>
Chapter 8. Endpoint IP Filtering
Configure IP Filtering rules on the endpoints to accept or reject connections based on the client address.
8.1. Data Grid Server IP Filter Configuration
Data Grid endpoints and connectors can specify one or more IP filtering rules. These rules specify the action to take when a client whose address matches a supplied CIDR block connects. IP filtering rules are applied in order, up to the first rule that matches.
A CIDR block is a compact representation of an IP address and its associated network mask. CIDR notation specifies an IP address, a slash ('/') character, and a decimal number. The decimal number is the count of leading 1 bits in the network mask. The number can also be thought of as the width, in bits, of the network prefix. The IP address in CIDR notation is always represented according to the standards for IPv4 or IPv6.
The address can denote a specific interface address, including a host identifier, such as 10.0.0.1/8, or it can be the beginning address of an entire network interface range using a host identifier of 0, as in 10.0.0.0/8 or 10/8.
For example:
- 192.168.100.14/24 represents the IPv4 address 192.168.100.14 and its associated network prefix 192.168.100.0, or equivalently, its subnet mask 255.255.255.0, which has 24 leading 1-bits.
- The IPv4 block 192.168.100.0/22 represents the 1024 IPv4 addresses from 192.168.100.0 to 192.168.103.255.
- The IPv6 block 2001:db8::/48 represents the block of IPv6 addresses from 2001:db8:0:0:0:0:0:0 to 2001:db8:0:ffff:ffff:ffff:ffff:ffff.
- ::1/128 represents the IPv6 loopback address. Its prefix length is 128, which is the number of bits in the address.
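To make the prefix arithmetic concrete, the following self-contained Java sketch, which is not part of Data Grid, shows how an IPv4 address is matched against a CIDR block by masking both addresses with the network prefix:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrMatch {

    // Converts a 4-byte IPv4 address to a 32-bit integer.
    private static int toInt(byte[] bytes) {
        return ((bytes[0] & 0xFF) << 24) | ((bytes[1] & 0xFF) << 16)
                | ((bytes[2] & 0xFF) << 8) | (bytes[3] & 0xFF);
    }

    // Returns true if the address falls within the CIDR block.
    static boolean matches(String cidr, String address) throws UnknownHostException {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        int network = toInt(InetAddress.getByName(parts[0]).getAddress());
        int candidate = toInt(InetAddress.getByName(address).getAddress());
        // A /0 prefix matches everything; otherwise keep the leading "prefix" bits.
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        return (network & mask) == (candidate & mask);
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(matches("192.168.100.0/22", "192.168.103.255")); // true
        System.out.println(matches("192.168.100.0/22", "192.168.104.0"));   // false
    }
}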
<endpoints socket-binding="default" security-realm="default">
  <ip-filter>
    <accept from="192.168.0.0/16"/>
    <accept from="10.0.0.0/8"/>
    <reject from="/0"/>
  </ip-filter>
  <hotrod-connector name="hotrod"/>
  <rest-connector name="rest"/>
</endpoints>
As a result of the preceding configuration, Data Grid servers accept connections only from addresses in the 192.168.0.0/16 and 10.0.0.0/8 CIDR blocks. Data Grid servers reject all other connections.
8.2. Inspecting and Modifying Data Grid Server IP Filter Rules
You can inspect and modify server IP filter rules with the CLI.
Procedure
- Open a terminal in $RHDG_HOME.
- Inspect and modify the IP filter rules with the server connector ipfilter command as required.
  List all IP filtering rules active on a connector across the cluster:
[//containers/default]> server connector ipfilter ls endpoint-default
Set IP filtering rules across the cluster.
NoteThis command replaces any existing rules.
[//containers/default]> server connector ipfilter set endpoint-default --rules=ACCEPT/192.168.0.0/16,REJECT/10.0.0.0/8
Remove all IP filtering rules on a connector across the cluster.
[//containers/default]> server connector ipfilter clear endpoint-default
Chapter 9. Configuring User Authorization
Authorization is a security feature that requires users to have certain permissions before they can access caches or interact with Data Grid resources. You assign roles to users that provide different levels of permissions, from read-only access to full, super user privileges.
9.1. Enabling Authorization in Cache Configuration
Use authorization in your cache configuration to restrict user access. Before they can read or write cache entries, or create and delete caches, users must have a role with a sufficient level of permission.
Procedure
- Open your infinispan.xml configuration for editing.
- If it is not already declared, add the <authorization /> tag inside the security element for the cache-container.
  This enables authorization for the Cache Manager and provides a global set of roles and permissions that caches can inherit.
- Add the <authorization /> tag to each cache for which Data Grid restricts access based on user roles.
The following configuration example shows how to use implicit authorization configuration with default roles and permissions:
<infinispan>
  <cache-container default-cache="rbac-cache" name="restricted">
    <security>
      <!-- Enable authorization with the default roles and permissions. -->
      <authorization />
    </security>
    <local-cache name="rbac-cache">
      <security>
        <!-- Inherit authorization settings from the cache-container. -->
        <authorization/>
      </security>
    </local-cache>
  </cache-container>
</infinispan>
9.2. User Roles and Permissions
Data Grid includes a default set of roles that grant users permissions to access data and interact with Data Grid resources.
ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals with authorization roles. ClusterRoleMapper matches principal names to role names: a user named admin automatically gets admin permissions, a user named deployer gets deployer permissions, and so on.
Role | Permissions | Description
---|---|---
admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle.
deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions.
application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions.
observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions.
monitor | MONITOR | Can view statistics via JMX and the metrics endpoint.
9.3. How Security Authorization Works
Data Grid authorization secures your installation by restricting user access.
User applications or clients must belong to a role that is assigned with sufficient permissions before they can perform operations on Cache Managers or caches.
For example, you configure authorization on a specific cache instance so that invoking Cache.get() requires an identity to be assigned a role with read permission, while Cache.put() requires a role with write permission.
In this scenario, if a user application or client with the reader role attempts to write an entry, Data Grid denies the request and throws a security exception. If a user application or client with the writer role sends a write request, Data Grid validates authorization and issues a token for subsequent operations.
Identities
Identities are security Principals of type java.security.Principal. Subjects, implemented with the javax.security.auth.Subject class, represent a group of security Principals. In other words, a Subject represents a user and all groups to which it belongs.
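The relationship between Subjects and Principals can be illustrated with a short, hypothetical Java sketch; the principal names are arbitrary examples:

import java.security.Principal;
import javax.security.auth.Subject;

public class SubjectSketch {

    // Minimal Principal implementation for illustration only.
    static final class NamedPrincipal implements Principal {
        private final String name;
        NamedPrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        // The first principal identifies the user ...
        subject.getPrincipals().add(new NamedPrincipal("katie"));
        // ... and additional principals identify the groups the user belongs to.
        subject.getPrincipals().add(new NamedPrincipal("admins"));
        subject.getPrincipals().forEach(p -> System.out.println(p.getName()));
    }
}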
Identities to roles
Data Grid uses role mappers so that security principals correspond to roles, to which you assign one or more permissions.
The following image illustrates how security principals correspond to roles:
9.3.1. Permissions
Authorization roles have different permissions with varying levels of access to Data Grid. Permissions let you restrict user access to both Cache Managers and caches.
9.3.1.1. Cache Manager permissions
Permission | Description
---|---
CONFIGURATION | Defines new cache configurations.
LISTEN | Registers listeners against a Cache Manager.
LIFECYCLE | Stops the Cache Manager.
CREATE | Create and remove container resources such as caches, counters, schemas, and scripts.
MONITOR | Allows access to JMX statistics and the metrics endpoint.
ALL | Includes all Cache Manager permissions.
9.3.1.2. Cache permissions
Permission | Description
---|---
READ | Retrieves entries from a cache.
WRITE | Writes, replaces, removes, and evicts data in a cache.
EXEC | Allows code execution against a cache.
LISTEN | Registers listeners against a cache.
BULK_READ | Executes bulk retrieve operations.
BULK_WRITE | Executes bulk write operations.
LIFECYCLE | Starts and stops a cache.
ADMIN | Allows access to underlying components and internal structures.
MONITOR | Allows access to JMX statistics and the metrics endpoint.
ALL | Includes all cache permissions.
ALL_READ | Combines the READ and BULK_READ permissions.
ALL_WRITE | Combines the WRITE and BULK_WRITE permissions.
9.3.2. Role Mappers
Data Grid includes a PrincipalRoleMapper
API that maps security Principals in a Subject to authorization roles that you can assign to users.
9.3.2.1. Cluster role mappers
ClusterRoleMapper uses a persistent replicated cache to dynamically store principal-to-role mappings for the default roles and permissions. By default, it uses the Principal name as the role name and implements org.infinispan.security.MutableRoleMapper, which exposes methods to change role mappings at runtime.
- Java class: org.infinispan.security.mappers.ClusterRoleMapper
- Declarative configuration: <cluster-role-mapper />
9.3.2.2. Identity role mappers
IdentityRoleMapper uses the Principal name as the role name.
- Java class: org.infinispan.security.mappers.IdentityRoleMapper
- Declarative configuration: <identity-role-mapper />
9.3.2.3. CommonName role mappers
CommonNameRoleMapper uses the Common Name (CN) as the role name if the Principal name is a Distinguished Name (DN). For example, the DN cn=managers,ou=people,dc=example,dc=com maps to the managers role.
- Java class: org.infinispan.security.mappers.CommonNameRoleMapper
- Declarative configuration: <common-name-role-mapper />
9.3.2.4. Custom role mappers
Custom role mappers are implementations of org.infinispan.security.PrincipalRoleMapper.
- Declarative configuration: <custom-role-mapper class="my.custom.RoleMapper" />
9.4. Access Control List (ACL) Cache
Data Grid internally caches the roles that you grant to users for optimal performance. Whenever you grant or deny roles to users, Data Grid flushes the ACL cache to ensure user permissions are applied correctly.
If necessary, you can disable the ACL cache or configure it with the cache-size and cache-timeout attributes.
<security cache-size="1000" cache-timeout="300000">
  <authorization />
</security>
9.5. Customizing Roles and Permissions
You can customize authorization settings in your Data Grid configuration to use role mappers with different combinations of roles and permissions.
Procedure
- Open your infinispan.xml configuration for editing.
- Configure authorization for the cache-container by declaring a role mapper and a set of roles and permissions.
- Configure authorization for caches to restrict access based on user roles.
The following configuration example shows how to configure security authorization with roles and permissions:
<infinispan> <cache-container default-cache="restricted" name="custom-authorization"> <security> <authorization> <!-- Declare a role mapper that associates a security principal to each role. --> <identity-role-mapper /> <!-- Specify user roles and corresponding permissions. --> <role name="admin" permissions="ALL" /> <role name="reader" permissions="READ" /> <role name="writer" permissions="WRITE" /> <role name="supervisor" permissions="READ WRITE EXEC"/> </authorization> </security> <local-cache name="implicit-authorization"> <security> <!-- Inherit roles and permissions from the cache-container. --> <authorization/> </security> </local-cache> <local-cache name="restricted"> <security> <!-- Explicitly define which roles can access the cache. --> <authorization roles="admin supervisor"/> </security> </local-cache> </cache-container> </infinispan>
9.6. Disabling Security Authorization
In local development environments you can disable authorization so that users do not need roles and permissions. Disabling security authorization means that any user can access data and interact with Data Grid resources.
Procedure
- Open your infinispan.xml configuration for editing.
- Remove any authorization elements from the security configuration for the cache-container and each cache configuration.
9.7. Configuring Authorization with Client Certificates
Enabling client certificate authentication means you do not need to specify Data Grid user credentials in client configuration. Instead, you must associate roles with the Common Name (CN) field in the client certificate(s).
Prerequisites
- Provide clients with a Java keystore that contains either their public certificates or part of the certificate chain, typically a public CA certificate.
- Configure Data Grid Server to perform client certificate authentication.
Procedure
- Enable the common-name-role-mapper in the security authorization configuration.
- Assign the Common Name (CN) from the client certificate a role with the appropriate permissions.

<cache-container name="certificate-authentication" statistics="true">
  <security>
    <authorization>
      <!-- Declare a role mapper that associates the common name (CN) field in client certificate trust stores with authorization roles. -->
      <common-name-role-mapper/>
      <!-- In this example, if a client certificate contains CN=Client1, then clients with matching certificates get ALL permissions. -->
      <role name="Client1" permissions="ALL"/>
    </authorization>
  </security>
</cache-container>
Chapter 10. Setting Up Data Grid Clusters
Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer.
10.1. Default JGroups Stacks
Data Grid provides default JGroups stack files, default-jgroups-*.xml, in the default-configs directory inside the infinispan-core-12.1.11.Final-redhat-00001.jar file. You can find this JAR file in the $RHDG_HOME/lib directory.
File name | Stack name | Description
---|---|---
default-jgroups-udp.xml | udp | Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets.
default-jgroups-tcp.xml | tcp | Uses TCP for transport and the MPING protocol for discovery, which uses UDP multicast. Suitable for smaller clusters (under 100 nodes) that use distributed caches because TCP is more efficient than UDP as a point-to-point protocol.
default-jgroups-kubernetes.xml | kubernetes | Uses TCP for transport and DNS_PING for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available.
default-jgroups-ec2.xml | ec2 | Uses TCP for transport and NATIVE_S3_PING for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available.
default-jgroups-google.xml | google | Uses TCP for transport and GOOGLE_PING2 for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available.
default-jgroups-azure.xml | azure | Uses TCP for transport and AZURE_PING for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available.
10.2. Cluster Discovery Protocols
Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters.
There are two types of discovery mechanisms that Data Grid can use:
- Generic discovery protocols that work on most networks and do not rely on external services.
- Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters. For instance, the DNS_PING protocol performs discovery through DNS server records.
Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose.
Additional resources
- JGroups Discovery Protocols
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
10.2.1. PING
PING, or UDPPING, is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol.
When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node and its own address. If no nodes respond to the PING request, the joining node becomes the coordinator node in a new cluster.
PING configuration example
<PING num_discovery_runs="3"/>
10.2.2. TCPPING
TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members.
With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically.
TCPPING configuration example
<TCP bind_port="7800" />
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}"
         port_range="0"
         num_initial_members="3"/>
10.2.3. MPING
MPING uses IP multicast to discover the initial membership of Data Grid clusters.
You can use MPING to replace TCPPING discovery with TCP stacks and use multicasting for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks.
MPING configuration example
<MPING mcast_addr="${jgroups.mcast_addr:228.6.7.8}"
       mcast_port="${jgroups.mcast_port:46655}"
       num_discovery_runs="3"
       ip_ttl="${jgroups.udp.ip_ttl:2}"/>
10.2.4. TCPGOSSIP
Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes.
You inject the address (IP:PORT) of the Gossip router into Data Grid nodes as follows:
- Pass the address as a system property to the JVM; for example, -DGossipRouterAddress="10.10.2.4[12001]".
- Reference that system property in the JGroups configuration file.
Gossip router configuration example
<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000"
           initial_hosts="${GossipRouterAddress}"
           num_initial_members="3" />
10.2.5. JDBC_PING
JDBC_PING uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection.
Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database.
JDBC_PING configuration example
<JDBC_PING connection_url="jdbc:mysql://localhost:3306/database_name"
           connection_username="user"
           connection_password="password"
           connection_driver="com.mysql.jdbc.Driver"/>
Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING.
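For Data Grid Server, that typically means copying the driver JAR into the server/lib directory before startup. The driver file name below is illustrative:

$ cp mysql-connector-java-8.0.28.jar $RHDG_HOME/server/lib/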
10.2.6. DNS_PING
JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.
DNS_PING configuration example
<dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" />
Additional resources
- JGroups DNS_PING
- DNS for Services and Pods (Kubernetes documentation for adding DNS entries)
10.2.7. Cloud Discovery Protocols
Data Grid includes default JGroups stacks that use discovery protocol implementations that are specific to cloud providers.
Discovery protocol | Default stack file
---|---
NATIVE_S3_PING | default-jgroups-ec2.xml
GOOGLE_PING2 | default-jgroups-google.xml
AZURE_PING | default-jgroups-azure.xml
Providing Dependencies for Cloud Discovery Protocols
To use the NATIVE_S3_PING, GOOGLE_PING2, or AZURE_PING cloud discovery protocols, you need to provide dependent libraries to Data Grid.
Procedure
- Download the artifact JAR file and all dependencies.
- Add the artifact JAR file and all dependencies to the $RHDG_HOME/server/lib directory of your Data Grid Server installation.
For more details, see Downloading artifacts for JGroups cloud discovery protocols for Data Grid Server (Red Hat knowledgebase article).
You can then configure the cloud discovery protocol as part of a JGroups stack file or with system properties.
10.3. Using the Default JGroups Stacks
Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels.
Data Grid provides preconfigured JGroups stacks for UDP and TCP protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements.
Procedure
Do one of the following to use one of the default JGroups stacks:
- Use the stack attribute in your infinispan.xml file.

<infinispan>
  <cache-container default-cache="replicatedCache">
    <!-- Use the default UDP stack for cluster transport. -->
    <transport cluster="${infinispan.cluster.name}"
               stack="udp"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>

- Use the cluster-stack argument to set the JGroups stack file when Data Grid Server starts:

$ bin/server.sh --cluster-stack=udp
Verification
Data Grid logs the following message to indicate which stack it uses:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp
Additional resources
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
10.4. Customizing JGroups Stacks
Adjust and tune properties to create a cluster transport configuration that works for your network requirements.
Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties.
Procedure
- Create a new JGroups stack declaration in your infinispan.xml file.
- Add the extends attribute and specify a JGroups stack to inherit properties from.
- Use the stack.combine attribute to modify properties for protocols configured in the inherited stack.
- Use the stack.position attribute to define the location for your custom stack.
- Specify the stack name as the value for the stack attribute in the transport configuration.
For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack as follows:
<infinispan>
  <jgroups>
    <!-- Creates a custom JGroups stack named "my-stack". -->
    <!-- Inherits properties from the default TCP stack. -->
    <stack name="my-stack" extends="tcp">
      <!-- Uses TCPGOSSIP as the discovery mechanism instead of MPING. -->
      <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                 stack.combine="REPLACE" stack.position="MPING" />
      <!-- Removes the FD_SOCK protocol from the stack. -->
      <FD_SOCK stack.combine="REMOVE"/>
      <!-- Modifies the timeout value for the VERIFY_SUSPECT protocol. -->
      <VERIFY_SUSPECT timeout="2000"/>
      <!-- Adds SYM_ENCRYPT to the stack after VERIFY_SUSPECT. -->
      <SYM_ENCRYPT sym_algorithm="AES" keystore_name="mykeystore.p12" keystore_type="PKCS12"
                   store_password="changeit" key_password="changeit" alias="myKey"
                   stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT" />
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <!-- Uses "my-stack" for cluster transport. -->
    <transport cluster="${infinispan.cluster.name}" stack="my-stack"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>
Check Data Grid logs to ensure it uses the stack.
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack
Reference
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
10.4.1. Inheritance Attributes
When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending.
- stack.position specifies protocols to modify.
- stack.combine uses the following values to extend JGroups stacks:

Value | Description
---|---
COMBINE | Overrides protocol properties.
REPLACE | Replaces protocols.
INSERT_AFTER | Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point. Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as NAKACK2 after the SYM_ENCRYPT or ASYM_ENCRYPT protocol so that NAKACK2 is secured.
INSERT_BEFORE | Inserts a protocol into the stack before another protocol. Affects the protocol that you specify as the insertion point.
REMOVE | Removes protocols from the stack.
10.5. Using JGroups System Properties
Pass system properties to Data Grid at startup to tune cluster transport.
Procedure
-
Use
-D<property-name>=<property-value>
arguments to set JGroups system properties as required.
For example, set a custom bind port and IP address as follows:
$ bin/server.sh -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0
10.5.1. Cluster Transport Properties
Use the following properties to customize JGroups cluster transport.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.bind.address | Bind address for cluster transport. | SITE_LOCAL | Optional
jgroups.bind.port | Bind port for the socket. | 7800 | Optional
jgroups.mcast_addr | IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. | 228.6.7.8 | Optional
jgroups.mcast_port | Port for the multicast socket. | 46655 | Optional
jgroups.ip_ttl | Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | 2 | Optional
jgroups.thread_pool.min_threads | Minimum number of threads for the thread pool. | 0 | Optional
jgroups.thread_pool.max_threads | Maximum number of threads for the thread pool. | 200 | Optional
jgroups.join_timeout | Maximum number of milliseconds to wait for join requests to succeed. | 2000 | Optional
jgroups.thread_dumps_threshold | Number of times a thread pool needs to be full before a thread dump is logged. | 10000 | Optional
10.5.2. System Properties for Cloud Discovery Protocols
Use the following properties to configure JGroups discovery protocols for hosted platforms.
10.5.2.1. Amazon EC2
System properties for configuring NATIVE_S3_PING
.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.s3.region_name | Name of the Amazon S3 region. | No default value. | Optional
jgroups.s3.bucket_name | Name of the Amazon S3 bucket. The name must exist and be unique. | No default value. | Optional
10.5.2.2. Google Cloud Platform
System properties for configuring GOOGLE_PING2
.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.google.bucket_name | Name of the Google Compute Engine bucket. The name must exist and be unique. | No default value. | Required
10.5.2.3. Azure
System properties for AZURE_PING
.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.azure.storage_account_name | Name of the Azure storage account. The name must exist and be unique. | No default value. | Required
jgroups.azure.storage_access_key | Name of the Azure storage access key. | No default value. | Required
jgroups.azure.container | Valid DNS name of the container that stores ping information. | No default value. | Required
10.5.2.4. OpenShift
System properties for DNS_PING
.
System Property | Description | Default Value | Required/Optional
---|---|---|---
jgroups.dns.query | Sets the DNS record that returns cluster members. | No default value. | Required
10.6. Using Inline JGroups Stacks
You can insert complete JGroups stack definitions into infinispan.xml files.
Procedure
Embed a custom JGroups stack declaration in your infinispan.xml file.

<infinispan>
  <!-- Contains one or more JGroups stack definitions. -->
  <jgroups>
    <!-- Defines a custom JGroups stack named "prod". -->
    <stack name="prod">
      <TCP bind_port="7800" port_range="30" recv_buf_size="20000000" send_buf_size="640000"/>
      <MPING break_on_coord_rsp="true"
             mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
             mcast_port="${jgroups.mping.mcast_port:43366}"
             num_discovery_runs="3"
             ip_ttl="${jgroups.udp.ip_ttl:2}"/>
      <MERGE3 />
      <FD_SOCK />
      <FD_ALL timeout="3000" interval="1000" timeout_check_interval="1000" />
      <VERIFY_SUSPECT timeout="1000" />
      <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="100"
                      xmit_table_num_rows="50" xmit_table_msgs_per_row="1024"
                      xmit_table_max_compaction_time="30000" />
      <UNICAST3 xmit_interval="100" xmit_table_num_rows="50"
                xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" />
      <pbcast.STABLE stability_delay="200" desired_avg_gossip="2000" max_bytes="1M" />
      <pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:2000}" />
      <UFC max_credits="4m" min_threshold="0.40" />
      <MFC max_credits="4m" min_threshold="0.40" />
      <FRAG3 />
    </stack>
  </jgroups>
  <cache-container default-cache="replicatedCache">
    <!-- Uses "prod" for cluster transport. -->
    <transport cluster="${infinispan.cluster.name}" stack="prod"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>
10.7. Using External JGroups Stacks
Reference external files that define custom JGroups stacks in infinispan.xml
files.
Procedure
Add custom JGroups stack files to the $RHDG_HOME/server/conf directory.
Alternatively, you can specify an absolute path when you declare the external stack file.
Reference the external stack file with the stack-file element.

<infinispan>
  <jgroups>
    <!-- Creates a "prod-tcp" stack that references an external file. -->
    <stack-file name="prod-tcp" path="prod-jgroups-tcp.xml"/>
  </jgroups>
  <cache-container default-cache="replicatedCache">
    <!-- Use the "prod-tcp" stack for cluster transport. -->
    <transport stack="prod-tcp" />
    <replicated-cache name="replicatedCache"/>
  </cache-container>
  <!-- Cache configuration goes here. -->
</infinispan>
10.8. Encrypting Cluster Transport
Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join.
10.8.1. Data Grid Cluster Security
To secure cluster traffic, you configure Data Grid nodes to encrypt JGroups message payloads with secret keys.
Data Grid nodes can obtain secret keys from either:
- The coordinator node (asymmetric encryption).
- A shared keystore (symmetric encryption).
Retrieving secret keys from coordinator nodes
You configure asymmetric encryption by adding the ASYM_ENCRYPT
protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys.
When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks.
Asymmetric encryption secures cluster traffic as follows:
- The first node in the Data Grid cluster, the coordinator node, generates a secret key.
- A joining node performs certificate authentication with the coordinator to mutually verify identity.
- The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
- The coordinator node encrypts the secret key with the public key and returns it to the joining node.
- The joining node decrypts and installs the secret key.
- The node joins the cluster, encrypting and decrypting messages with the secret key.
Retrieving secret keys from shared keystores
You configure symmetric encryption by adding the SYM_ENCRYPT
protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide.
- Nodes install the secret key from a keystore on the Data Grid classpath at startup.
- Nodes join clusters, encrypting and decrypting messages with the secret key.
Comparison of asymmetric and symmetric encryption
ASYM_ENCRYPT
with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT
. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.
SYM_ENCRYPT
, on the other hand, is faster than ASYM_ENCRYPT
because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT
is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.
10.8.2. Configuring Cluster Transport with Asymmetric Encryption
Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages.
Procedure
- Create a keystore with certificate chains that enables Data Grid to verify node identity.
- Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
- Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example:

<infinispan>
  <jgroups>
    <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. -->
    <stack name="encrypt-tcp" extends="tcp">
      <!-- Adds a keystore that nodes use to perform certificate authentication. -->
      <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT. -->
      <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks" keystore_password="changeit"
                        stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT"/>
      <!-- Configures ASYM_ENCRYPT. -->
      <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. -->
      <!-- The use_external_key_exchange="true" attribute configures nodes to use the SSL_KEY_EXCHANGE protocol for certificate authentication. -->
      <ASYM_ENCRYPT asym_keylength="2048" asym_algorithm="RSA"
                    change_key_on_coord_leave="false" change_key_on_leave="false"
                    use_external_key_exchange="true"
                    stack.combine="INSERT_BEFORE" stack.position="pbcast.NAKACK2"/>
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <!-- Configures the cluster to use the JGroups stack. -->
    <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT
and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example ASYM_ENCRYPT
configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.
10.8.3. Configuring Cluster Transport with Symmetric Encryption
Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide.
Procedure
- Create a keystore that contains a secret key.
- Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
- Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration.

<infinispan>
  <jgroups>
    <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. -->
    <stack name="encrypt-tcp" extends="tcp">
      <!-- Adds a keystore from which nodes obtain secret keys. -->
      <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT. -->
      <SYM_ENCRYPT keystore_name="myKeystore.p12" keystore_type="PKCS12"
                   store_password="changeit" key_password="changeit" alias="myKey"
                   stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT"/>
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <!-- Configures the cluster to use the JGroups stack. -->
    <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use SYM_ENCRYPT
and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example SYM_ENCRYPT
configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.
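For the first step of the preceding procedure, one possible way to generate a keystore that contains a secret key is the JDK keytool utility. The alias, passwords, and file name below match the example configuration but are otherwise illustrative; depending on your JDK version, you might need a keystore type with secret key support, such as JCEKS, instead of PKCS12:

$ keytool -genseckey -alias myKey -keyalg AES -keysize 128 -keystore myKeystore.p12 -storetype pkcs12 -storepass changeit -keypass changeit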
10.9. TCP and UDP Ports for Cluster Traffic
Data Grid uses the following ports for cluster transport messages:
Default Port | Protocol | Description
---|---|---
7800 | TCP/UDP | JGroups cluster bind port
46655 | UDP | JGroups multicast
Cross-Site Replication
Data Grid uses the following ports for the JGroups RELAY2 protocol:
7900
- For Data Grid clusters running on OpenShift.
7800
- If using UDP for traffic between nodes and TCP for traffic between clusters.
7801
- If using TCP for traffic between nodes and TCP for traffic between clusters.
Chapter 11. Remotely Creating Data Grid Caches
Add caches to Data Grid Server so you can store data.
11.1. Cache Configuration with Data Grid Server
Caches configure the data container on Data Grid Server.
You create caches at run-time by adding definitions based on org.infinispan templates or Data Grid configuration through the console, the Command Line Interface (CLI), the Hot Rod endpoint, or the REST endpoint.
When you create caches at run-time, Data Grid Server replicates your cache definitions across the cluster.
Configuration that you declare directly in infinispan.xml
is not automatically synchronized across Data Grid clusters. In this case you should use configuration management tooling, such as Ansible or Chef, to ensure that configuration is propagated to all nodes in your cluster.
11.2. Default Cache Manager
Data Grid Server provides a default Cache Manager configuration. When you start Data Grid Server, it instantiates the Cache Manager so you can remotely create caches at run-time.
Default Cache Manager
<!-- Creates a Cache Manager named "default" and exports metrics. -->
<cache-container name="default" statistics="true">
  <!-- Adds cluster transport that uses the default JGroups TCP stack. -->
  <transport cluster="${infinispan.cluster.name:cluster}"
             stack="${infinispan.cluster.stack:tcp}"
             node-name="${infinispan.node.name:}"/>
</cache-container>
Examining the Cache Manager
After you start Data Grid Server and add user credentials, you can access the default Cache Manager through the Command Line Interface (CLI) or REST endpoint as follows:
- CLI: Use the describe command in the default container.

[//containers/default]> describe

- REST: Navigate to <server_hostname>:11222/rest/v2/cache-managers/default/ in any browser.
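For example, with curl; the user name and password are placeholders for the credentials you created:

$ curl -u username:password http://127.0.0.1:11222/rest/v2/cache-managers/default/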
11.3. Creating Caches with the Data Grid Console
Dynamically add caches from templates or configuration files through the Data Grid console.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Navigate to <server_hostname>:11222/console/ in any browser.
- Log in to the console.
- Open the Data Container view.
- Select Create Cache and then add a cache from a template or with Data Grid configuration in XML or JSON format.
- Return to the Data Container view and verify your Data Grid cache.
11.4. Creating Caches with the Data Grid Command Line Interface (CLI)
Use the Data Grid CLI to add caches from templates or configuration files in XML or JSON format.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Create a CLI connection to Data Grid.
- Add cache definitions with the create cache command.
  - Add a cache definition from an XML or JSON file with the --file option.

[//containers/default]> create cache --file=configuration.xml mycache

  - Add a cache definition from a template with the --template option.

[//containers/default]> create cache --template=org.infinispan.DIST_SYNC mycache

  Tip: Press the tab key after the --template= argument to list available cache templates.
- Verify the cache exists with the ls command.

[//containers/default]> ls caches
mycache

- Retrieve the cache configuration with the describe command.

[//containers/default]> describe caches/mycache
11.5. Creating Remote Caches with Hot Rod Clients
When Hot Rod Java clients attempt to access caches that do not exist, remoteCacheManager.getCache("myCache") invocations return null. To avoid this scenario, you can configure Hot Rod clients to create caches on first access using cache configuration.
Procedure
-
Use the
remoteCache()
method in theConfigurationBuilder
or use theconfiguration
andconfiguration_uri
properties inhotrod-client.properties
.
ConfigurationBuilder
File file = new File("path/to/infinispan.xml");
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.remoteCache("another-cache")
       .configuration("<distributed-cache name=\"another-cache\"/>");
builder.remoteCache("my.other.cache")
       .configurationURI(file.toURI());
hotrod-client.properties
infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name="another-cache"/>
infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml
When using hotrod-client.properties with cache names that contain the . character, you must enclose the cache name in square brackets, as in the preceding example.
You can also create remote caches through the RemoteCacheManager API in other ways, such as the following example that adds a cache configuration with the XMLStringConfiguration() method and then calls the getOrCreateCache() method.
However, Data Grid does not recommend this approach because it can be more difficult to ensure XML validity and is generally a more cumbersome way to create caches. If you are creating complex cache configurations, you should save them to separate files in your project and reference them in your Hot Rod client configuration.
String cacheName = "CacheWithXMLConfiguration";
String xml = String.format("<distributed-cache name=\"%s\" mode=\"SYNC\">" +
                           "<encoding media-type=\"application/x-protostream\"/>" +
                           "<locking isolation=\"READ_COMMITTED\"/>" +
                           "<transaction mode=\"NON_XA\"/>" +
                           "<expiration lifespan=\"60000\" interval=\"20000\"/>" +
                           "</distributed-cache>", cacheName);
remoteCacheManager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
Hot Rod code examples
Try some Data Grid code tutorials that show you how to create remote caches in different ways with the Hot Rod Java client.
Visit Data Grid code examples.
11.6. Creating Data Grid Caches with HTTP Clients
Add cache definitions to Data Grid servers through the REST endpoint with any suitable HTTP client.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Create caches with POST requests to /rest/v2/caches/$cacheName.
Use XML or JSON configuration by including it in the request payload.
POST /rest/v2/caches/mycache
Use the ?template= parameter to create caches from org.infinispan templates.
POST /rest/v2/caches/mycache?template=org.infinispan.DIST_SYNC
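As a concrete sketch with curl, the following requests create a cache from an inline XML configuration and from a template. The server address and credentials are placeholder assumptions:

$ curl -u username:password -X POST -H "Content-Type: application/xml" -d '<distributed-cache name="mycache" mode="SYNC"/>' http://127.0.0.1:11222/rest/v2/caches/mycache
$ curl -u username:password -X POST "http://127.0.0.1:11222/rest/v2/caches/mycache?template=org.infinispan.DIST_SYNC"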
11.7. Cache Configuration
You can provide cache configuration in XML or JSON format.
XML
<distributed-cache name="myCache" mode="SYNC"> <encoding media-type="application/x-protostream"/> <memory max-count="1000000" when-full="REMOVE"/> </distributed-cache>
JSON
{ "distributed-cache": { "name": "myCache", "mode": "SYNC", "encoding": { "media-type": "application/x-protostream" }, "memory": { "max-count": 1000000, "when-full": "REMOVE" } } }
JSON format
Cache configuration in JSON format must follow the structure of an XML configuration:
- XML elements become JSON objects.
- XML attributes become JSON fields.
Chapter 12. Configuring Data Grid Server Datasources
Create managed datasources to optimize connection pooling and performance for database connections.
You can specify database connection properties as part of a JDBC cache store configuration. However, you must do this for each cache definition, which duplicates configuration and wastes resources by creating multiple distinct connection pools.
By using shared, managed datasources, you centralize connection configuration and pooling for more efficient usage.
12.1. Datasource Configuration for JDBC Cache Stores
Data Grid server configuration for datasources is composed of two sections:
- A connection factory that defines how to connect to the database.
- A connection pool that defines how to pool and reuse connections.
<data-sources>
  <!-- Defines a unique name for the datasource, JNDI name, and enables statistics. -->
  <data-source name="ds" jndi-name="jdbc/datasource" statistics="true">
    <!-- Specifies the JDBC driver that creates connections. -->
    <connection-factory driver="org.database.Driver" username="db_user" password="secret"
                        url="jdbc:db://database-host:10000/dbname"
                        new-connection-sql="SELECT 1" transaction-isolation="READ_COMMITTED">
      <!-- Sets optional JDBC driver-specific connection properties. -->
      <connection-property name="name">value</connection-property>
    </connection-factory>
    <!-- Defines connection pool properties. -->
    <connection-pool initial-size="1" max-size="10" min-size="3"
                     background-validation="1000" idle-removal="1"
                     blocking-timeout="1000" leak-detection="10000"/>
  </data-source>
</data-sources>
Connection pools can be tuned using the following parameters:
- initial-size: Initial number of connections the pool should hold.
- max-size: Maximum number of connections in the pool.
- min-size: Minimum number of connections the pool should hold.
- blocking-timeout: Maximum time in milliseconds to block while waiting for a connection before throwing an exception. This will never throw an exception if creating a new connection takes an inordinately long period of time. The default is 0, meaning that a call will wait indefinitely.
- background-validation: Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled.
- validate-on-acquisition: Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled.
- idle-removal: Time in minutes a connection has to be idle before it can be removed.
- leak-detection: Time in milliseconds a connection has to be held before a leak warning is issued.
12.2. Using Datasources in JDBC Cache Stores
Use a shared, managed datasource in your JDBC cache store configuration instead of specifying individual connection properties for each cache definition.
Prerequisites
Create a managed datasource for JDBC cache stores in your Data Grid server configuration.
Procedure
- Reference the JNDI name of the datasource in the JDBC cache store configuration of your cache configuration, as in the following example:
<distributed-cache-configuration name="persistent-cache" xmlns:jdbc="urn:infinispan:config:store:jdbc:12.1">
  <persistence>
    <jdbc:string-keyed-jdbc-store>
      <!-- Specifies the JNDI name that you provided for the datasource connection in the server configuration. -->
      <jdbc:data-source jndi-url="jdbc/postgres"/>
      <jdbc:string-keyed-table drop-on-exit="true" create-on-start="true" prefix="TBL">
        <jdbc:id-column name="ID" type="VARCHAR(255)"/>
        <jdbc:data-column name="DATA" type="BYTEA"/>
        <jdbc:timestamp-column name="TS" type="BIGINT"/>
        <jdbc:segment-column name="S" type="INT"/>
      </jdbc:string-keyed-table>
    </jdbc:string-keyed-jdbc-store>
  </persistence>
</distributed-cache-configuration>
12.3. Testing Data Sources
Verify that connections to data sources are functioning correctly with the CLI.
Procedure
Start the CLI.
$ bin/cli.sh
[disconnected]>
List all data sources:
[//containers/default]> server datasource ls
Test a data source connection.
[//containers/default]> server datasource test my-datasource
Chapter 13. Remotely Executing Server-Side Tasks
Define and add tasks to Data Grid servers that you can invoke from the Data Grid command line interface, REST API, or from Hot Rod clients.
You can implement tasks as custom Java classes or define scripts in languages such as JavaScript.
13.1. Creating Server Tasks
Create custom task implementations and add them to Data Grid servers.
13.1.1. Server Tasks
Data Grid server tasks are classes that implement the org.infinispan.tasks.ServerTask interface and generally include the following method calls:
setTaskContext()
- Allows access to execution context information including task parameters, cache references on which tasks are executed, and so on. In most cases, implementations store this information locally and use it when tasks are actually executed.
getName()
- Returns unique names for tasks. Clients invoke tasks with these names.
getExecutionMode()
Returns the execution mode for tasks.
-
TaskExecutionMode.ONE_NODE
only the node that handles the request executes the script. Although scripts can still invoke clustered operations. -
TaskExecutionMode.ALL_NODES
Data Grid uses clustered executors to run scripts across nodes. For example, server tasks that invoke stream processing need to be executed on a single node because stream processing is distributed to all nodes.
call()
- Computes a result. This method is defined in the
java.util.concurrent.Callable
interface and is invoked with server tasks.
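For example, the following is a minimal sketch of a task that runs on every node. The NodeNameTask class and its node-name-task name are hypothetical, and the sketch assumes that clients invoke the task against a cache:

package example;

import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;
import org.infinispan.tasks.TaskExecutionMode;

public class NodeNameTask implements ServerTask<String> {

   private TaskContext ctx;

   @Override
   public void setTaskContext(TaskContext ctx) {
      this.ctx = ctx;
   }

   @Override
   public TaskExecutionMode getExecutionMode() {
      // Run on every node instead of the default ONE_NODE.
      return TaskExecutionMode.ALL_NODES;
   }

   @Override
   public String call() throws Exception {
      // Assumes the client invoked the task against a cache so that a
      // cache, and through it the Cache Manager, is available.
      return String.valueOf(ctx.getCache().get().getCacheManager().getAddress());
   }

   @Override
   public String getName() {
      return "node-name-task";
   }
}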
Server task implementations must adhere to service loader pattern requirements. For example, implementations must have a zero-argument constructor.
The following HelloTask
class implementation provides an example task that has one parameter:
package example;

import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;

public class HelloTask implements ServerTask<String> {

   private TaskContext ctx;

   @Override
   public void setTaskContext(TaskContext ctx) {
      this.ctx = ctx;
   }

   @Override
   public String call() throws Exception {
      String name = (String) ctx.getParameters().get().get("name");
      return "Hello " + name;
   }

   @Override
   public String getName() {
      return "hello-task";
   }
}
13.1.2. Deploying Server Tasks to Data Grid Servers
Add your custom server task classes to Data Grid servers.
Prerequisites
Stop any running Data Grid servers. Data Grid does not support runtime deployment of custom classes.
Procedure
- Add a META-INF/services/org.infinispan.tasks.ServerTask file that contains the fully qualified names of your server tasks, for example:
example.HelloTask
- Package your server task implementation in a JAR file.
- Copy the JAR file to the $RHDG_HOME/server/lib directory of your Data Grid server.
- Add your classes to the deserialization allow list in your Data Grid configuration. Alternatively, set the allow list using system properties.
13.2. Creating Server Scripts
Create custom scripts and add them to Data Grid servers.
13.2.1. Server Scripts
Data Grid server scripting is based on the javax.script
API and is compatible with any JVM-based ScriptEngine implementation.
Hello World Script Example
The following is a simple example that runs on a single Data Grid server, has one parameter, and uses JavaScript:
// mode=local,language=javascript,parameters=[greetee]
"Hello " + greetee
When you run the preceding script, you pass a value for the greetee parameter, and Data Grid returns "Hello ${value}".
13.2.1.1. Script Metadata
Metadata provides additional information about scripts that Data Grid servers use when running scripts.
Script metadata are property=value
pairs that you add to comments in the first lines of scripts, such as the following example:
// name=test, language=javascript
// mode=local, parameters=[a,b,c]
- Use comment styles that match the scripting language (//, ;;, #).
- Separate property=value pairs with commas.
- Enclose values in single (') or double (") quote characters.
Property | Description |
---|---|
mode | Defines the execution mode. Values are local, meaning only the node that handles the request runs the script, and distributed, meaning Data Grid uses clustered executors to run the script across nodes. |
language | Specifies the ScriptEngine that executes the script. |
extension | Specifies filename extensions as an alternative method to set the ScriptEngine. |
role | Specifies roles that users must have to execute scripts. |
parameters | Specifies an array of valid parameter names for this script. Invocations which specify parameters not included in this list cause exceptions. |
datatype | Optionally sets the MediaType (MIME type) for storing data as well as parameter and return values. This property is useful for remote clients that support particular data formats only. Currently you can set only text/plain; charset=utf-8. |
13.2.1.2. Script Bindings
Data Grid exposes internal objects as bindings for script execution.
Binding | Description |
---|---|
cache | Specifies the cache against which the script is run. |
marshaller | Specifies the marshaller to use for serializing data to the cache. |
cacheManager | Specifies the EmbeddedCacheManager for the cache. |
scriptingManager | Specifies the instance of the script manager that runs the script. You can use this binding to run other scripts from a script. |
13.2.1.3. Script Parameters
Data Grid lets you pass named parameters as bindings for running scripts.
Parameters are name,value
pairs, where name
is a string and value
is any value that the marshaller can interpret.
The following example script has two parameters, multiplicand
and multiplier
. The script takes the value of multiplicand
and multiplies it with the value of multiplier
.
// mode=local,language=javascript
multiplicand * multiplier
When you run the preceding script, Data Grid responds with the result of the expression evaluation.
13.2.2. Adding Scripts to Data Grid Servers
Use the command line interface to add scripts to Data Grid servers.
Prerequisites
Data Grid Server stores scripts in the ___script_cache
cache. If you enable cache authorization, users must have CREATE
permissions to add to ___script_cache
.
Assign users the deployer
role at minimum if you use default authorization settings.
Procedure
Define scripts as required.
For example, create a file named
multiplication.js
that runs on a single Data Grid server, has two parameters, and uses JavaScript to multiply a given value:
// mode=local,language=javascript
multiplicand * multiplier
- Create a CLI connection to Data Grid.
Use the
task
command to upload scripts, as in the following example:
[//containers/default]> task upload --file=multiplication.js multiplication
Verify that your scripts are available.
[//containers/default]> ls tasks
multiplication
13.2.3. Programmatically Creating Scripts
Add scripts with the Hot Rod RemoteCache
interface as in the following example:
RemoteCache<String, String> scriptCache = cacheManager.getCache("___script_cache");
scriptCache.put("multiplication.js",
      "// mode=local,language=javascript\n" +
      "multiplicand * multiplier\n");
13.3. Running Server-Side Tasks and Scripts
Execute tasks and custom scripts on Data Grid servers.
13.3.1. Running Tasks and Scripts
Use the command line interface to run tasks and scripts on Data Grid clusters.
Procedure
- Create a CLI connection to Data Grid.
Use the
task
command to run tasks and scripts, as in the following examples:
Execute a script named multiplication.js and specify two parameters:
[//containers/default]> task exec multiplication.js -Pmultiplicand=10 -Pmultiplier=20
200.0
Execute a task named
@@cache@names
to retrieve a list of all available caches:
[//containers/default]> task exec @@cache@names
["___protobuf_metadata","mycache","___script_cache"]
13.3.2. Programmatically Running Scripts
Call the execute()
method to run scripts with the Hot Rod RemoteCache
interface, as in the following example:
RemoteCache<String, Integer> cache = cacheManager.getCache();

// Create parameters for script execution.
Map<String, Object> params = new HashMap<>();
params.put("multiplicand", 10);
params.put("multiplier", 20);

// Run the script with the parameters.
Object result = cache.execute("multiplication.js", params);
13.3.3. Programmatically Running Tasks
Call the execute()
method to run tasks with the Hot Rod RemoteCache
interface, as in the following example:
// Add configuration for a locally running server.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);

// Connect to the server.
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());

// Retrieve the remote cache.
RemoteCache<String, String> cache = cacheManager.getCache();

// Create task parameters.
Map<String, String> parameters = new HashMap<>();
parameters.put("name", "developer");

// Run the server task.
String greet = cache.execute("hello-task", parameters);
System.out.println(greet);
Chapter 14. Enabling and Customizing Logging
Data Grid uses Apache Log4j 2 to provide configurable logging mechanisms that capture details about the environment and record cache operations for troubleshooting purposes and root cause analysis.
14.1. Server Logs
Data Grid writes server logs to the following files in the $RHDG_HOME/server/log
directory:
server.log
-
Messages in human readable format, including boot logs that relate to the server startup.
Data Grid creates this file when you start the server. server.log.json
-
Messages in JSON format that let you parse and analyze Data Grid logs.
Data Grid creates this file when you enable theJSON-FILE
appender.
14.1.1. Configuring Server Logs
Data Grid uses Apache Log4j technology to write server log messages. You can configure server logs in the log4j2.xml
file.
Procedure
-
Open
$RHDG_HOME/server/conf/log4j2.xml
with any text editor.
- Change server logging as appropriate.
-
Save and close
log4j2.xml
.
14.1.2. Log Levels
Log levels indicate the nature and severity of messages.
Log level | Description |
---|---|
TRACE | Fine-grained debug messages, capturing the flow of individual requests through the application. |
DEBUG | Messages for general debugging, not related to an individual request. |
INFO | Messages about the overall progress of applications, including lifecycle events. |
WARN | Events that can lead to errors or degrade performance. |
ERROR | Error conditions that might prevent operations or activities from being successful but do not prevent applications from running. |
FATAL | Events that could cause critical service failure and application shutdown. |
In addition to the levels of individual messages presented above, the configuration allows two more values: ALL
to include all messages, and OFF
to exclude all messages.
14.1.3. Data Grid Log Categories
Data Grid provides categories for INFO
, WARN
, ERROR
, FATAL
level messages that organize logs by functional area.
org.infinispan.CLUSTER
- Messages specific to Data Grid clustering that include state transfer operations, rebalancing events, partitioning, and so on.
org.infinispan.CONFIG
- Messages specific to Data Grid configuration.
org.infinispan.CONTAINER
- Messages specific to the data container that include expiration and eviction operations, cache listener notifications, transactions, and so on.
org.infinispan.PERSISTENCE
- Messages specific to cache loaders and stores.
org.infinispan.SECURITY
- Messages specific to Data Grid security.
org.infinispan.SERVER
- Messages specific to Data Grid servers.
org.infinispan.XSITE
- Messages specific to cross-site replication operations.
14.1.4. Log Appenders
Log appenders define how Data Grid records log messages.
- CONSOLE
-
Write log messages to the host standard out (
stdout
) or standard error (stderr
) stream.
Uses theorg.apache.logging.log4j.core.appender.ConsoleAppender
class by default.
- FILE
-
Write log messages to a file.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppender
class by default.
- JSON-FILE
-
Write log messages to a file in JSON format.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppender
class by default.
14.1.5. Log Patterns
The CONSOLE
and FILE
appenders use a PatternLayout
to format the log messages according to a pattern.
An example is the default pattern in the FILE appender:
%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p (%t) [%c{1}] %m%throwable%n
- %d{yyyy-MM-dd HH:mm:ss,SSS} adds the current time and date.
- %-5p specifies the log level, left-aligned and padded to a minimum width of five characters.
- %t adds the name of the current thread.
- %c{1} adds the short name of the logging category.
- %m adds the log message.
- %throwable adds the exception stack trace.
- %n adds a new line.
Patterns are fully described in the PatternLayout
documentation.
14.1.6. Enabling and Configuring the JSON Log Handler
Data Grid provides a JSON log handler to write messages in JSON format.
Prerequisites
-
Stop Data Grid Server if it is running.
You cannot dynamically enable log handlers.
Procedure
-
Open
$RHDG_HOME/server/conf/log4j2.xml
with any text editor. Uncomment the
JSON-FILE
appender and comment out theFILE
appender:<!--<AppenderRef ref="FILE"/>--> <AppenderRef ref="JSON-FILE"/>
- Optionally configure the JSON appender and JSON layout as required.
-
Save and close
log4j2.xml
.
When you start Data Grid, it writes each log message as a JSON map in the following file:
$RHDG_HOME/server/log/server.log.json
14.2. Access Logs
Access logs record all inbound client requests for Hot Rod and REST endpoints to files in the $RHDG_HOME/server/log
directory.
org.infinispan.HOTROD_ACCESS_LOG
-
Logging category that writes Hot Rod access messages to a
hotrod-access.log
file. org.infinispan.REST_ACCESS_LOG
-
Logging category that writes REST access messages to a
rest-access.log
file.
14.2.1. Enabling Access Logs
To record Hot Rod and REST endpoint access messages, you need to enable the logging categories in log4j2.xml
.
Procedure
-
Open
$RHDG_HOME/server/conf/log4j2.xml
with any text editor. -
Change the level for the
org.infinispan.HOTROD_ACCESS_LOG
andorg.infinispan.REST_ACCESS_LOG
logging categories toTRACE
. -
Save and close
log4j2.xml
.
<Logger name="org.infinispan.HOTROD_ACCESS_LOG" additivity="false" level="TRACE">
  <AppenderRef ref="HR-ACCESS-FILE"/>
</Logger>
14.2.2. Access Log Properties
The default format for access logs is as follows:
%X{address} %X{user} [%d{dd/MMM/yyyy:HH:mm:ss Z}] "%X{method} %m %X{protocol}" %X{status} %X{requestSize} %X{responseSize} %X{duration}%n
The preceding format creates log entries such as the following:
127.0.0.1 - [DD/MM/YYYY:HH:MM:SS +0000] "PUT /rest/v2/caches/default/key HTTP/1.1" 404 5 77 10
Logging properties use the %X{name}
notation and let you modify the format of access logs. The following are the default logging properties:
Property | Description |
---|---|
address | Either the X-Forwarded-For header or the client IP address. |
user | Principal name, if using authentication. |
method | Method used. PUT, GET, and so on. |
protocol | Protocol used. HTTP/1.1, HTTP/2, HOTROD/2.9, and so on. |
status | An HTTP status code for the REST endpoint. OK or an exception for the Hot Rod endpoint. |
requestSize | Size, in bytes, of the request. |
responseSize | Size, in bytes, of the response. |
duration | Number of milliseconds that the server took to handle the request. |
Use the header name prefixed with h:
to log headers that were included in requests; for example, %X{h:User-Agent}
.
14.3. Audit Logs
Audit logs let you track changes to your Data Grid environment so you know when changes occur and which users make them. Enable and configure audit logging to record server configuration events and administrative operations.
org.infinispan.AUDIT
-
Logging category that writes security audit messages to an
audit.log
file in the$RHDG_HOME/server/log
directory.
14.3.1. Enabling Audit Logging
To record security audit messages, you need to enable the logging category in log4j2.xml
.
Procedure
-
Open
$RHDG_HOME/server/conf/log4j2.xml
with any text editor. -
Change the level for the
org.infinispan.AUDIT
logging category toINFO
. -
Save and close
log4j2.xml
.
<!-- Set to INFO to enable audit logging -->
<Logger name="org.infinispan.AUDIT" additivity="false" level="INFO">
  <AppenderRef ref="AUDIT-FILE"/>
</Logger>
14.3.2. Configuring Audit Logging Appenders
Apache Log4j provides different appenders that you can use to send audit messages to a destination other than the default log file. For instance, if you want to send audit logs to a syslog daemon, JDBC database, or Apache Kafka server, you can configure an appender in log4j2.xml
.
Procedure
-
Open
$RHDG_HOME/server/conf/log4j2.xml
with any text editor. Comment or remove the default
AUDIT-FILE
rolling file appender.<!--RollingFile name="AUDIT-FILE" ... </RollingFile-->
Add the desired logging appender for audit messages.
For example, you could add a logging appender for a Kafka server as follows:
<Kafka name="AUDIT-KAFKA" topic="audit">
  <PatternLayout pattern="%date %message"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
</Kafka>
-
Save and close
log4j2.xml
.
14.3.3. Using Custom Audit Logging Implementations
You can create custom implementations of the org.infinispan.security.AuditLogger
API if configuring Log4j appenders does not meet your needs.
Prerequisites
-
Implement
org.infinispan.security.AuditLogger
as required and package it in a JAR file.
Procedure
-
Add your JAR to the
server/lib
directory in your Data Grid Server installation. Specify the fully qualified class name of your custom audit logger as the value for the
audit-logger
attribute on theauthorization
element in your cache container security configuration.For example, the following configuration defines
my.package.CustomAuditLogger
as the class for logging audit messages:<infinispan> <cache-container> <security> <authorization audit-logger="my.package.CustomAuditLogger"/> </security> </cache-container> </infinispan>
Chapter 15. Configuring Data Grid Server Statistics
Enable statistics that Data Grid exports to a metrics
endpoint or via JMX MBeans. Registering JMX MBeans also exposes management operations that you can perform remotely.
15.1. Enabling Data Grid Statistics
Configure Data Grid to export statistics for Cache Managers and caches.
Data Grid Server enables Cache Manager statistics by default. You must explicitly enable statistics for your caches.
Procedure
Modify your configuration to enable Data Grid statistics in one of the following ways:
-
Declarative: Add the
statistics="true"
attribute. -
Programmatic: Call the
.statistics()
method.
Declarative
<!-- Enables statistics for the Cache Manager. -->
<cache-container statistics="true">
  <!-- Enables statistics for the named cache. -->
  <local-cache name="mycache" statistics="true"/>
</cache-container>
Programmatic
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      // Enables statistics for the Cache Manager.
      .cacheContainer().statistics(true)
      .build();

Configuration config = new ConfigurationBuilder()
      // Enables statistics for the named cache.
      .statistics().enable()
      .build();
15.2. Configuring Data Grid Metrics
Configure Data Grid to export gauges and histograms via the metrics
endpoint.
Procedure
-
Turn gauges and histograms on or off in the
metrics
configuration as appropriate.
Declarative
<!-- Computes and collects statistics for the Cache Manager. -->
<cache-container statistics="true">
  <!-- Exports collected statistics as gauge and histogram metrics. -->
  <metrics gauges="true" histograms="true" />
</cache-container>
Programmatic
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      // Computes and collects statistics for the Cache Manager.
      .statistics().enable()
      // Exports collected statistics as gauge and histogram metrics.
      .metrics().gauges(true).histograms(true)
      .build();
15.3. Collecting Data Grid Metrics
Collect Data Grid metrics with monitoring tools such as Prometheus.
Prerequisites
-
Enable statistics. If you do not enable statistics, Data Grid provides
0
and-1
values for metrics. - Optionally enable histograms. By default Data Grid generates gauges but not histograms.
Procedure
Get metrics in Prometheus (OpenMetrics) format:
$ curl -v http://localhost:11222/metrics
Get metrics in MicroProfile JSON format:
$ curl --header "Accept: application/json" http://localhost:11222/metrics
Next steps
Configure monitoring applications to collect Data Grid metrics. For example, add the following to prometheus.yml:
scrape_configs:
  - job_name: 'datagrid'  # The job name is illustrative; choose your own.
    static_configs:
      - targets: ['localhost:11222']
Reference
- Prometheus Configuration
- Enabling Data Grid Statistics
15.4. Configuring Data Grid to Register JMX MBeans
Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must enable statistics separately from JMX; otherwise, Data Grid provides 0 values for all statistic attributes.
Procedure
Modify your cache container configuration to enable JMX in one of the following ways:
-
Declarative: Add the
<jmx enabled="true" />
element to the cache container. -
Programmatic: Call the
.jmx().enable()
method.
Declarative
<cache-container>
  <jmx enabled="true" />
</cache-container>
Programmatic
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      .jmx().enable()
      .build();
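After you enable JMX, MBeans are registered in the JVM's platform MBean server by default. The following is a minimal sketch that lists all MBeans in the default org.infinispan domain; it assumes it runs in the same JVM as the Cache Manager, for example alongside an embedded deployment:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListDataGridMBeans {
   public static void main(String[] args) throws Exception {
      // The platform MBean server is the default lookup for Data Grid JMX registration.
      MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
      // org.infinispan is the default JMX domain; the pattern matches all Data Grid MBeans.
      for (ObjectName name : mbeanServer.queryNames(new ObjectName("org.infinispan:*"), null)) {
         System.out.println(name);
      }
   }
}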
15.4.1. Data Grid MBeans
Data Grid exposes JMX MBeans that represent manageable resources.
org.infinispan:type=Cache
- Attributes and operations available for cache instances.
org.infinispan:type=CacheManager
- Attributes and operations available for cache managers, including Data Grid cache and cluster health statistics.
For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation.
Additional resources
Chapter 16. Retrieving Health Statistics
Monitor the health of your Data Grid clusters in the following ways:
- Programmatically with embeddedCacheManager.getHealth() method calls, as in the sketch after this list
- JMX MBeans
- Data Grid REST Server
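For the programmatic option, the following is a minimal sketch of the Health API. It constructs a local DefaultCacheManager purely for illustration; in practice you would reuse the EmbeddedCacheManager that your application already has:

import org.infinispan.health.CacheHealth;
import org.infinispan.health.ClusterHealth;
import org.infinispan.health.Health;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class HealthReport {
   public static void main(String[] args) {
      // A local Cache Manager for illustration only.
      EmbeddedCacheManager cacheManager = new DefaultCacheManager();
      try {
         Health health = cacheManager.getHealth();
         // Cluster-level health: name, status, and member nodes.
         ClusterHealth cluster = health.getClusterHealth();
         System.out.println(cluster.getClusterName() + ": " + cluster.getHealthStatus());
         // Per-cache health for each running cache.
         for (CacheHealth cacheHealth : health.getCacheHealth()) {
            System.out.println(cacheHealth.getCacheName() + ": " + cacheHealth.getStatus());
         }
      } finally {
         cacheManager.stop();
      }
   }
}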
16.1. Accessing the Health API via JMX
Retrieve Data Grid cluster health statistics via JMX.
Procedure
Connect to the Data Grid server with any JMX-capable tool, such as JConsole, and navigate to the following object:
org.infinispan:type=CacheManager,name="default",component=CacheContainerHealth
- Select available MBeans to retrieve cluster health statistics.
16.2. Accessing the Health API via REST
Get Data Grid cluster health via the REST API.
Procedure
Invoke a
GET
request to retrieve cluster health.
GET /rest/v2/cache-managers/{cacheManagerName}/health
Data Grid responds with a JSON
document such as the following:
{ "cluster_health":{ "cluster_name":"ISPN", "health_status":"HEALTHY", "number_of_nodes":2, "node_names":[ "NodeA-36229", "NodeB-28703" ] }, "cache_health":[ { "status":"HEALTHY", "cache_name":"___protobuf_metadata" }, { "status":"HEALTHY", "cache_name":"cache2" }, { "status":"HEALTHY", "cache_name":"mycache" }, { "status":"HEALTHY", "cache_name":"cache1" } ] }
Get cache manager status as follows:
GET /rest/v2/cache-managers/{cacheManagerName}/health/status
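If you prefer to invoke the endpoint from Java, the following is a minimal sketch that uses the JDK 11 HttpClient. The admin/changeme credentials and the default container name are placeholders for your own deployment, and the sketch assumes that the REST endpoint accepts BASIC authentication:

import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestHealthStatus {
   public static void main(String[] args) throws Exception {
      HttpClient client = HttpClient.newBuilder()
            // Placeholder credentials; BASIC authentication is assumed here.
            .authenticator(new Authenticator() {
               @Override
               protected PasswordAuthentication getPasswordAuthentication() {
                  return new PasswordAuthentication("admin", "changeme".toCharArray());
               }
            })
            .build();
      HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://127.0.0.1:11222/rest/v2/cache-managers/default/health/status"))
            .GET()
            .build();
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      // Prints a status such as HEALTHY.
      System.out.println(response.body());
   }
}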
Reference
See the REST v2 (version 2) API documentation for more information.
Chapter 17. Performing Rolling Upgrades for Data Grid Servers
Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss. Rolling upgrades migrate both your Data Grid servers and your data to the target version over Hot Rod.
17.1. Setting Up Target Clusters
Create a cluster that runs the target Data Grid version and uses a remote cache store to load data from the source cluster.
Prerequisites
- Install a Data Grid cluster with the target upgrade version.
Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and specify port offsets to keep the target and source clusters separate.
Procedure
Add a
RemoteCacheStore
on the target cluster for each cache you want to migrate from the source cluster.
Remote cache stores use the Hot Rod protocol to retrieve data from remote Data Grid clusters. When you add the remote cache store to the target cluster, it can lazily load data from the source cluster to handle client requests.
Switch clients over to the target cluster so it starts handling all requests.
- Update client configuration with the location of the target cluster.
- Restart clients.
17.1.1. Remote Cache Stores for Rolling Upgrades
You must use specific remote cache store configuration to perform rolling upgrades, as follows:
<!-- Remote cache stores for rolling upgrades must disable passivation. -->
<persistence passivation="false">
  <!-- The value of the cache attribute matches the name of a cache in the source cluster.
       Target clusters load data from this cache using the remote cache store. -->
  <!-- The protocol-version attribute matches the Hot Rod protocol version of the source cluster.
       2.5 is the minimum version and is suitable for any upgrade path. -->
  <!-- You should enable segmentation for remote cache stores only if the number of segments in the
       target cluster matches the number of segments for the cache in the source cluster. -->
  <remote-store xmlns="urn:infinispan:config:store:remote:12.1"
                cache="myDistCache"
                protocol-version="2.5"
                hotrod-wrapping="true"
                raw-values="true"
                segmented="false">
    <!-- Configures authentication and encryption according to the security realm of the source cluster. -->
    <security>
      <authentication server-name="infinispan">
        <digest username="admin" password="changeme" realm="default"/>
      </authentication>
    </security>
    <!-- Points to the location of the source cluster. -->
    <remote-server host="127.0.0.1" port="11222"/>
  </remote-store>
</persistence>
17.2. Synchronizing Data to Target Clusters
When your target cluster is running and handling client requests using a remote cache store to load data on demand, you can synchronize data from the source cluster to the target cluster.
This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache in your Data Grid configuration.
Procedure
Start the synchronization operation for each cache in your Data Grid configuration that you want to migrate to the target cluster.
Use the Data Grid REST API and invoke
POST
requests with the ?action=sync-data
parameter. For example, to synchronize data in a cache named "myCache" from a source cluster to a target cluster, do the following:
POST /rest/v2/caches/myCache?action=sync-data
When the operation completes, Data Grid responds with the total number of entries copied to the target cluster.
Alternatively, you can use JMX by invoking
synchronizeData(migratorName=hotrod)
on theRollingUpgradeManager
MBean.
Disconnect each node in the target cluster from the source cluster.
For example, to disconnect the "myCache" cache from the source cluster, invoke the following
POST
request:
POST /rest/v2/caches/myCache?action=disconnect-source
To use JMX, invoke
disconnectSource(migratorName=hotrod)
on theRollingUpgradeManager
MBean.
Next steps
After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
Chapter 18. Troubleshooting Data Grid Servers
Gather diagnostic information about Data Grid server deployments and perform troubleshooting steps to resolve issues.
18.1. Getting Diagnostic Reports for Data Grid Servers
Data Grid servers provide aggregated reports in tar.gz
archives that contain diagnostic information about both the Data Grid server and the host. The report provides details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files.
Procedure
- Create a CLI connection to Data Grid.
Use the
server report
command to download atar.gz
archive:
[//containers/default]> server report
Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'
-
Move the
tar.gz
file to a suitable location on your filesystem. -
Extract the
tar.gz
file with any archiving tool.
18.2. Changing Data Grid Server Logging Configuration at Runtime
Modify the logging configuration for Data Grid servers at runtime to temporarily adjust logging to troubleshoot issues and perform root cause analysis.
Modifying the logging configuration through the CLI is a runtime-only operation, which means that changes:
-
Are not saved to the
log4j2.xml
file. Restarting server nodes or the entire cluster resets the logging configuration to the default properties in thelog4j2.xml
file. - Apply only to the nodes in the cluster when you invoke the CLI. Nodes that join the cluster after you change the logging configuration use the default properties.
Procedure
- Create a CLI connection to Data Grid.
Use the
logging
command to make the required adjustments.
- List all appenders defined on the server:
[//containers/default]> logging list-appenders
The preceding command returns:
{ "STDOUT" : { "name" : "STDOUT" }, "JSON-FILE" : { "name" : "JSON-FILE" }, "HR-ACCESS-FILE" : { "name" : "HR-ACCESS-FILE" }, "FILE" : { "name" : "FILE" }, "REST-ACCESS-FILE" : { "name" : "REST-ACCESS-FILE" } }
- List all logger configurations defined on the server:
[//containers/default]> logging list-loggers
The preceding command returns:
[ { "name" : "", "level" : "INFO", "appenders" : [ "STDOUT", "FILE" ] }, { "name" : "org.infinispan.HOTROD_ACCESS_LOG", "level" : "INFO", "appenders" : [ "HR-ACCESS-FILE" ] }, { "name" : "com.arjuna", "level" : "WARN", "appenders" : [ ] }, { "name" : "org.infinispan.REST_ACCESS_LOG", "level" : "INFO", "appenders" : [ "REST-ACCESS-FILE" ] } ]
-
Add and modify logger configurations with the
set
subcommand.
For example, the following command sets the logging level for the org.infinispan
package to DEBUG:
[//containers/default]> logging set --level=DEBUG org.infinispan
-
Remove existing logger configurations with the
remove
subcommand.
For example, the following command removes the org.infinispan
logger configuration, which means the root configuration is used instead:
[//containers/default]> logging remove org.infinispan
18.3. Resource Statistics
You can inspect server-collected statistics for some of the resources within a Data Grid server using the stats
command.
Use the stats
command either from the context of a resource which collects statistics (containers, caches) or with a path to such a resource:
[//containers/default]> stats
{
  "statistics_enabled" : true,
  "number_of_entries" : 0,
  "hit_ratio" : 0.0,
  "read_write_ratio" : 0.0,
  "time_since_start" : 0,
  "time_since_reset" : 49,
  "current_number_of_entries" : 0,
  "current_number_of_entries_in_memory" : 0,
  "total_number_of_entries" : 0,
  "off_heap_memory_used" : 0,
  "data_memory_used" : 0,
  "stores" : 0,
  "retrievals" : 0,
  "hits" : 0,
  "misses" : 0,
  "remove_hits" : 0,
  "remove_misses" : 0,
  "evictions" : 0,
  "average_read_time" : 0,
  "average_read_time_nanos" : 0,
  "average_write_time" : 0,
  "average_write_time_nanos" : 0,
  "average_remove_time" : 0,
  "average_remove_time_nanos" : 0,
  "required_minimum_number_of_nodes" : -1
}
[//containers/default]> stats /containers/default/caches/mycache
{
  "time_since_start" : -1,
  "time_since_reset" : -1,
  "current_number_of_entries" : -1,
  "current_number_of_entries_in_memory" : -1,
  "total_number_of_entries" : -1,
  "off_heap_memory_used" : -1,
  "data_memory_used" : -1,
  "stores" : -1,
  "retrievals" : -1,
  "hits" : -1,
  "misses" : -1,
  "remove_hits" : -1,
  "remove_misses" : -1,
  "evictions" : -1,
  "average_read_time" : -1,
  "average_read_time_nanos" : -1,
  "average_write_time" : -1,
  "average_write_time_nanos" : -1,
  "average_remove_time" : -1,
  "average_remove_time_nanos" : -1,
  "required_minimum_number_of_nodes" : -1
}
Chapter 19. Reference
19.1. Data Grid Server 8.2.3 Readme
Information about the Data Grid Server 12.1.11.Final-redhat-00001 distribution.
19.1.1. Requirements
Data Grid Server requires JDK 11 or later.
19.1.2. Starting servers
Use the server
script to run Data Grid Server instances.
Unix / Linux
$RHDG_HOME/bin/server.sh
Windows
$RHDG_HOME\bin\server.bat
Include the --help
or -h
option to view command arguments.
19.1.3. Stopping servers
Use the shutdown
command with the CLI to perform a graceful shutdown.
Alternatively, press Ctrl-C in the terminal to interrupt the server process, or kill it with the TERM signal.
19.1.4. Configuration
Server configuration extends Data Grid configuration with the following server-specific elements:
cache-container
- Defines cache containers for managing cache lifecycles.
endpoints
- Enables and configures endpoint connectors for client protocols.
security
- Configures endpoint security realms.
socket-bindings
- Maps endpoint connectors to interfaces and ports.
The default configuration file is $RHDG_HOME/server/conf/infinispan.xml
.
Use different configuration files with the -c
argument, as in the following example that starts a server without clustering capabilities:
Unix / Linux
$RHDG_HOME/bin/server.sh -c infinispan-local.xml
Windows
$RHDG_HOME\bin\server.bat -c infinispan-local.xml
19.1.5. Bind address
Data Grid Server binds to the loopback IP address localhost
on your network by default.
Use the -b
argument to set a different IP address, as in the following example that binds to all network interfaces:
Unix / Linux
$RHDG_HOME/bin/server.sh -b 0.0.0.0
Windows
$RHDG_HOME\bin\server.bat -b 0.0.0.0
19.1.6. Bind port
Data Grid Server listens on port 11222
by default.
Use the -p
argument to set an alternative port:
Unix / Linux
$RHDG_HOME/bin/server.sh -p 30000
Windows
$RHDG_HOME\bin\server.bat -p 30000
19.1.7. Clustering address
Data Grid Server configuration defines cluster transport so multiple instances on the same network discover each other and automatically form clusters.
Use the -k
argument to change the IP address for cluster traffic:
Unix / Linux
$RHDG_HOME/bin/server.sh -k 192.168.1.100
Windows
$RHDG_HOME\bin\server.bat -k 192.168.1.100
19.1.8. Cluster stacks
JGroups stacks configure the protocols for cluster transport. Data Grid Server uses the tcp
stack by default.
Use alternative cluster stacks with the -j
argument, as in the following example that uses UDP for cluster transport:
Unix / Linux
$RHDG_HOME/bin/server.sh -j udp
Windows
$RHDG_HOME\bin\server.bat -j udp
19.1.9. Authentication
Data Grid Server requires authentication.
Create a username and password with the CLI as follows:
Unix / Linux
$RHDG_HOME/bin/cli.sh user create username -p "qwer1234!"
Windows
$RHDG_HOME\bin\cli.bat user create username -p "qwer1234!"
19.1.10. Server home directory
Data Grid Server uses infinispan.server.home.path
to locate the contents of the server distribution on the host filesystem.
The server home directory, referred to as $RHDG_HOME
, contains the following folders:
├── bin
├── boot
├── docs
├── lib
├── server
└── static
Folder | Description |
---|---|
bin | Contains scripts to start servers and CLI. |
boot | Contains JAR files to boot servers. |
docs | Provides configuration examples, schemas, component licenses, and other resources. |
lib | Contains internal server libraries. Do not place custom JAR files in this folder. |
server | Provides a root folder for Data Grid Server instances. |
static | Contains static resources for Data Grid Console. |
19.1.11. Server root directory
Data Grid Server uses infinispan.server.root.path
to locate configuration files and data for Data Grid Server instances.
You can create multiple server root folders in the same directory or in different directories and then specify the locations with the -s
or --server-root
argument, as in the following example:
Unix / Linux
$RHDG_HOME/bin/server.sh -s server2
Windows
$RHDG_HOME\bin\server.bat -s server2
Each server root directory contains the following folders:
├── server
│   ├── conf
│   ├── data
│   ├── lib
│   └── log
Folder | Description | System property override |
---|---|---|
conf | Contains server configuration files. | infinispan.server.config.path |
data | Contains data files organized by container name. | infinispan.server.data.path |
lib | Contains server extension files. | infinispan.server.lib.path |
log | Contains server log files. | infinispan.server.log.path |
19.1.12. Logging
Configure Data Grid Server logging with the log4j2.xml
file in the server/conf
folder.
Use the --logging-config=<path_to_logfile>
argument to use custom paths, as follows:
Unix / Linux
$RHDG_HOME/bin/server.sh --logging-config=/path/to/log4j2.xml
To ensure custom paths take effect, do not use the ~
shortcut.
Windows
$RHDG_HOME\bin\server.bat --logging-config=path\to\log4j2.xml