Data Grid Server Guide
Deploy, secure, and manage Data Grid Server
Abstract
Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure: flexibility to store different objects as key-value pairs.
- Grid-based data storage: designed to distribute and replicate data across clusters.
- Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: store, retrieve, and query data in the grid from different endpoints.
Data Grid documentation
Documentation for Data Grid is available on the Red Hat customer portal.
Data Grid downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Getting Started with Data Grid Server
Quickly set up Data Grid Server and learn the basics.
1.1. Data Grid Server Requirements
Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions.
1.2. Downloading Server Distributions
The Data Grid server distribution is an archive of Java libraries (JAR files), configuration files, and a data directory.
Procedure
- Access the Red Hat customer portal.
- Download Red Hat Data Grid 8.2 Server from the software downloads section.
- Run the md5sum or sha256sum command with the server download archive as the argument, for example:

  $ sha256sum jboss-datagrid-${version}-server.zip

- Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page.
Reference
- Data Grid Server README describes the contents of the server distribution.
1.3. Installing Data Grid Server
Install the Data Grid Server distribution on a host system.
Prerequisites
- Download a Data Grid Server distribution archive.
Procedure
- Use any appropriate tool to extract the Data Grid Server archive to the host filesystem.
$ unzip redhat-datagrid-8.2.3-server.zip
The resulting directory is your $RHDG_HOME.
1.4. Starting Data Grid Servers
Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host.
Prerequisites
- Download and install the server distribution.
Procedure
- Open a terminal in $RHDG_HOME.
- Start Data Grid Server instances with the server script.

  Linux:
  $ bin/server.sh

  Microsoft Windows:
  bin\server.bat
Data Grid Server is running successfully when it logs the following messages:
ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222
ISPN080034: Server '...' listening on http://127.0.0.1:11222
ISPN080001: Data Grid Server <version> started in <mm>ms
Verification
- Open 127.0.0.1:11222/console/ in any browser.
- Enter your credentials at the prompt and continue to Data Grid Console.
1.5. Creating and Modifying Users
Add Data Grid user credentials and assign permissions to control access to data.
Data Grid server installations use a property realm to authenticate users for the Hot Rod and REST endpoints. This means you need to create at least one user before you can access Data Grid.
By default, users also need roles with permissions to access caches and interact with Data Grid resources. You can assign roles to users individually or add users to groups that have role permissions.
You create users and assign roles with the user command in the Data Grid command line interface (CLI).
Run help user from a CLI session to get complete command details.
1.5.1. Adding Credentials
You need an admin user for the Data Grid Console and full control over your Data Grid environment. For this reason you should create a user with admin permissions the first time you add credentials.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user with the user create command in the CLI.

  $ bin/cli.sh user create myuser -p changeme -g admin

  Alternatively, the username "admin" automatically gets admin permissions:

  $ bin/cli.sh user create admin -p changeme

- Open users.properties and groups.properties with any text editor to verify users and groups.
1.5.2. Assigning Roles to Users
Assign roles to users so they have the correct permissions to access data and modify Data Grid resources.
Procedure
- Start a CLI session with an admin user.

  $ bin/cli.sh

- Assign the deployer role to "katie".

  [//containers/default]> user roles grant --roles=deployer katie

- List roles for "katie".

  [//containers/default]> user roles ls katie
  ["deployer"]
1.5.3. Adding Users to Groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
Procedure
- Start a CLI session with an admin user.
- Use the user create command to create a group.
  - Specify "developers" as the group name with the --groups argument.
  - Set a username and password for the group.

  In a property realm, a group is a special type of user that also requires a username and password.

  [//containers/default]> user create --groups=developers developers -p changeme

- List groups.

  [//containers/default]> user ls --groups
  ["developers"]

- Assign the application role to the "developers" group.

  [//containers/default]> user roles grant --roles=application developers

- List roles for the "developers" group.

  [//containers/default]> user roles ls developers
  ["application"]

- Add existing users, one at a time, to the group as required.

  [//containers/default]> user groups john --groups=developers
1.5.4. User Roles and Permissions

Data Grid includes a default set of roles that grant users permissions to access data and interact with Data Grid resources.
ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals to authorization roles.
ClusterRoleMapper matches principal names to role names. A user named admin gets admin permissions automatically, a user named deployer gets deployer permissions, and so on.
| Role | Permissions | Description |
|---|---|---|
| admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
| deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
| application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. |
| observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
| monitor | MONITOR | Can view statistics via JMX and the metrics endpoint. |
1.6. Verifying Cluster Views
Data Grid nodes on the same network automatically discover each other and form clusters.
Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters.
This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production.
Prerequisites
- Have one instance of Data Grid Server running.
Procedure
- Open a terminal in $RHDG_HOME.
- Copy the root directory to server2.

  $ cp -r server server2

- Specify a port offset and the server2 directory.

  $ bin/server.sh -o 100 -s server2
Verification
You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership.
Data Grid also logs cluster view messages when nodes join clusters.

1.7. Shutting Down Data Grid Server
Stop individually running servers or bring down clusters gracefully.
Procedure
- Create a CLI connection to Data Grid.
- Shut down Data Grid Server in one of the following ways:
  - Stop all nodes in a cluster with the shutdown cluster command, for example:

    [//containers/default]> shutdown cluster

    This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache.

  - Stop individual server instances with the shutdown server command and the server hostname, for example:

    [//containers/default]> shutdown server <my_server01>
The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time.
Run help shutdown for more details about using the command.
Verification
Data Grid logs the following messages when you shut down servers:
ISPN080002: Data Grid Server stopping
ISPN000080: Disconnecting JGroups channel cluster
ISPN000390: Persisted state, version=<$version> timestamp=YYYY-MM-DDTHH:MM:SS
ISPN080003: Data Grid Server stopped
1.7.1. Restarting Data Grid Clusters
When you bring Data Grid clusters back online after shutting them down, you should wait for the cluster to be available before adding or removing nodes or modifying cluster state.
If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order.
For example, if you shut down server1 and then shut down server2, you should first start server2 and then start server1.
If you shut down a cluster with the shutdown cluster command, clusters become fully operational only after all nodes rejoin.
You can restart nodes in any order, but the cluster remains in DEGRADED state until all nodes that were joined before shutdown are running.
1.8. Data Grid Server Filesystem

Data Grid Server uses several folders on the host filesystem under $RHDG_HOME.
See the Data Grid Server README for descriptions of each folder in your $RHDG_HOME directory as well as system properties you can use to customize the filesystem.
1.8.1. Server Root Directory
Apart from resources in the bin and docs folders, the only folder under $RHDG_HOME that you should interact with is the server root directory, which is named server by default.
You can create multiple nodes under the same $RHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem:
├── server
├── server1
├── server2
├── server3
└── server4
Each server root directory should contain the following folders:
├── server
│ ├── conf
│ ├── data
│ ├── lib
│ └── log
server/conf
Holds infinispan.xml configuration files for a Data Grid Server instance.
Data Grid separates configuration into two layers:
- Dynamic: Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur.
- Static: Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources.
server/data
Provides internal storage that Data Grid Server uses to maintain cluster state.
Never directly delete or modify content in server/data.
Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown.
server/lib
Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on.
server/log
Holds Data Grid Server log files.
Reference
- Data Grid Server README
- What is stored in the <server>/data directory used by a RHDG server (Red Hat Knowledgebase)
Chapter 2. Network Interfaces and Endpoints
Expose Data Grid Server through a network interface by binding it to an IP address. You can then configure endpoints to use the interface so Data Grid Server can handle requests from remote client applications.
By default, Data Grid Server exposes a single port that automatically detects the protocol of inbound requests.
2.1. Network Interfaces
Data Grid Server multiplexes endpoints to a single TCP/IP port and automatically detects protocols of inbound client requests. You can configure how Data Grid Server binds to network interfaces to listen for client requests.
Internet Protocol (IP) addresses

You can bind Data Grid Server to any of the following address types:

- Loopback address
- Non-loopback address
- Any address
- Link local
- Site local

Match and fallback strategies

Data Grid Server can enumerate all network interfaces on the host system and bind to an interface, host, or IP address that matches a value, which can include regular expressions for additional flexibility:

- Match host
- Match interface
- Match address
- Fallback
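The per-option configuration snippets are not reproduced in this copy. The following sketch shows how a few interface declarations might look, assuming the urn:infinispan:server schema; the interface names and values are illustrative:

<interfaces>
  <!-- Binds to the loopback address -->
  <interface name="local">
    <loopback/>
  </interface>
  <!-- Binds to a specific IP address -->
  <interface name="static">
    <inet-address value="10.1.2.3"/>
  </interface>
  <!-- Matches interfaces by name, with regular expression support -->
  <interface name="matched">
    <match-interface value="eth0"/>
  </interface>
</interfaces>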
2.2. Socket Bindings
Socket bindings map endpoint connectors to server interfaces and ports.
By default, Data Grid servers provide the following socket bindings:
<socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}">
<socket-binding name="default" port="${infinispan.bind.port:11222}"/>
<socket-binding name="memcached" port="11221"/>
</socket-bindings>
- socket-bindings declares the default interface and port offset.
- default binds the Hot Rod and REST connectors to the default port 11222.
- memcached binds the memcached connector to port 11221.

Note: The memcached endpoint is disabled by default.
To override the default interface for socket-binding declarations, specify the interface attribute.
For example, you add an interface declaration named "private":
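The interface declaration itself is missing from this copy; a minimal sketch, assuming the urn:infinispan:server schema and an illustrative address:

<interfaces>
  <interface name="private">
    <inet-address value="10.1.2.3"/>
  </interface>
</interfaces>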
You can then specify interface="private" in a socket-binding declaration to bind to the private IP address, as follows:
<socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}">
...
<socket-binding name="private_binding" interface="private" port="1234"/>
</socket-bindings>
2.3. Changing the Default Bind Address for Data Grid Servers
You can use the server -b switch or the infinispan.bind.address system property to bind to a different address.
For example, bind the public interface to 127.0.0.2 as follows:
- Linux
$ bin/server.sh -b 127.0.0.2
- Windows
bin\server.bat -b 127.0.0.2
2.4. Specifying Port Offsets
Configure port offsets with Data Grid servers when running multiple instances on the same host. The default port offset is 0.
Use the -o switch with the Data Grid CLI or the infinispan.socket.binding.port-offset system property to set port offsets.
For example, start a server instance with an offset of 100 as follows. With the default configuration, this results in the Data Grid server listening on port 11322.
- Linux
$ bin/server.sh -o 100
- Windows
bin\server.bat -o 100
2.5. Data Grid Endpoints
Data Grid endpoints expose the CacheManager interface over different connector protocols so you can remotely access data and perform operations to manage and maintain Data Grid clusters.
You can define multiple endpoint connectors on different socket bindings.
2.5.1. Hot Rod
Hot Rod is a binary TCP client-server protocol designed to provide faster data access and improved performance in comparison to text-based protocols.
Data Grid provides Hot Rod client libraries in Java, C++, C#, Node.js and other programming languages.
Topology state transfer
Data Grid uses topology caches to provide clients with cluster views. Topology caches contain entries that map internal JGroups transport addresses to exposed Hot Rod endpoints.
When clients send requests, Data Grid servers compare the topology ID in request headers with the topology ID from the cache. Data Grid servers send new topology views if clients have older topology IDs.
Cluster topology views allow Hot Rod clients to immediately detect when nodes join and leave, which enables dynamic load balancing and failover.
In distributed cache modes, the consistent hashing algorithm also makes it possible to route Hot Rod client requests directly to primary owners.
2.5.2. REST
Data Grid exposes a RESTful interface that allows HTTP clients to access data, monitor and maintain clusters, and perform administrative operations.
You can use standard HTTP load balancers to provide clients with load balancing and failover capabilities. However, HTTP load balancers maintain static cluster views and require manual updates when cluster topology changes occur.
2.5.3. Protocol Comparison
| Hot Rod | HTTP / REST | |
|---|---|---|
| Topology-aware | Y | N |
| Hash-aware | Y | N |
| Encryption | Y | Y |
| Authentication | Y | Y |
| Conditional ops | Y | Y |
| Bulk ops | Y | N |
| Transactions | Y | N |
| Listeners | Y | N |
| Query | Y | Y |
| Execution | Y | N |
| Cross-site failover | Y | N |
2.6. Endpoint Connectors
You configure Data Grid server endpoints with connector declarations that specify socket bindings, authentication mechanisms, and encryption configuration.
The default endpoint connector configuration is as follows:
<endpoints socket-binding="default" security-realm="default"/>
- endpoints contains endpoint connector declarations and defines global configuration for endpoints such as default socket bindings, security realms, and whether clients must present valid TLS certificates.
- <hotrod-connector/> declares a Hot Rod connector.
- <rest-connector/> declares a REST connector.
- <memcached-connector socket-binding="memcached"/> declares a Memcached connector that uses the memcached socket binding.
Declaring an empty <endpoints/> element implicitly enables the Hot Rod and REST connectors.
It is possible to have multiple endpoints bound to different sockets. These can use different security realms and offer different authentication and encryption configurations. The following configuration enables two endpoints on distinct socket bindings, each one with a dedicated security realm. Additionally, the public endpoint disables administrative features, such as the console and CLI.
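That configuration is not included in this copy. A sketch along these lines is plausible, assuming the 8.2 schema supports repeated endpoints declarations and an admin attribute; the realm and binding names are illustrative:

<endpoints socket-binding="public" security-realm="application-realm" admin="false">
  <hotrod-connector/>
  <rest-connector/>
</endpoints>
<endpoints socket-binding="private" security-realm="management-realm">
  <hotrod-connector/>
  <rest-connector/>
</endpoints>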
Reference
urn:infinispan:server schema provides all available endpoint configuration.
2.6.1. Hot Rod Connectors
Hot Rod connector declarations enable Hot Rod servers.
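The example declaration is not reproduced in this copy; a sketch consistent with the elements described below, with illustrative attribute values per the urn:infinispan:server schema:

<endpoints socket-binding="default" security-realm="default">
  <hotrod-connector name="hotrod">
    <!-- Tunes topology state transfer for Hot Rod clients -->
    <topology-state-transfer lock-timeout="12000"/>
    <!-- Configures SASL authentication -->
    <authentication>
      <sasl mechanisms="SCRAM-SHA-512" server-name="infinispan"/>
    </authentication>
  </hotrod-connector>
</endpoints>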
- name="hotrod" logically names the Hot Rod connector. By default the name is derived from the socket binding name, for example hotrod-default.
- topology-state-transfer tunes the state transfer operations that provide Hot Rod clients with cluster topology.
- authentication configures SASL authentication mechanisms.
- encryption configures TLS settings for client connections.
Reference
urn:infinispan:server schema provides all available Hot Rod connector configuration.
2.6.2. REST Connectors
REST connector declarations enable REST servers.
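The example declaration is missing from this copy; a sketch consistent with the elements described below, with illustrative values:

<endpoints socket-binding="default" security-realm="default">
  <rest-connector name="rest">
    <!-- Configures HTTP authentication mechanisms -->
    <authentication mechanisms="DIGEST BASIC"/>
    <!-- Allows cross-origin GET requests from host1 -->
    <cors-rules>
      <cors-rule name="restrict-host1" allow-credentials="false">
        <allowed-origins>http://host1,https://host1</allowed-origins>
        <allowed-methods>GET</allowed-methods>
      </cors-rule>
    </cors-rules>
  </rest-connector>
</endpoints>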
- name="rest" logically names the REST connector. By default the name is derived from the socket binding name, for example rest-default.
- authentication configures authentication mechanisms.
- cors-rules specifies CORS (Cross Origin Resource Sharing) rules for cross-domain requests.
- encryption configures TLS settings for client connections.
Reference
urn:infinispan:server schema provides all available REST connector configuration.
2.7. Data Grid Server Ports and Protocols
Data Grid Server exposes endpoints on your network for remote client access.
| Port | Protocol | Description |
|---|---|---|
| 11222 | TCP | Hot Rod and REST endpoint |
| 11221 | TCP | Memcached endpoint, which is disabled by default. |
2.8. Single Port
Data Grid Server exposes multiple protocols through a single TCP port, which is 11222 by default. Handling multiple protocols with a single port simplifies configuration and reduces management complexity when deploying Data Grid clusters. Using a single port also enhances security by minimizing the attack surface on the network.
Data Grid Server handles HTTP/1.1, HTTP/2, and Hot Rod protocol requests from clients via the single port in different ways.
HTTP/1.1 upgrade headers
Client requests can include the HTTP/1.1 upgrade header field to initiate HTTP/1.1 connections with Data Grid Server. Client applications can then send the Upgrade: protocol header field, where protocol is a server endpoint.
Application-Layer Protocol Negotiation (ALPN)/Transport Layer Security (TLS)
Client requests include Server Name Indication (SNI) mappings for Data Grid Server endpoints to negotiate protocols over a TLS connection.
Applications must use a TLS library that supports the ALPN extension. Data Grid uses WildFly OpenSSL bindings for Java.
Automatic Hot Rod detection
Client requests that include Hot Rod headers automatically route to Hot Rod endpoints.
2.8.1. Configuring Network Firewalls for Remote Connections
Adjust any firewall rules to allow traffic between the server and external clients.
Procedure
On Red Hat Enterprise Linux (RHEL) workstations, for example, you can allow traffic to port 11222 with firewalld as follows:
# firewall-cmd --add-port=11222/tcp --permanent
success
# firewall-cmd --list-ports | grep 11222
11222/tcp
To configure firewall rules that apply across a network, you can use the nftables utility.
Chapter 3. Security Realms
Security realms define identity, encryption, authentication, and authorization configuration for Data Grid Server endpoints.
3.1. Property Realms
Property realms use property files to define users and groups.
users.properties maps usernames to passwords in plain-text format. Passwords can also be pre-digested if you use the DIGEST-MD5 SASL mechanism or Digest HTTP mechanism.
myuser=a_password
user2=another_password
groups.properties maps users to roles.
myuser=supervisor,reader,writer
user2=supervisor
Endpoint authentication mechanisms
When you configure Data Grid Server to use a property realm, you can configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): PLAIN, DIGEST-*, and SCRAM-*
- REST (HTTP): Basic and Digest
Property realm configuration
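The configuration snippet is missing from this copy; a minimal sketch of a properties realm declaration, with paths and attributes that follow common server defaults (verify against your schema version):

<security>
  <security-realms>
    <security-realm name="default">
      <!-- Maps users to groups via the Roles attribute -->
      <properties-realm groups-attribute="Roles">
        <user-properties path="users.properties"
                         relative-to="infinispan.server.config.path"
                         plain-text="true"/>
        <group-properties path="groups.properties"
                          relative-to="infinispan.server.config.path"/>
      </properties-realm>
    </security-realm>
  </security-realms>
</security>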
3.1.1. Creating and Modifying Users
Add Data Grid user credentials and assign permissions to control access to data.
Data Grid server installations use a property realm to authenticate users for the Hot Rod and REST endpoints. This means you need to create at least one user before you can access Data Grid.
By default, users also need roles with permissions to access caches and interact with Data Grid resources. You can assign roles to users individually or add users to groups that have role permissions.
You create users and assign roles with the user command in the Data Grid command line interface (CLI).
Run help user from a CLI session to get complete command details.
3.1.1.1. Adding Credentials
You need an admin user for the Data Grid Console and full control over your Data Grid environment. For this reason you should create a user with admin permissions the first time you add credentials.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user with the user create command in the CLI.

  $ bin/cli.sh user create myuser -p changeme -g admin

  Alternatively, the username "admin" automatically gets admin permissions:

  $ bin/cli.sh user create admin -p changeme

- Open users.properties and groups.properties with any text editor to verify users and groups.
3.1.1.2. Assigning Roles to Users
Assign roles to users so they have the correct permissions to access data and modify Data Grid resources.
Procedure
- Start a CLI session with an admin user.

  $ bin/cli.sh

- Assign the deployer role to "katie".

  [//containers/default]> user roles grant --roles=deployer katie

- List roles for "katie".

  [//containers/default]> user roles ls katie
  ["deployer"]
3.1.1.3. Adding Users to Groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
Procedure
- Start a CLI session with an admin user.
- Use the user create command to create a group.
  - Specify "developers" as the group name with the --groups argument.
  - Set a username and password for the group.

  In a property realm, a group is a special type of user that also requires a username and password.

  [//containers/default]> user create --groups=developers developers -p changeme

- List groups.

  [//containers/default]> user ls --groups
  ["developers"]

- Assign the application role to the "developers" group.

  [//containers/default]> user roles grant --roles=application developers

- List roles for the "developers" group.

  [//containers/default]> user roles ls developers
  ["application"]

- Add existing users, one at a time, to the group as required.

  [//containers/default]> user groups john --groups=developers
3.2. LDAP Realms
LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information.
LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations.
Endpoint authentication mechanisms
When you configure Data Grid Server to use an LDAP realm, you can configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): PLAIN, DIGEST-*, and SCRAM-*
- REST (HTTP): Basic and Digest
LDAP realm configuration
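The configuration example is missing from this copy; a sketch of an LDAP realm that the following paragraphs describe, where the server URL, DNs, and credentials are placeholders:

<security-realm name="ldap-realm">
  <ldap-realm name="ldap" url="ldap://my-ldap-server:10389"
              principal="uid=admin,ou=People,dc=infinispan,dc=org"
              credential="strongPassword"
              direct-verification="true">
    <!-- Locates user entries by uid under the People branch -->
    <identity-mapping rdn-identifier="uid"
                      search-dn="ou=People,dc=infinispan,dc=org">
      <attribute-mapping>
        <!-- Maps groupOfNames membership to the user's Roles -->
        <attribute from="cn" to="Roles"
                   filter="(&amp;(objectClass=groupOfNames)(member={1}))"
                   filter-dn="ou=Roles,dc=infinispan,dc=org"/>
      </attribute-mapping>
    </identity-mapping>
  </ldap-realm>
</security-realm>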
The principal for LDAP connections must have necessary privileges to perform LDAP queries and access specific attributes.
As an alternative to verifying user credentials with the direct-verification attribute, you can specify an LDAP password with the user-password-mapper element.
The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. Specify a different filter with the filter-name attribute.
The attribute-mapping element retrieves all the groups of which the user is a member. There are typically two ways in which membership information is stored:
- Under group entries that usually have class groupOfNames in the member attribute. In this case, you can use an attribute filter as in the preceding example configuration. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user's DN. The filter then extracts the group entry's CN as specified by from, and adds it to the user's Roles.
- In the user entry in the memberOf attribute. In this case you should use an attribute reference such as the following:

  <attribute-reference reference="memberOf" from="cn" to="Roles" />

  This reference gets all memberOf attributes from the user's entry, extracts the CN as specified by from, and adds them to the user's Roles.
3.2.1. LDAP Realm Principal Rewriting
Some SASL authentication mechanisms, such as GSSAPI, GS2-KRB5 and Negotiate, supply a username that needs to be cleaned up before you can use it to search LDAP servers.
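The rewriting configuration is not shown in this copy; a sketch using a name rewriter to strip the realm suffix from Kerberos-style principals, where the pattern is illustrative:

<ldap-realm name="ldap" url="ldap://my-ldap-server:10389"
            principal="uid=admin,ou=People,dc=infinispan,dc=org"
            credential="strongPassword">
  <!-- Rewrites "user@INFINISPAN.ORG" to "user" before the LDAP search -->
  <name-rewriter>
    <regex-principal-transformer name="domain-remover"
                                 pattern="(.*)@INFINISPAN\.ORG"
                                 replacement="$1"/>
  </name-rewriter>
  <!-- identity-mapping configuration omitted -->
</ldap-realm>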
3.3. Token Realms
Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO.
Endpoint authentication mechanisms
When you configure Data Grid Server to use a token realm, you must configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): OAUTHBEARER
- REST (HTTP): Bearer
Token realm configuration
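The configuration example is missing from this copy; a sketch of a token realm that validates tokens through OAuth2 introspection, where the URLs and client values are placeholders:

<security-realm name="token-realm">
  <token-realm name="token" auth-server-url="https://oauth-server/auth/">
    <!-- Validates tokens against an RFC-7662 introspection endpoint -->
    <oauth2-introspection
        introspection-url="https://oauth-server/auth/realms/datagrid/protocol/openid-connect/token/introspect"
        client-id="infinispan-server"
        client-secret="***"/>
  </token-realm>
</security-realm>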
3.4. Trust Store Realms

Trust store realms use certificates, or certificate chains, that verify Data Grid Server and client identities when they negotiate connections.
- Keystores: contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols.
- Trust stores: contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication.
Client certificate authentication
You must add the require-ssl-client-auth="true" attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates.
Endpoint authentication mechanisms
If you configure Data Grid Server with a keystore only, you can use encryption in combination with any authentication mechanism.
When you configure Data Grid Server to use a client trust store, you must configure endpoints to use the following authentication mechanisms:
- Hot Rod (SASL): EXTERNAL
- REST (HTTP): CLIENT_CERT
Trust store realm configuration
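The configuration example is missing from this copy; a sketch of a security realm with both a keystore and a client trust store, where paths and passwords are placeholders:

<security-realm name="trust-store-realm">
  <server-identities>
    <ssl>
      <!-- Provides the server's SSL/TLS identity -->
      <keystore path="server.p12"
                relative-to="infinispan.server.config.path"
                keystore-password="secret"
                alias="server"/>
      <!-- Holds client certificates or the signing CA certificate -->
      <truststore path="trust.p12"
                  relative-to="infinispan.server.config.path"
                  password="secret"/>
    </ssl>
  </server-identities>
  <!-- Authenticates each client certificate against the trust store -->
  <truststore-realm/>
</security-realm>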
Chapter 4. Configuring Endpoint Authentication Mechanisms
Configure Hot Rod and REST connectors with SASL or HTTP authentication mechanisms to authenticate with clients.
Data Grid servers require user authentication to access the command line interface (CLI) and console as well as the Hot Rod and REST endpoints. Data Grid servers also automatically configure authentication mechanisms based on the security realms that you define.
4.1. Data Grid Server Authentication
Data Grid servers automatically configure authentication mechanisms based on the security realm that you assign to endpoints.
SASL Authentication Mechanisms
The following SASL authentication mechanisms apply to Hot Rod endpoints:
| Security Realm | SASL Authentication Mechanism |
|---|---|
| Property Realms and LDAP Realms | SCRAM-*, DIGEST-*, CRAM-MD5 |
| Token Realms | OAUTHBEARER |
| Trust Realms | EXTERNAL |
| Kerberos Identities | GSSAPI, GS2-KRB5 |
| SSL/TLS Identities | PLAIN |
HTTP Authentication Mechanisms
The following HTTP authentication mechanisms apply to REST endpoints:
| Security Realm | HTTP Authentication Mechanism |
|---|---|
| Property Realms and LDAP Realms | DIGEST |
| Token Realms | BEARER_TOKEN |
| Trust Realms | CLIENT_CERT |
| Kerberos Identities | SPNEGO |
| SSL/TLS Identities | BASIC |
Default Configuration
Data Grid servers provide a security realm named "default" that uses a property realm with plain text credentials defined in $RHDG_HOME/server/conf/users.properties.
The endpoints configuration assigns the "default" security realm to the Hot Rod and REST connectors, as follows:
<endpoints socket-binding="default" security-realm="default"> <hotrod-connector name="hotrod"/> <rest-connector name="rest"/> </endpoints>
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod"/>
<rest-connector name="rest"/>
</endpoints>
As a result of the preceding configuration, Data Grid servers require authentication with a mechanism that the property realm supports.
4.2. Manually Configuring Hot Rod Authentication
Explicitly configure Hot Rod connector authentication to override the default SASL authentication mechanisms that Data Grid servers use for security realms.
Procedure
- Add an authentication definition to the Hot Rod connector configuration.
- Specify which Data Grid security realm the Hot Rod connector uses for authentication.
- Specify the SASL authentication mechanisms for the Hot Rod endpoint to use.
- Configure SASL authentication properties as appropriate.
4.2.1. Hot Rod Authentication Configuration
Hot Rod connector with SCRAM, DIGEST, and PLAIN authentication
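The example is missing from this copy; a sketch of a Hot Rod connector that offers several SASL mechanisms, with illustrative mechanism choices and server name:

<hotrod-connector name="hotrod">
  <authentication security-realm="default">
    <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 DIGEST-SHA-512 DIGEST-SHA-384 PLAIN"
          server-name="infinispan"
          qop="auth"/>
  </authentication>
</hotrod-connector>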
Hot Rod connector with Kerberos authentication
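The Kerberos example is also missing; a sketch, assuming a kerberos-realm security realm and the service principal from the Kerberos chapter:

<hotrod-connector name="hotrod">
  <authentication security-realm="kerberos-realm">
    <sasl mechanisms="GSSAPI GS2-KRB5"
          server-name="datagrid"
          server-principal="hotrod/datagrid@INFINISPAN.ORG"
          qop="auth"/>
  </authentication>
</hotrod-connector>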
4.2.2. Hot Rod Endpoint Authentication Mechanisms

Data Grid supports the following SASL authentication mechanisms with the Hot Rod connector:
| Authentication mechanism | Description | Related details |
|---|---|---|
| PLAIN | Uses credentials in plain-text format. You should use PLAIN only in combination with encryption. | Similar to the Basic HTTP mechanism. |
| DIGEST-* | Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5, DIGEST-SHA, DIGEST-SHA-256, DIGEST-SHA-384, and DIGEST-SHA-512 hashing algorithms. | Similar to the Digest HTTP mechanism. |
| SCRAM-* | Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA, SCRAM-SHA-256, SCRAM-SHA-384, and SCRAM-SHA-512 hashing algorithms. | Similar to the Digest HTTP mechanism. |
| GSSAPI | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Similar to the SPNEGO HTTP mechanism. |
| GS2-KRB5 | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Similar to the SPNEGO HTTP mechanism. |
| EXTERNAL | Uses client certificates. | Similar to the CLIENT_CERT HTTP mechanism. |
| OAUTHBEARER | Uses OAuth tokens and requires a token-realm configuration. | Similar to the BEARER_TOKEN HTTP mechanism. |
4.2.3. SASL Quality of Protection (QoP)
If SASL mechanisms support integrity and privacy protection settings, you can add them to your Hot Rod connector configuration with the qop attribute.
| QoP setting | Description |
|---|---|
| auth | Authentication only. |
| auth-int | Authentication with integrity protection. |
| auth-conf | Authentication with integrity and privacy protection. |
4.2.4. SASL Policies
SASL policies let you control which authentication mechanisms Hot Rod connectors can use.
| Policy | Description | Default value |
|---|---|---|
| forward-secrecy | Use only SASL mechanisms that support forward secrecy between sessions. This means that breaking into one session does not automatically provide information for breaking into future sessions. | false |
| pass-credentials | Use only SASL mechanisms that require client credentials. | false |
| no-plain-text | Do not use SASL mechanisms that are susceptible to simple plain passive attacks. | false |
| no-active | Do not use SASL mechanisms that are susceptible to active, non-dictionary, attacks. | false |
| no-dictionary | Do not use SASL mechanisms that are susceptible to passive dictionary attacks. | false |
| no-anonymous | Do not use SASL mechanisms that accept anonymous logins. | true |
Data Grid cache authorization restricts access to caches based on roles and permissions. If you configure cache authorization, you can then set <no-anonymous value="false" /> to allow anonymous login and delegate access logic to cache authorization.
Hot Rod connector with SASL policy configuration
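The policy example is missing from this copy; a sketch that offers several mechanisms but restricts them with the policy attribute (mechanism list and values illustrative):

<hotrod-connector socket-binding="hotrod" cache-container="default">
  <authentication security-realm="default">
    <sasl mechanisms="PLAIN DIGEST-MD5 GSSAPI EXTERNAL"
          server-name="infinispan"
          qop="auth"
          policy="no-active no-plain-text"/>
  </authentication>
</hotrod-connector>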
As a result of the preceding configuration, the Hot Rod connector uses the GSSAPI mechanism because it is the only mechanism that complies with all policies.
4.3. Manually Configuring REST Authentication
Explicitly configure REST connector authentication to override the default HTTP authentication mechanisms that Data Grid servers use for security realms.
Procedure
- Add an authentication definition to the REST connector configuration.
- Specify which Data Grid security realm the REST connector uses for authentication.
- Specify the authentication mechanisms for the REST endpoint to use.
4.3.1. REST Authentication Configuration
REST connector with BASIC and DIGEST authentication
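The example is missing from this copy; a minimal sketch, with illustrative realm and container names:

<rest-connector socket-binding="default" cache-container="default">
  <authentication security-realm="default" mechanisms="DIGEST BASIC"/>
</rest-connector>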
REST connector with Kerberos authentication
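The Kerberos example is also missing; a sketch, assuming a kerberos-realm security realm and an HTTP service principal:

<rest-connector socket-binding="default" cache-container="default">
  <authentication security-realm="kerberos-realm"
                  mechanisms="SPNEGO"
                  server-principal="HTTP/localhost@INFINISPAN.ORG"/>
</rest-connector>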
4.3.2. REST Endpoint Authentication Mechanisms

Data Grid supports the following authentication mechanisms with the REST connector:
| Authentication mechanism | Description | Related details |
|---|---|---|
| BASIC | Uses credentials in plain-text format. You should use BASIC only in combination with encryption. | Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism. |
| DIGEST | Uses hashing algorithms and nonce values. REST connectors support SHA-512, SHA-256, and MD5 hashing algorithms. | Corresponds to the Digest HTTP authentication scheme and is similar to DIGEST-* SASL mechanisms. |
| SPNEGO | Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. | Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms. |
| BEARER_TOKEN | Uses OAuth tokens and requires a token-realm configuration. | Corresponds to the Bearer HTTP authentication scheme and is similar to the OAUTHBEARER SASL mechanism. |
| CLIENT_CERT | Uses client certificates. | Similar to the EXTERNAL SASL mechanism. |
4.4. Disabling Authentication
In local development environments or on isolated networks you can configure Data Grid to allow unauthenticated client requests.
When you disable user authentication you should also disable authorization in your Data Grid security configuration.
Procedure
- Open infinispan.xml for editing.
- Remove any security-realm attributes from the endpoints configuration.
- Ensure that the Hot Rod and REST connectors do not include any authentication configuration.

  For example, the following configuration allows unauthenticated access to Data Grid:

  <endpoints socket-binding="default">
    <hotrod-connector name="hotrod"/>
    <rest-connector name="rest"/>
  </endpoints>

- Remove any authorization elements from the security configuration for the cache-container and each cache configuration.
Chapter 5. Encrypting Data Grid Server Connections
You can secure Data Grid Server connections using SSL/TLS encryption by configuring a keystore that contains public and private keys for Data Grid. You can also configure client certificate authentication if you require mutual TLS.
5.1. Configuring Data Grid Server Keystores
Add keystores to Data Grid Server and configure it to present SSL/TLS certificates that verify its identity to clients. If a security realm contains TLS/SSL identities, it encrypts any connections to Data Grid Server endpoints that use that security realm.
Prerequisites
- Create a keystore that contains certificates, or certificate chains, for Data Grid Server.
Data Grid Server supports the following keystore formats: JKS, JCEKS, PKCS12, BKS, BCFKS, and UBER.
In production environments, server certificates should be signed by a trusted Certificate Authority, either Root or Intermediate CA.
Procedure
- Add the keystore that contains SSL/TLS identities for Data Grid Server to the $RHDG_HOME/server/conf directory.
- Add a server-identities definition to the Data Grid Server security realm.
- Specify the keystore file name with the path attribute.
- Provide the keystore password and certificate alias with the keystore-password and alias attributes.
Data Grid Server keystore configuration
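The example is missing from this copy; a minimal sketch of the server-identities definition that the procedure describes, with placeholder path, password, and alias values:

<security-realm name="default">
  <server-identities>
    <ssl>
      <keystore path="server.p12"
                relative-to="infinispan.server.config.path"
                keystore-password="secret"
                alias="server"/>
    </ssl>
  </server-identities>
</security-realm>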
Next steps
Configure clients with a trust store so they can verify SSL/TLS identities for Data Grid Server.
5.1.1. Automatically Generating Keystores
Configure Data Grid servers to automatically generate keystores at startup.
Automatically generated keystores:
- Should not be used in production environments.
- Are generated whenever necessary; for example, while obtaining the first connection from a client.
- Contain certificates that you can use directly in Hot Rod clients.
Procedure
- Include the generate-self-signed-certificate-host attribute for the keystore element in the server configuration.
- Specify a hostname for the server certificate as the value.
SSL server identity with a generated keystore
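The example is missing from this copy; a sketch that adds the generation attribute to a keystore declaration, with localhost as the illustrative hostname:

<security-realm name="default">
  <server-identities>
    <ssl>
      <!-- Generates a self-signed certificate for localhost if the keystore does not exist -->
      <keystore path="server.p12"
                relative-to="infinispan.server.config.path"
                keystore-password="secret"
                alias="server"
                generate-self-signed-certificate-host="localhost"/>
    </ssl>
  </server-identities>
</security-realm>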
5.1.2. Configuring TLS versions and cipher suites
When using SSL/TLS encryption to secure your deployment, you can configure Data Grid Server to use specific versions of the TLS protocol as well as specific cipher suites within the protocol.
Procedure
- Add the engine element to the SSL configuration for Data Grid Server.
- Configure Data Grid to use one or more TLS versions with the enabled-protocols attribute.

  Data Grid Server supports TLS version 1.2 and 1.3 by default. If appropriate you can set TLSv1.3 only to restrict the security protocol for client connections. Data Grid does not recommend enabling TLSv1.1 because it is an older protocol with limited support and provides weak security. You should never enable any version of TLS older than 1.1.

  Warning: If you modify the SSL engine configuration for Data Grid Server you must explicitly configure TLS versions with the enabled-protocols attribute. Omitting the enabled-protocols attribute allows any TLS version.

  <engine enabled-protocols="TLSv1.3 TLSv1.2" />

- Configure Data Grid to use one or more cipher suites with the enabled-ciphersuites attribute.

  You must ensure that you set a cipher suite that supports any protocol features you plan to use; for example HTTP/2 ALPN.
SSL engine configuration
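The full example is missing from this copy; a sketch combining both attributes, where the cipher suite names and list separator should be checked against your JVM or OpenSSL provider:

<server-identities>
  <ssl>
    <keystore path="server.p12"
              relative-to="infinispan.server.config.path"
              keystore-password="secret"
              alias="server"/>
    <engine enabled-protocols="TLSv1.3 TLSv1.2"
            enabled-ciphersuites="TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256"/>
  </ssl>
</server-identities>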
5.2. Configuring Client Certificate Authentication
Configure Data Grid Server to use mutual TLS to secure client connections.
You can configure Data Grid to verify client identities from certificates in a trust store in two ways:
- Require a trust store that contains only the signing certificate, which is typically a Certificate Authority (CA). Any client that presents a certificate signed by the CA can connect to Data Grid.
- Require a trust store that contains all client certificates in addition to the signing certificate. Only clients that present a signed certificate that is present in the trust store can connect to Data Grid.
As an alternative to providing trust stores, you can use shared system certificates.
Prerequisites
- Create a client trust store that contains either the CA certificate or all public certificates.
- Create a keystore for Data Grid Server and configure an SSL/TLS identity.
Procedure
- Add the require-ssl-client-auth="true" parameter to your endpoints configuration.
- Add the client trust store to the $RHDG_HOME/server/conf directory.
- Specify the path and password attributes for the truststore element in the Data Grid Server security realm configuration.
- Add the <truststore-realm/> element to the security realm if you want Data Grid Server to authenticate each client certificate.
Data Grid Server trust store realm configuration
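The example is missing from this copy; a sketch of the endpoint side, assuming a trust store realm like the one shown in the Trust Store Realms chapter:

<endpoints socket-binding="default" security-realm="trust-store-realm" require-ssl-client-auth="true">
  <hotrod-connector name="hotrod"/>
  <rest-connector name="rest"/>
</endpoints>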
Next steps
- Set up authorization with client certificates in the Data Grid Server configuration if you control access with security roles and permissions.
- Configure clients to negotiate SSL/TLS connections with Data Grid Server.
5.3. Configuring Authorization with Client Certificates
Enabling client certificate authentication means you do not need to specify Data Grid user credentials in client configuration. Instead, you must associate roles with the Common Name (CN) field in the client certificate(s).
Prerequisites
- Provide clients with a Java keystore that contains either their public certificates or part of the certificate chain, typically a public CA certificate.
- Configure Data Grid Server to perform client certificate authentication.
Procedure
- Enable the common-name-role-mapper in the security authorization configuration.
- Assign the Common Name (CN) from the client certificate a role with the appropriate permissions, as in the sketch below.
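The configuration example is missing from this copy; a minimal sketch, assuming a client certificate whose CN matches the illustrative role name "admin":

<cache-container name="default">
  <security>
    <authorization>
      <!-- Maps the certificate CN directly to a role name -->
      <common-name-role-mapper/>
      <!-- A client certificate with CN=admin is assigned the admin role -->
      <role name="admin" permissions="ALL"/>
    </authorization>
  </security>
</cache-container>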
Chapter 6. Configuring Kerberos Identities for Data Grid Server
Provide Data Grid Server endpoints with Kerberos identities to secure connections with clients.
6.1. Setting Up Kerberos Identities

Kerberos identities use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords.

Keytab files can contain both user and service account principals. However, Data Grid servers use service account principals only. As a result, Data Grid servers can provide identity to clients and allow clients to authenticate with Kerberos servers.
In most cases, you create unique principals for the Hot Rod and REST connectors. For example, you have a "datagrid" server in the "INFINISPAN.ORG" domain. In this case you should create the following service principals:
- hotrod/datagrid@INFINISPAN.ORG identifies the Hot Rod service.
- HTTP/datagrid@INFINISPAN.ORG identifies the REST service.
Procedure
Create keytab files for the Hot Rod and REST services.
  - Linux

    $ ktutil
    ktutil: addent -password -p datagrid@INFINISPAN.ORG -k 1 -e aes256-cts
    Password for datagrid@INFINISPAN.ORG: [enter your password]
    ktutil: wkt http.keytab
    ktutil: quit

  - Microsoft Windows

    $ ktpass -princ HTTP/datagrid@INFINISPAN.ORG -pass * -mapuser INFINISPAN\USER_NAME
    $ ktab -k http.keytab -a HTTP/datagrid@INFINISPAN.ORG
- Copy the keytab files to the $RHDG_HOME/server/conf directory.
- Add a server-identities definition to the Data Grid server security realm.
- Specify the location of keytab files that provide service principals to Hot Rod and REST connectors.
- Name the Kerberos service principals.
6.2. Kerberos Identity Configuration
The following example configures Kerberos identities for Data Grid Server:
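The example itself is missing from this copy; a sketch that declares both service principals from the preceding procedure, with placeholder keytab paths:

<security-realm name="kerberos-realm">
  <server-identities>
    <!-- Service principal for the Hot Rod connector -->
    <kerberos keytab-path="hotrod.keytab"
              principal="hotrod/datagrid@INFINISPAN.ORG"
              required="true"/>
    <!-- Service principal for the REST connector -->
    <kerberos keytab-path="http.keytab"
              principal="HTTP/datagrid@INFINISPAN.ORG"
              required="true"/>
  </server-identities>
</security-realm>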
Chapter 7. Storing Data Grid Server Credentials in Keystores
External services require credentials to authenticate with Data Grid Server. To protect sensitive text strings such as passwords, add them to a credential keystore rather than directly in Data Grid Server configuration files.
You can then configure Data Grid Server to decrypt passwords for establishing connections with services such as databases or LDAP directories.
Plain-text passwords in $RHDG_HOME/server/conf are unencrypted. Any user account with read access to the host filesystem can view plain-text passwords.
While credential keystores are password-protected stores for encrypted passwords, any user account with write access to the host filesystem can tamper with the keystore itself.
To completely secure Data Grid Server credentials, you should grant read-write access only to user accounts that can configure and run Data Grid Server.
7.1. Setting Up Credential Keystores
Create keystores that encrypt credentials for Data Grid Server access.
A credential keystore contains at least one alias that is associated with an encrypted password. After you create a keystore, you specify the alias in a connection configuration such as a database connection pool. Data Grid Server then decrypts the password for that alias from the keystore when the service attempts authentication.
You can create as many credential keystores with as many aliases as required.
Procedure
- Open a terminal in $RHDG_HOME.
- Create a keystore and add credentials to it with the credentials command.

  Tip: By default, keystores are of type PKCS12. Run help credentials for details on changing keystore defaults.

  The following example shows how to create a keystore that contains an alias of "dbpassword" for the password "changeme". When you create a keystore you also specify a password for the keystore with the -p argument.

  - Linux

    $ bin/cli.sh credentials add dbpassword -c changeme -p "secret1234!"

  - Microsoft Windows

    $ bin\cli.bat credentials add dbpassword -c changeme -p "secret1234!"

- Check that the alias is added to the keystore.

  $ bin/cli.sh credentials ls -p "secret1234!"
  dbpassword

- Configure Data Grid to use the credential keystore.
  - Specify the name and location of the credential keystore in the credential-stores configuration.
  - Provide the credential keystore and alias in the credential-reference configuration.

    Tip: Attributes in the credential-reference configuration are optional.

    - store is required only if you have multiple keystores.
    - alias is required only if the keystore contains multiple aliases.
7.2. Credential Keystore Configuration
Review example configurations for credential keystores in Data Grid Server configuration.
Credential keystore
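The snippet is missing from this copy; a sketch of a credential keystore declaration, with placeholder store name, path, and keystore password:

<credential-stores>
  <credential-store name="credentials"
                    path="credentials.pfx"
                    relative-to="infinispan.server.config.path">
    <clear-text-credential clear-text="secret1234!"/>
  </credential-store>
</credential-stores>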
Datasource connection
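The snippet is missing from this copy; a sketch of a datasource that resolves its password from the keystore, with placeholder driver, URL, and alias values:

<data-sources>
  <data-source name="postgres" jndi-name="jdbc/postgres">
    <connection-factory driver="org.postgresql.Driver"
                        username="test"
                        url="jdbc:postgresql://postgres.host.example.com:5432/mydb">
      <!-- Looks up the password for the "dbpassword" alias at connection time -->
      <credential-reference store="credentials" alias="dbpassword"/>
    </connection-factory>
    <connection-pool max-size="10" min-size="1"/>
  </data-source>
</data-sources>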
LDAP connection
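The snippet is missing from this copy; a sketch of an LDAP realm that resolves the principal's password from the keystore, with placeholder URL, DN, and alias:

<ldap-realm name="ldap" url="ldap://my-ldap-server:10389"
            principal="uid=admin,ou=People,dc=infinispan,dc=org">
  <!-- Resolves the LDAP principal's password from the credential keystore -->
  <credential-reference store="credentials" alias="ldappassword"/>
  <!-- identity-mapping configuration omitted -->
</ldap-realm>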
Chapter 8. Endpoint IP Filtering
Configure IP Filtering rules on the endpoints to accept or reject connections based on the client address.
8.1. Data Grid Server IP Filter Configuration
Data Grid endpoints and connectors can specify one or more IP filtering rules. These rules specify the type of action to take when a client which matches a supplied CIDR block connects. IP filtering rules are applied in order up until the first one that matches.
A CIDR block is a compact representation of an IP address and its associated network mask. CIDR notation specifies an IP address, a slash ('/') character, and a decimal number. The decimal number is the count of leading 1 bits in the network mask. The number can also be thought of as the width, in bits, of the network prefix. The IP address in CIDR notation is always represented according to the standards for IPv4 or IPv6.
The address can denote a specific interface address, including a host identifier, such as 10.0.0.1/8, or it can be the beginning address of an entire network interface range using a host identifier of 0, as in 10.0.0.0/8 or 10/8.
For example:
- 192.168.100.14/24 represents the IPv4 address 192.168.100.14 and its associated network prefix 192.168.100.0, or equivalently, its subnet mask 255.255.255.0, which has 24 leading 1-bits.
- The IPv4 block 192.168.100.0/22 represents the 1024 IPv4 addresses from 192.168.100.0 to 192.168.103.255.
- The IPv6 block 2001:db8::/48 represents the block of IPv6 addresses from 2001:db8:0:0:0:0:0:0 to 2001:db8:0:ffff:ffff:ffff:ffff:ffff.
- ::1/128 represents the IPv6 loopback address. Its prefix length is 128, which is the number of bits in the address.
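The configuration that the next paragraph refers to is missing from this copy; a sketch that matches its description, accepting the two CIDR blocks and rejecting everything else:

<endpoints socket-binding="default" security-realm="default">
  <ip-filter>
    <accept from="192.168.0.0/16"/>
    <accept from="10.0.0.0/8"/>
    <!-- Catch-all rule that rejects all other addresses -->
    <reject from="/0"/>
  </ip-filter>
  <hotrod-connector name="hotrod"/>
  <rest-connector name="rest"/>
</endpoints>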
As a result of the preceding configuration, Data Grid servers accept connections only from addresses in the 192.168.0.0/16 and 10.0.0.0/8 CIDR blocks. Data Grid servers reject all other connections.
8.2. Inspecting and Modifying Data Grid Server IP Filter Rules
Server IP filter rules can be manipulated via the CLI.
Procedure
- Open a terminal in $RHDG_HOME.
- Inspect and modify IP filter rules with the server connector ipfilter command as required.
  - List all IP filtering rules active on a connector across the cluster:

    [//containers/default]> server connector ipfilter ls endpoint-default

  - Set IP filtering rules across the cluster.

    Note: This command replaces any existing rules.

    [//containers/default]> server connector ipfilter set endpoint-default --rules=ACCEPT/192.168.0.0/16,REJECT/10.0.0.0/8

  - Remove all IP filtering rules on a connector across the cluster.

    [//containers/default]> server connector ipfilter clear endpoint-default
Chapter 9. Configuring User Authorization
Authorization is a security feature that requires users to have certain permissions before they can access caches or interact with Data Grid resources. You assign roles to users that provide different levels of permissions, from read-only access to full, super user privileges.
9.1. Enabling Authorization in Cache Configuration Copy linkLink copied to clipboard!
Use authorization in your cache configuration to restrict user access. Before they can read or write cache entries, or create and delete caches, users must have a role with a sufficient level of permission.
Procedure
- Open your infinispan.xml configuration for editing.
- If it is not already declared, add the <authorization /> tag inside the security element for the cache-container. This enables authorization for the Cache Manager and provides a global set of roles and permissions that caches can inherit.
- Add the <authorization /> tag to each cache for which Data Grid restricts access based on user roles.
The following configuration example shows how to use implicit authorization configuration with default roles and permissions:
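A minimal sketch, assuming the cache-container security schema (the cache name is illustrative):
<cache-container name="default">
   <security>
      <authorization/>
   </security>
   <distributed-cache name="secured-cache" mode="SYNC">
      <security>
         <authorization/>
      </security>
   </distributed-cache>
</cache-container>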
9.2. User Roles and Permissions Copy linkLink copied to clipboard!
Data Grid includes a default set of roles that grant users permissions to access data and interact with Data Grid resources.
ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals to authorization roles.
ClusterRoleMapper matches principal names to role names. A user named admin gets admin permissions automatically, a user named deployer gets deployer permissions, and so on.
| Role | Permissions | Description |
|---|---|---|
| admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
| deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
| application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. |
| observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
| monitor | MONITOR | Can view statistics via JMX and the metrics endpoint. |
9.3. How Security Authorization Works Copy linkLink copied to clipboard!
Data Grid authorization secures your installation by restricting user access.
User applications or clients must belong to a role that is assigned with sufficient permissions before they can perform operations on Cache Managers or caches.
For example, you configure authorization on a specific cache instance so that invoking Cache.get() requires an identity to be assigned a role with read permission while Cache.put() requires a role with write permission.
In this scenario, if a user application or client with the io role attempts to write an entry, Data Grid denies the request and throws a security exception. If a user application or client with the writer role sends a write request, Data Grid validates authorization and issues a token for subsequent operations.
Identities
Identities are security Principals of type java.security.Principal. Subjects, implemented with the javax.security.auth.Subject class, represent a group of security Principals. In other words, a Subject represents a user and all groups to which it belongs.
Identities to roles
Data Grid uses role mappers so that security principals correspond to roles, to which you assign one or more permissions.
9.3.1. Permissions Copy linkLink copied to clipboard!
Authorization roles have different permissions with varying levels of access to Data Grid. Permissions let you restrict user access to both Cache Managers and caches.
9.3.1.1. Cache Manager permissions Copy linkLink copied to clipboard!
| Permission | Function | Description |
|---|---|---|
| CONFIGURATION | defineConfiguration | Defines new cache configurations. |
| LISTEN | addListener | Registers listeners against a Cache Manager. |
| LIFECYCLE | stop | Stops the Cache Manager. |
| CREATE | createCache, removeCache | Create and remove container resources such as caches, counters, schemas, and scripts. |
| MONITOR | getStats | Allows access to JMX statistics and the metrics endpoint. |
| ALL | - | Includes all Cache Manager permissions. |
9.3.1.2. Cache permissions Copy linkLink copied to clipboard!
| Permission | Function | Description |
|---|---|---|
| READ | get, contains | Retrieves entries from a cache. |
| WRITE | put, putIfAbsent, replace, remove, evict | Writes, replaces, removes, evicts data in a cache. |
| EXEC | distexec, streams | Allows code execution against a cache. |
| LISTEN | addListener | Registers listeners against a cache. |
| BULK_READ | keySet, values, entrySet, query | Executes bulk retrieve operations. |
| BULK_WRITE | clear, putAll | Executes bulk write operations. |
| LIFECYCLE | start, stop | Starts and stops a cache. |
| ADMIN | getVersion, addInterceptor, removeInterceptor, getComponentRegistry, getStats, getXAResource | Allows access to underlying components and internal structures. |
| MONITOR | getStats | Allows access to JMX statistics and the metrics endpoint. |
| ALL | - | Includes all cache permissions. |
| ALL_READ | - | Combines the READ and BULK_READ permissions. |
| ALL_WRITE | - | Combines the WRITE and BULK_WRITE permissions. |
9.3.2. Role Mappers Copy linkLink copied to clipboard!
Data Grid includes a PrincipalRoleMapper API that maps security Principals in a Subject to authorization roles that you can assign to users.
9.3.2.1. Cluster role mappers Copy linkLink copied to clipboard!
ClusterRoleMapper uses a persistent replicated cache to dynamically store principal-to-role mappings for the default roles and permissions.
By default, ClusterRoleMapper uses the Principal name as the role name. It implements org.infinispan.security.MutableRoleMapper, which exposes methods to change role mappings at runtime.
- Java class: org.infinispan.security.mappers.ClusterRoleMapper
- Declarative configuration: <cluster-role-mapper />
9.3.2.2. Identity role mappers Copy linkLink copied to clipboard!
IdentityRoleMapper uses the Principal name as the role name.
- Java class: org.infinispan.security.mappers.IdentityRoleMapper
- Declarative configuration: <identity-role-mapper />
9.3.2.3. CommonName role mappers Copy linkLink copied to clipboard!
CommonNameRoleMapper uses the Common Name (CN) as the role name if the Principal name is a Distinguished Name (DN).
For example, the DN cn=managers,ou=people,dc=example,dc=com maps to the managers role.
- Java class: org.infinispan.security.mappers.CommonNameRoleMapper
- Declarative configuration: <common-name-role-mapper />
9.3.2.4. Custom role mappers Copy linkLink copied to clipboard!
Custom role mappers are implementations of org.infinispan.security.PrincipalRoleMapper.
- Declarative configuration: <custom-role-mapper class="my.custom.RoleMapper" />
9.4. Access Control List (ACL) Cache Copy linkLink copied to clipboard!
For optimal performance, Data Grid internally caches the roles that you grant to users. Whenever you grant or deny roles to users, Data Grid flushes the ACL cache to ensure user permissions are applied correctly.
If necessary, you can disable the ACL cache or configure it with the cache-size and cache-timeout attributes.
<security cache-size="1000" cache-timeout="300000"> <authorization /> </security>
<security cache-size="1000" cache-timeout="300000">
<authorization />
</security>
9.5. Customizing Roles and Permissions Copy linkLink copied to clipboard!
You can customize authorization settings in your Data Grid configuration to use role mappers with different combinations of roles and permissions.
Procedure
- Open your infinispan.xml configuration for editing.
- Configure authorization for the cache-container by declaring a role mapper and a set of roles and permissions.
- Configure authorization for caches to restrict access based on user roles.
The following configuration example shows how to configure security authorization with roles and permissions:
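A sketch using element names from the cache-container security schema; the role names and permission sets are illustrative:
<cache-container name="default">
   <security>
      <authorization>
         <identity-role-mapper/>
         <role name="admin" permissions="ALL"/>
         <role name="reader" permissions="READ"/>
         <role name="writer" permissions="WRITE"/>
         <role name="supervisor" permissions="READ WRITE EXEC"/>
      </authorization>
   </security>
   <distributed-cache name="secured-cache" mode="SYNC">
      <security>
         <authorization roles="admin supervisor reader writer"/>
      </security>
   </distributed-cache>
</cache-container>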
9.6. Disabling Security Authorization Copy linkLink copied to clipboard!
In local development environments you can disable authorization so that users do not need roles and permissions. Disabling security authorization means that any user can access data and interact with Data Grid resources.
Procedure
- Open your infinispan.xml configuration for editing.
- Remove any authorization elements from the security configuration for the cache-container and each cache configuration.
9.7. Configuring Authorization with Client Certificates Copy linkLink copied to clipboard!
Enabling client certificate authentication means you do not need to specify Data Grid user credentials in client configuration. Instead, you associate roles with the Common Name (CN) field in the client certificate(s).
Prerequisites
- Provide clients with a Java keystore that contains either their public certificates or part of the certificate chain, typically a public CA certificate.
- Configure Data Grid Server to perform client certificate authentication.
Procedure
- Enable the common-name-role-mapper in the security authorization configuration.
- Assign a role with the appropriate permissions to the Common Name (CN) from the client certificate.
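A sketch of the corresponding configuration, assuming the common-name-role-mapper element; the role name stands in for the CN of your client certificate:
<cache-container name="default">
   <security>
      <authorization>
         <common-name-role-mapper/>
         <!-- "client-CN" is a placeholder for the CN field of the client certificate. -->
         <role name="client-CN" permissions="ALL"/>
      </authorization>
   </security>
</cache-container>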
Chapter 10. Setting Up Data Grid Clusters Copy linkLink copied to clipboard!
Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer.
10.1. Default JGroups Stacks Copy linkLink copied to clipboard!
Data Grid provides default JGroups stack files, default-jgroups-*.xml, in the default-configs directory inside the infinispan-core-12.1.11.Final-redhat-00001.jar file.
You can find this JAR file in the $RHDG_HOME/lib directory.
| File name | Stack name | Description |
|---|---|---|
| default-jgroups-udp.xml | udp | Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets. |
| default-jgroups-tcp.xml | tcp | Uses TCP for transport and the MPING protocol for discovery, which uses UDP multicast. Suitable for smaller clusters (under 100 nodes) that use distributed caches, because TCP is more efficient than UDP as a point-to-point protocol. |
| default-jgroups-kubernetes.xml | kubernetes | Uses TCP for transport and DNS_PING for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available. |
| default-jgroups-ec2.xml | ec2 | Uses TCP for transport and NATIVE_S3_PING for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available. |
| default-jgroups-google.xml | google | Uses TCP for transport and GOOGLE_PING2 for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available. |
| default-jgroups-azure.xml | azure | Uses TCP for transport and AZURE_PING for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available. |
10.2. Cluster Discovery Protocols Copy linkLink copied to clipboard!
Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters.
There are two types of discovery mechanisms that Data Grid can use:
- Generic discovery protocols that work on most networks and do not rely on external services.
- Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters. For instance, the DNS_PING protocol performs discovery through DNS server records.
Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose.
10.2.1. PING Copy linkLink copied to clipboard!
PING, or UDPPING, is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol.
When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node and its own address. If no nodes respond to the PING request, the joining node becomes the coordinator node in a new cluster.
PING configuration example
<PING num_discovery_runs="3"/>
10.2.2. TCPPING Copy linkLink copied to clipboard!
TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members.
With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically.
TCPPING configuration example
<TCP bind_port="7800" />
<TCPPING timeout="3000"
initial_hosts="${jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}"
port_range="0"
num_initial_members="3"/>
<TCP bind_port="7800" />
<TCPPING timeout="3000"
initial_hosts="${jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}"
port_range="0"
num_initial_members="3"/>
10.2.3. MPING Copy linkLink copied to clipboard!
MPING uses IP multicast to discover the initial membership of Data Grid clusters.
You can use MPING to replace TCPPING discovery in TCP stacks, using multicast for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks.
MPING configuration example
<MPING mcast_addr="${jgroups.mcast_addr:228.6.7.8}"
mcast_port="${jgroups.mcast_port:46655}"
num_discovery_runs="3"
ip_ttl="${jgroups.udp.ip_ttl:2}"/>
10.2.4. TCPGOSSIP Copy linkLink copied to clipboard!
Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes.
You inject the address (IP:PORT) of the Gossip router into Data Grid nodes as follows:
- Pass the address as a system property to the JVM; for example, -DGossipRouterAddress="10.10.2.4[12001]".
- Reference that system property in the JGroups configuration file.
Gossip router configuration example
<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000"
initial_hosts="${GossipRouterAddress}"
num_initial_members="3" />
<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000"
initial_hosts="${GossipRouterAddress}"
num_initial_members="3" />
10.2.5. JDBC_PING Copy linkLink copied to clipboard!
JDBC_PING uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection.
Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database.
JDBC_PING configuration example
<JDBC_PING connection_url="jdbc:mysql://localhost:3306/database_name"
connection_username="user"
connection_password="password"
connection_driver="com.mysql.jdbc.Driver"/>
Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING.
10.2.6. DNS_PING Copy linkLink copied to clipboard!
JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.
DNS_PING configuration example
<dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" />
10.2.7. Cloud Discovery Protocols Copy linkLink copied to clipboard!
Data Grid includes default JGroups stacks that use discovery protocol implementations that are specific to cloud providers.
| Discovery protocol | Default stack file | Artifact | Version |
|---|---|---|---|
| NATIVE_S3_PING | default-jgroups-ec2.xml | | |
| GOOGLE_PING2 | default-jgroups-google.xml | | |
| AZURE_PING | default-jgroups-azure.xml | | |
Providing Dependencies for Cloud Discovery Protocols
To use NATIVE_S3_PING, GOOGLE_PING2, or AZURE_PING cloud discovery protocols, you need to provide dependent libraries to Data Grid.
Procedure
- Download the artifact JAR file and all dependencies.
Add the artifact JAR file and all dependencies to the $RHDG_HOME/server/lib directory of your Data Grid Server installation.
For more details, see Downloading artifacts for JGroups cloud discovery protocols for Data Grid Server (Red Hat knowledgebase article).
You can then configure the cloud discovery protocol as part of a JGroups stack file or with system properties.
10.3. Using the Default JGroups Stacks Copy linkLink copied to clipboard!
Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels.
Data Grid provides preconfigured JGroups stacks for UDP and TCP protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements.
Procedure
Do one of the following to use one of the default JGroups stacks:
Use the stack attribute in your infinispan.xml file.
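A minimal sketch, assuming the cache-container transport schema:
<infinispan>
   <cache-container default-cache="replicatedCache">
      <!-- Uses the default UDP stack for cluster transport. -->
      <transport cluster="${infinispan.cluster.name}" stack="udp"/>
   </cache-container>
</infinispan>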
Use the cluster-stack argument to set the JGroups stack file when Data Grid Server starts:
$ bin/server.sh --cluster-stack=udp
Verification
Data Grid logs the following message to indicate which stack it uses:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp
10.4. Customizing JGroups Stacks Copy linkLink copied to clipboard!
Adjust and tune properties to create a cluster transport configuration that works for your network requirements.
Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties.
Procedure
- Create a new JGroups stack declaration in your infinispan.xml file.
- Add the extends attribute and specify a JGroups stack to inherit properties from.
- Use the stack.combine attribute to modify properties for protocols configured in the inherited stack.
- Use the stack.position attribute to define the location for your custom stack.
- Specify the stack name as the value for the stack attribute in the transport configuration.
For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack as follows:
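A sketch of such a stack; the Gossip router host and keystore values are illustrative:
<infinispan>
   <jgroups>
      <!-- Inherits the default TCP stack, swaps discovery for a Gossip router, and adds symmetric encryption. -->
      <stack name="my-stack" extends="tcp">
         <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                    stack.combine="REPLACE"
                    stack.position="MPING"/>
         <SYM_ENCRYPT keystore_name="mykeystore.p12"
                      keystore_type="PKCS12"
                      store_password="changeit"
                      key_password="changeit"
                      alias="myKey"
                      stack.combine="INSERT_AFTER"
                      stack.position="VERIFY_SUSPECT"/>
      </stack>
   </jgroups>
   <cache-container default-cache="replicatedCache">
      <transport cluster="${infinispan.cluster.name}" stack="my-stack"/>
   </cache-container>
</infinispan>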
Check Data Grid logs to ensure it uses the stack.
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack
Reference
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
10.4.1. Inheritance Attributes Copy linkLink copied to clipboard!
When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending.
- stack.position specifies protocols to modify.
- stack.combine uses the following values to extend JGroups stacks:
| Value | Description |
|---|---|
| COMBINE | Overrides protocol properties. |
| REPLACE | Replaces protocols. |
| INSERT_AFTER | Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point. |
| INSERT_BEFORE | Inserts a protocol into the stack before another protocol. Affects the protocol that you specify as the insertion point. |
| REMOVE | Removes protocols from the stack. |
Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as NAKACK2 after the SYM_ENCRYPT or ASYM_ENCRYPT protocol so that NAKACK2 is secured.
10.5. Using JGroups System Properties Copy linkLink copied to clipboard!
Pass system properties to Data Grid at startup to tune cluster transport.
Procedure
- Use -D<property-name>=<property-value> arguments to set JGroups system properties as required.
For example, set a custom bind port and IP address as follows:
$ bin/server.sh -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0
10.5.1. Cluster Transport Properties Copy linkLink copied to clipboard!
Use the following properties to customize JGroups cluster transport.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| jgroups.bind.address | Bind address for cluster transport. | SITE_LOCAL | Optional |
| jgroups.bind.port | Bind port for the socket. | 7800 | Optional |
| jgroups.mcast_addr | IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. | 228.6.7.8 | Optional |
| jgroups.mcast_port | Port for the multicast socket. | 46655 | Optional |
| jgroups.ip_ttl | Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | 2 | Optional |
| jgroups.thread_pool.min_threads | Minimum number of threads for the thread pool. | 0 | Optional |
| jgroups.thread_pool.max_threads | Maximum number of threads for the thread pool. | 200 | Optional |
| jgroups.join_timeout | Maximum number of milliseconds to wait for join requests to succeed. | 2000 | Optional |
| jgroups.thread_dumps_threshold | Number of times a thread pool needs to be full before a thread dump is logged. | 10000 | Optional |
10.5.2. System Properties for Cloud Discovery Protocols Copy linkLink copied to clipboard!
Use the following properties to configure JGroups discovery protocols for hosted platforms.
10.5.2.1. Amazon EC2 Copy linkLink copied to clipboard!
System properties for configuring NATIVE_S3_PING.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| jgroups.s3.region_name | Name of the Amazon S3 region. | No default value. | Optional |
| jgroups.s3.bucket_name | Name of the Amazon S3 bucket. The name must exist and be unique. | No default value. | Optional |
10.5.2.2. Google Cloud Platform Copy linkLink copied to clipboard!
System properties for configuring GOOGLE_PING2.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| jgroups.google.bucket_name | Name of the Google Compute Engine bucket. The name must exist and be unique. | No default value. | Required |
10.5.2.3. Azure Copy linkLink copied to clipboard!
System properties for AZURE_PING.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| jboss.jgroups.azure_ping.storage_account_name | Name of the Azure storage account. The name must exist and be unique. | No default value. | Required |
| jboss.jgroups.azure_ping.storage_access_key | Name of the Azure storage access key. | No default value. | Required |
| jboss.jgroups.azure_ping.container | Valid DNS name of the container that stores ping information. | No default value. | Required |
10.5.2.4. OpenShift Copy linkLink copied to clipboard!
System properties for DNS_PING.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| jgroups.dns.query | Sets the DNS record that returns cluster members. | No default value. | Required |
10.6. Using Inline JGroups Stacks Copy linkLink copied to clipboard!
You can insert complete JGroups stack definitions into infinispan.xml files.
Procedure
Embed a custom JGroups stack declaration in your infinispan.xml file.
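A sketch of an inline stack; the protocol list and attribute values are illustrative, not a tuned production configuration:
<infinispan>
   <jgroups>
      <!-- Defines a complete custom stack inline. -->
      <stack name="prod">
         <TCP bind_port="7800" port_range="30"/>
         <MPING mcast_addr="${jgroups.mping.mcast_addr:228.6.7.8}"
                mcast_port="${jgroups.mping.mcast_port:46655}"
                num_discovery_runs="3"
                ip_ttl="${jgroups.udp.ip_ttl:2}"/>
         <MERGE3/>
         <FD_SOCK/>
         <FD_ALL timeout="3000" interval="1000"/>
         <VERIFY_SUSPECT timeout="1000"/>
         <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="100"/>
         <UNICAST3 xmit_interval="100"/>
         <pbcast.STABLE desired_avg_gossip="2000" max_bytes="1M"/>
         <pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:2000}"/>
         <UFC max_credits="4m" min_threshold="0.40"/>
         <MFC max_credits="4m" min_threshold="0.40"/>
         <FRAG3/>
      </stack>
   </jgroups>
   <cache-container default-cache="replicatedCache">
      <transport stack="prod"/>
   </cache-container>
</infinispan>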
10.7. Using External JGroups Stacks Copy linkLink copied to clipboard!
Reference external files that define custom JGroups stacks in infinispan.xml files.
Procedure
- Add custom JGroups stack files to the $RHDG_HOME/server/conf directory. Alternatively, you can specify an absolute path when you declare the external stack file.
- Reference the external stack file with the stack-file element.
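A minimal sketch, assuming the stack-file element from the jgroups schema; the file name is illustrative:
<infinispan>
   <jgroups>
      <!-- Loads the stack definition from server/conf/prod-jgroups-tcp.xml. -->
      <stack-file name="prod-tcp" path="prod-jgroups-tcp.xml"/>
   </jgroups>
   <cache-container default-cache="replicatedCache">
      <transport stack="prod-tcp"/>
   </cache-container>
</infinispan>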
10.8. Encrypting Cluster Transport Copy linkLink copied to clipboard!
Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join.
10.8.1. Data Grid Cluster Security Copy linkLink copied to clipboard!
To secure cluster traffic, you configure Data Grid nodes to encrypt JGroups message payloads with secret keys.
Data Grid nodes can obtain secret keys from either:
- The coordinator node (asymmetric encryption).
- A shared keystore (symmetric encryption).
Retrieving secret keys from coordinator nodes
You configure asymmetric encryption by adding the ASYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys.
When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks.
Asymmetric encryption secures cluster traffic as follows:
- The first node in the Data Grid cluster, the coordinator node, generates a secret key.
- A joining node performs certificate authentication with the coordinator to mutually verify identity.
- The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
- The coordinator node encrypts the secret key with the public key and returns it to the joining node.
- The joining node decrypts and installs the secret key.
- The node joins the cluster, encrypting and decrypting messages with the secret key.
Retrieving secret keys from shared keystores
You configure symmetric encryption by adding the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide.
- Nodes install the secret key from a keystore on the Data Grid classpath at startup.
- Nodes join clusters, encrypting and decrypting messages with the secret key.
Comparison of asymmetric and symmetric encryption
ASYM_ENCRYPT with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.
SYM_ENCRYPT, on the other hand, is faster than ASYM_ENCRYPT because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.
10.8.2. Configuring Cluster Transport with Asymmetric Encryption Copy linkLink copied to clipboard!
Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages.
Procedure
- Create a keystore with certificate chains that enable Data Grid to verify node identity.
Place the keystore on the classpath for each node in the cluster.
For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example:
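A sketch of such a stack; the keystore values are illustrative and the protocol attributes follow JGroups:
<infinispan>
   <jgroups>
      <stack name="encrypt-tcp" extends="tcp">
         <!-- Exchanges keys over TLS using the node keystore. -->
         <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks"
                           keystore_password="changeit"
                           stack.combine="INSERT_AFTER"
                           stack.position="VERIFY_SUSPECT"/>
         <!-- Generates and distributes the cluster secret key. -->
         <ASYM_ENCRYPT asym_keylength="2048"
                       asym_algorithm="RSA"
                       change_key_on_coord_leave="false"
                       change_key_on_leave="false"
                       use_external_key_exchange="true"
                       stack.combine="INSERT_AFTER"
                       stack.position="SSL_KEY_EXCHANGE"/>
      </stack>
   </jgroups>
   <cache-container default-cache="secured">
      <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"/>
   </cache-container>
</infinispan>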
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example ASYM_ENCRYPT configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.
10.8.3. Configuring Cluster Transport with Symmetric Encryption Copy linkLink copied to clipboard!
Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide.
Procedure
- Create a keystore that contains a secret key.
Place the keystore on the classpath for each node in the cluster.
For Data Grid Server, you put the keystore in the $RHDG_HOME directory.
- Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration, as in the following example:
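A sketch of such a stack; the keystore values are illustrative:
<infinispan>
   <jgroups>
      <stack name="encrypt-tcp" extends="tcp">
         <!-- Reads the shared secret key from a keystore on the classpath. -->
         <SYM_ENCRYPT keystore_name="myKeystore.p12"
                      keystore_type="PKCS12"
                      store_password="changeit"
                      key_password="changeit"
                      alias="myKey"
                      stack.combine="INSERT_AFTER"
                      stack.position="VERIFY_SUSPECT"/>
      </stack>
   </jgroups>
   <cache-container default-cache="secured">
      <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp"/>
   </cache-container>
</infinispan>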
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use SYM_ENCRYPT and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example SYM_ENCRYPT configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.
10.9. TCP and UDP Ports for Cluster Traffic Copy linkLink copied to clipboard!
Data Grid uses the following ports for cluster transport messages:
| Default Port | Protocol | Description |
|---|---|---|
| 7800 | TCP/UDP | JGroups cluster bind port |
| 46655 | UDP | JGroups multicast |
Cross-Site Replication
Data Grid uses the following ports for the JGroups RELAY2 protocol:
7900- For Data Grid clusters running on OpenShift.
7800- If using UDP for traffic between nodes and TCP for traffic between clusters.
7801- If using TCP for traffic between nodes and TCP for traffic between clusters.
Chapter 11. Remotely Creating Data Grid Caches Copy linkLink copied to clipboard!
Add caches to Data Grid Server so you can store data.
11.1. Cache Configuration with Data Grid Server Copy linkLink copied to clipboard!
Caches configure the data container on Data Grid Server.
You create caches at run-time by adding definitions based on org.infinispan templates or Data Grid configuration through the console, the Command Line Interface (CLI), the Hot Rod endpoint, or the REST endpoint.
When you create caches at run-time, Data Grid Server replicates your cache definitions across the cluster.
Configuration that you declare directly in infinispan.xml is not automatically synchronized across Data Grid clusters. In this case you should use configuration management tooling, such as Ansible or Chef, to ensure that configuration is propagated to all nodes in your cluster.
11.2. Default Cache Manager Copy linkLink copied to clipboard!
Data Grid Server provides a default Cache Manager configuration. When you start Data Grid Server, it instantiates the Cache Manager so you can remotely create caches at run-time.
Default Cache Manager
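A minimal sketch of such a declaration, assuming the cache-container schema in infinispan.xml; attribute values are illustrative:
<!-- Declares the default Cache Manager with cluster transport enabled. -->
<cache-container name="default" statistics="true">
   <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}"/>
</cache-container>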
Examining the Cache Manager
After you start Data Grid Server and add user credentials, you can access the default Cache Manager through the Command Line Interface (CLI) or REST endpoint as follows:
CLI: Use the describe command in the default container.
[//containers/default]> describe
REST: Navigate to
<server_hostname>:11222/rest/v2/cache-managers/default/in any browser.
11.3. Creating Caches with the Data Grid Console Copy linkLink copied to clipboard!
Dynamically add caches from templates or configuration files through the Data Grid console.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Navigate to <server_hostname>:11222/console/ in any browser.
- Log in to the console.
- Open the Data Container view.
- Select Create Cache and then add a cache from a template or with Data Grid configuration in XML or JSON format.
- Return to the Data Container view and verify your Data Grid cache.
11.4. Creating Caches with the Data Grid Command Line Interface (CLI) Copy linkLink copied to clipboard!
Use the Data Grid CLI to add caches from templates or configuration files in XML or JSON format.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Create a CLI connection to Data Grid.
Add cache definitions with the create cache command.
Add a cache definition from an XML or JSON file with the --file option.
[//containers/default]> create cache --file=configuration.xml mycache
Add a cache definition from a template with the --template option.
[//containers/default]> create cache --template=org.infinispan.DIST_SYNC mycache
Tip: Press the tab key after the --template= argument to list available cache templates.
Verify the cache exists with the ls command.
[//containers/default]> ls caches
mycache
Retrieve the cache configuration with the describe command.
[//containers/default]> describe caches/mycache
11.5. Creating Remote Caches with Hot Rod Clients Copy linkLink copied to clipboard!
When Hot Rod Java clients attempt to access caches that do not exist, remoteCacheManager.getCache("myCache") invocations return null. To avoid this scenario, you can configure Hot Rod clients to create caches on first access from a cache configuration.
Procedure
- Use the remoteCache() method in the ConfigurationBuilder or use the configuration and configuration_uri properties in hotrod-client.properties.
ConfigurationBuilder
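A minimal sketch with the Hot Rod client ConfigurationBuilder; the server address, cache names, and file path are illustrative:
import java.net.URI;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);
// Creates the cache from an inline XML definition on first access.
builder.remoteCache("another-cache")
       .configuration("<distributed-cache name=\"another-cache\"/>");
// Creates the cache from an external configuration file on first access.
builder.remoteCache("[my.other.cache]")
       .configurationURI(URI.create("file:///path/to/infinispan.xml"));
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());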
hotrod-client.properties
infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\"another-cache\"/>
infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml
When using hotrod-client.properties with cache names that contain the . character, you must enclose the cache name in square brackets as in the preceding example.
You can also create remote caches through the RemoteCacheManager API in other ways, such as the following example that adds a cache configuration with the XMLStringConfiguration() method and then calls the getOrCreateCache() method.
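A sketch of that approach; the cache name and XML string are illustrative:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;

String cacheConfig = "<distributed-cache name=\"another-cache\" mode=\"SYNC\"/>";
RemoteCacheManager cacheManager = new RemoteCacheManager();
// Creates the cache from the XML string if it does not already exist.
RemoteCache<String, String> cache = cacheManager.administration()
      .getOrCreateCache("another-cache", new XMLStringConfiguration(cacheConfig));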
However, Data Grid does not recommend this approach because it can be more difficult to ensure XML validity and it is generally a more cumbersome way to create caches. If you are creating complex cache configurations, you should save them to separate files in your project and reference them in your Hot Rod client configuration.
Hot Rod code examples
Try some Data Grid code tutorials that show you how to create remote caches in different ways with the Hot Rod Java client.
Visit Data Grid code examples.
11.6. Creating Data Grid Caches with HTTP Clients Copy linkLink copied to clipboard!
Add cache definitions to Data Grid servers through the REST endpoint with any suitable HTTP client.
Prerequisites
Create a user and start at least one Data Grid server instance.
Procedure
- Create caches with POST requests to /rest/v2/caches/$cacheName.
Use XML or JSON configuration by including it in the request payload.
POST /rest/v2/caches/mycache
Use the ?template= parameter to create caches from org.infinispan templates.
POST /rest/v2/caches/mycache?template=org.infinispan.DIST_SYNC
11.7. Cache Configuration Copy linkLink copied to clipboard!
You can provide cache configuration in XML or JSON format.
XML
<distributed-cache name="myCache" mode="SYNC">
<encoding media-type="application/x-protostream"/>
<memory max-count="1000000" when-full="REMOVE"/>
</distributed-cache>
JSON
Cache configuration in JSON format must follow the structure of an XML configuration:
- XML elements become JSON objects.
- XML attributes become JSON fields.
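For example, a sketch of the JSON equivalent of the preceding XML configuration under that mapping:
{
  "distributed-cache": {
    "name": "myCache",
    "mode": "SYNC",
    "encoding": {
      "media-type": "application/x-protostream"
    },
    "memory": {
      "max-count": "1000000",
      "when-full": "REMOVE"
    }
  }
}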
Chapter 12. Configuring Data Grid Server Datasources Copy linkLink copied to clipboard!
Create managed datasources to optimize connection pooling and performance for database connections.
You can specify database connection properties as part of a JDBC cache store configuration. However, you must do this for each cache definition, which duplicates configuration and wastes resources by creating multiple distinct connection pools.
By using shared, managed datasources, you centralize connection configuration and pooling for more efficient usage.
12.1. Datasource Configuration for JDBC Cache Stores Copy linkLink copied to clipboard!
Data Grid server configuration for datasources is composed of two sections:
- A connection factory that defines how to connect to the database.
- A connection pool that defines how to pool and reuse connections.
Connection pools can be tuned using the following parameters:
- initial-size: Initial number of connections the pool should hold.
- max-size: Maximum number of connections in the pool.
- min-size: Minimum number of connections the pool should hold.
- blocking-timeout: Maximum time in milliseconds to block while waiting for a connection before throwing an exception. Note that this never throws an exception while a new connection is being created, even if creating it takes an inordinately long time. The default is 0, meaning that a call waits indefinitely.
- background-validation: Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled.
- validate-on-acquisition: Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled.
- idle-removal: Time in minutes a connection has to be idle before it can be removed.
- leak-detection: Time in milliseconds a connection has to be held before a leak warning is logged.
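A minimal sketch of a server datasource definition, assuming the data-sources schema in Data Grid Server configuration and a PostgreSQL JDBC driver; names and values are illustrative:
<data-sources>
   <!-- Defines a datasource bound to the JNDI name jdbc/postgres. -->
   <data-source name="my-datasource" jndi-name="jdbc/postgres" statistics="true">
      <connection-factory driver="org.postgresql.Driver"
                          url="jdbc:postgresql://localhost:5432/mydb"
                          username="dbuser"
                          password="changeme"/>
      <connection-pool initial-size="1"
                       min-size="3"
                       max-size="10"
                       blocking-timeout="1000"
                       background-validation="1000"
                       idle-removal="1"
                       leak-detection="10000"/>
   </data-source>
</data-sources>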
12.2. Using Datasources in JDBC Cache Stores Copy linkLink copied to clipboard!
Use a shared, managed datasource in your JDBC cache store configuration instead of specifying individual connection properties for each cache definition.
Prerequisites
Create a managed datasource for JDBC cache stores in your Data Grid server configuration.
Procedure
- Reference the JNDI name of the datasource in the JDBC cache store configuration of your cache configuration, as in the following example:
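A sketch, assuming the JDBC string-keyed store schema and the jdbc/postgres datasource from the previous section; table and column names are illustrative:
<distributed-cache name="myCache">
   <persistence>
      <string-keyed-jdbc-store dialect="POSTGRES">
         <!-- References the managed datasource by its JNDI name. -->
         <data-source jndi-url="jdbc/postgres"/>
         <string-keyed-table prefix="ISPN_STRING_TABLE" create-on-start="true">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BYTEA"/>
            <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         </string-keyed-table>
      </string-keyed-jdbc-store>
   </persistence>
</distributed-cache>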
12.3. Testing Data Sources Copy linkLink copied to clipboard!
Verify that connections to data sources are functioning correctly with the CLI.
Procedure
Start the CLI.
$ bin/cli.sh
[disconnected]>
[//containers/default]> server datasource ls
Test a data source connection.
[//containers/default]> server datasource test my-datasource
Chapter 13. Remotely Executing Server-Side Tasks Copy linkLink copied to clipboard!
Define and add tasks to Data Grid servers that you can invoke from the Data Grid command line interface, REST API, or from Hot Rod clients.
You can implement tasks as custom Java classes or define scripts in languages such as JavaScript.
13.1. Creating Server Tasks Copy linkLink copied to clipboard!
Create custom task implementations and add them to Data Grid servers.
13.1.1. Server Tasks Copy linkLink copied to clipboard!
Data Grid server tasks are classes that implement the org.infinispan.tasks.ServerTask interface and generally include the following method calls:
setTaskContext()- Allows access to execution context information including task parameters, cache references on which tasks are executed, and so on. In most cases, implementations store this information locally and use it when tasks are actually executed.
getName()- Returns unique names for tasks. Clients invoke tasks with these names.
getExecutionMode() - Returns the execution mode for tasks.
- TaskExecutionMode.ONE_NODE - Only the node that handles the request executes the task, although tasks can still invoke clustered operations. For example, server tasks that invoke stream processing need to be executed on a single node because stream processing is itself distributed to all nodes.
- TaskExecutionMode.ALL_NODES - Data Grid uses clustered executors to run the task across nodes.
call() - Computes a result. This method is defined in the java.util.concurrent.Callable interface and is invoked when server tasks run.
Server task implementations must adhere to service loader pattern requirements. For example, implementations must have a zero-argument constructor.
The following HelloTask class implementation provides an example task that has one parameter:
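A sketch of such an implementation; the task name and default greeting are illustrative:
package example;

import java.util.Map;
import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;

public class HelloTask implements ServerTask<String> {

   private TaskContext ctx;

   @Override
   public void setTaskContext(TaskContext ctx) {
      // Store the context so call() can read the task parameters.
      this.ctx = ctx;
   }

   @Override
   public String call() throws Exception {
      // Reads the "greetee" parameter that clients pass when invoking the task.
      Map<String, Object> parameters = ctx.getParameters().orElse(Map.of());
      Object greetee = parameters.get("greetee");
      return greetee == null ? "Hello world" : "Hello " + greetee;
   }

   @Override
   public String getName() {
      return "hello-task";
   }
}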
13.1.2. Deploying Server Tasks to Data Grid Servers Copy linkLink copied to clipboard!
Add your custom server task classes to Data Grid servers.
Prerequisites
Stop any running Data Grid servers. Data Grid does not support runtime deployment of custom classes.
Procedure
Add a META-INF/services/org.infinispan.tasks.ServerTask file that contains the fully qualified names of server tasks, for example:
example.HelloTask
- Package your server task implementation in a JAR file.
- Copy the JAR file to the $RHDG_HOME/server/lib directory of your Data Grid server.
- Add your classes to the deserialization allow list in your Data Grid configuration. Alternatively, set the allow list using system properties.
13.2. Creating Server Scripts Copy linkLink copied to clipboard!
Create custom scripts and add them to Data Grid servers.
13.2.1. Server Scripts Copy linkLink copied to clipboard!
Data Grid server scripting is based on the javax.script API and is compatible with any JVM-based ScriptEngine implementation.
Hello World Script Example
The following is a simple example that runs on a single Data Grid server, has one parameter, and uses JavaScript:
// mode=local,language=javascript,parameters=[greetee]
"Hello " + greetee
When you run the preceding script, you pass a value for the greetee parameter and Data Grid returns "Hello ${value}".
13.2.1.1. Script Metadata Copy linkLink copied to clipboard!
Metadata provides additional information about scripts that Data Grid servers use when running scripts.
Script metadata are property=value pairs that you add to comments in the first lines of scripts, such as the following example:
// name=test, language=javascript
// mode=local, parameters=[a,b,c]
- Use comment styles that match the scripting language (//, ;;, #).
- Separate property=value pairs with commas.
- Separate values with single (') or double (") quote characters.
| Property | Description |
|---|---|
| mode | Defines the execution mode and has the following values: local, which runs the script only on the node that handles the request, and distributed, which uses clustered executors to run the script across nodes. |
| language | Specifies the ScriptEngine that executes the script. |
| extension | Specifies filename extensions as an alternative method to set the ScriptEngine. |
| role | Specifies roles that users must have to execute scripts. |
| parameters | Specifies an array of valid parameter names for this script. Invocations which specify parameters not included in this list cause exceptions. |
| datatype | Optionally sets the MediaType (MIME type) for storing data as well as parameter and return values. This property is useful for remote clients that support particular data formats only. Currently you can set only text/plain. |
13.2.1.2. Script Bindings Copy linkLink copied to clipboard!
Data Grid exposes internal objects as bindings for script execution.
| Binding | Description |
|---|---|
| cache | Specifies the cache against which the script is run. |
| marshaller | Specifies the marshaller to use for serializing data to the cache. |
| cacheManager | Specifies the EmbeddedCacheManager for the cache. |
| scriptMgr | Specifies the instance of the script manager that runs the script. You can use this binding to run other scripts from a script. |
13.2.1.3. Script Parameters Copy linkLink copied to clipboard!
Data Grid lets you pass named parameters as bindings for running scripts.
Parameters are name,value pairs, where name is a string and value is any value that the marshaller can interpret.
The following example script has two parameters, multiplicand and multiplier. The script takes the value of multiplicand and multiplies it with the value of multiplier.
// mode=local,language=javascript
multiplicand * multiplier
When you run the preceding script, Data Grid responds with the result of the expression evaluation.
13.2.2. Adding Scripts to Data Grid Servers Copy linkLink copied to clipboard!
Use the command line interface to add scripts to Data Grid servers.
Prerequisites
Data Grid Server stores scripts in the ___script_cache cache. If you enable cache authorization, users must have CREATE permissions to add to ___script_cache.
Assign users the deployer role at minimum if you use default authorization settings.
Procedure
Define scripts as required.
For example, create a file named multiplication.js that runs on a single Data Grid server, has two parameters, and uses JavaScript to multiply a given value:
// mode=local,language=javascript
multiplicand * multiplier
- Create a CLI connection to Data Grid.
Use the task command to upload scripts, as in the following example:
[//containers/default]> task upload --file=multiplication.js multiplication
Verify that your scripts are available.
[//containers/default]> ls tasks
multiplication
13.2.3. Programmatically Creating Scripts Copy linkLink copied to clipboard!
Add scripts with the Hot Rod RemoteCache interface as in the following example:
RemoteCache<String, String> scriptCache = cacheManager.getCache("___script_cache");
scriptCache.put("multiplication.js",
"// mode=local,language=javascript\n" +
"multiplicand * multiplier\n");
13.3. Running Server-Side Tasks and Scripts Copy linkLink copied to clipboard!
Execute tasks and custom scripts on Data Grid servers.
13.3.1. Running Tasks and Scripts Copy linkLink copied to clipboard!
Use the command line interface to run tasks and scripts on Data Grid clusters.
Procedure
- Create a CLI connection to Data Grid.
Use the task command to run tasks and scripts, as in the following examples:
Execute the multiplication script uploaded earlier and specify two parameters:
[//containers/default]> task exec multiplication -Pmultiplicand=10 -Pmultiplier=20
200.0
Execute a task named @@cache@names to retrieve a list of all available caches:
[//containers/default]> task exec @@cache@names
["___protobuf_metadata","mycache","___script_cache"]
13.3.2. Programmatically Running Scripts Copy linkLink copied to clipboard!
Call the execute() method to run scripts with the Hot Rod RemoteCache interface, as in the following example:
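A sketch, assuming a running server with the multiplication.js script uploaded and a cache named mycache:
import java.util.HashMap;
import java.util.Map;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

RemoteCacheManager cacheManager = new RemoteCacheManager();
RemoteCache<String, String> cache = cacheManager.getCache("mycache");
// Binds the two named parameters that multiplication.js declares.
Map<String, Object> params = new HashMap<>();
params.put("multiplicand", 10);
params.put("multiplier", 20);
// Runs the script on the server and returns the evaluation result.
Object result = cache.execute("multiplication.js", params);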
13.3.3. Programmatically Running Tasks Copy linkLink copied to clipboard!
Call the execute() method to run tasks with the Hot Rod RemoteCache interface, as in the following example:
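A sketch, assuming the HelloTask class from section 13.1.1 is deployed and a cache named mycache exists:
import java.util.Map;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

RemoteCacheManager cacheManager = new RemoteCacheManager();
RemoteCache<String, String> cache = cacheManager.getCache("mycache");
// Invokes the server task by the name that its getName() method returns.
String greeting = cache.execute("hello-task", Map.of("greetee", "world"));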
Chapter 14. Enabling and Customizing Logging Copy linkLink copied to clipboard!
Data Grid uses Apache Log4j 2 to provide configurable logging mechanisms that capture details about the environment and record cache operations for troubleshooting purposes and root cause analysis.
14.1. Server Logs Copy linkLink copied to clipboard!
Data Grid writes server logs to the following files in the $RHDG_HOME/server/log directory:
server.log - Messages in human readable format, including boot logs that relate to the server startup. Data Grid creates this file when you start the server.
server.log.json - Messages in JSON format that let you parse and analyze Data Grid logs. Data Grid creates this file when you enable the JSON-FILE appender.
14.1.1. Configuring Server Logs Copy linkLink copied to clipboard!
Data Grid uses Apache Log4j technology to write server log messages. You can configure server logs in the log4j2.xml file.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change server logging as appropriate.
- Save and close log4j2.xml.
14.1.2. Log Levels Copy linkLink copied to clipboard!
Log levels indicate the nature and severity of messages.
| Log level | Description |
|---|---|
| TRACE | Fine-grained debug messages, capturing the flow of individual requests through the application. |
| DEBUG | Messages for general debugging, not related to an individual request. |
| INFO | Messages about the overall progress of applications, including lifecycle events. |
| WARN | Events that can lead to error or degrade performance. |
| ERROR | Error conditions that might prevent operations or activities from being successful but do not prevent applications from running. |
| FATAL | Events that could cause critical service failure and application shutdown. |
In addition to the levels of individual messages presented above, the configuration allows two more values: ALL to include all messages, and OFF to exclude all messages.
14.1.3. Data Grid Log Categories Copy linkLink copied to clipboard!
Data Grid provides categories for INFO, WARN, ERROR, FATAL level messages that organize logs by functional area.
org.infinispan.CLUSTER- Messages specific to Data Grid clustering that include state transfer operations, rebalancing events, partitioning, and so on.
org.infinispan.CONFIG- Messages specific to Data Grid configuration.
org.infinispan.CONTAINER- Messages specific to the data container that include expiration and eviction operations, cache listener notifications, transactions, and so on.
org.infinispan.PERSISTENCE- Messages specific to cache loaders and stores.
org.infinispan.SECURITY- Messages specific to Data Grid security.
org.infinispan.SERVER- Messages specific to Data Grid servers.
org.infinispan.XSITE- Messages specific to cross-site replication operations.
14.1.4. Log Appenders Copy linkLink copied to clipboard!
Log appenders define how Data Grid records log messages.
- CONSOLE
-
Write log messages to the host standard out (
stdout) or standard error (stderr) stream.
Uses theorg.apache.logging.log4j.core.appender.ConsoleAppenderclass by default. - FILE
-
Write log messages to a file.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppenderclass by default. - JSON-FILE
-
Write log messages to a file in JSON format.
Uses theorg.apache.logging.log4j.core.appender.RollingFileAppenderclass by default.
14.1.5. Log Patterns Copy linkLink copied to clipboard!
The CONSOLE and FILE appenders use a PatternLayout to format the log messages according to a pattern.
An example is the default pattern in the FILE appender:
%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p (%t) [%c{1}] %m%throwable%n
- %d{yyyy-MM-dd HH:mm:ss,SSS} adds the current time and date.
- %-5p specifies the log level, left-aligned and padded to five characters.
- %t adds the name of the current thread.
- %c{1} adds the short name of the logging category.
- %m adds the log message.
- %throwable adds the exception stack trace.
- %n adds a new line.
Patterns are fully described in the PatternLayout documentation.
14.1.6. Enabling and Configuring the JSON Log Handler Copy linkLink copied to clipboard!
Data Grid provides a JSON log handler to write messages in JSON format.
Prerequisites
-
Stop Data Grid Server if it is running.
You cannot dynamically enable log handlers.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Uncomment the JSON-FILE appender and comment out the FILE appender:
<!--<AppenderRef ref="FILE"/>-->
<AppenderRef ref="JSON-FILE"/>
- Optionally configure the JSON appender and JSON layout as required.
- Save and close log4j2.xml.
When you start Data Grid, it writes each log message as a JSON map in the following file: $RHDG_HOME/server/log/server.log.json
14.2. Access Logs Copy linkLink copied to clipboard!
Access logs record all inbound client requests for Hot Rod and REST endpoints to files in the $RHDG_HOME/server/log directory.
org.infinispan.HOTROD_ACCESS_LOG-
Logging category that writes Hot Rod access messages to a
hotrod-access.logfile. org.infinispan.REST_ACCESS_LOG-
Logging category that writes REST access messages to a
rest-access.logfile.
14.2.1. Enabling Access Logs Copy linkLink copied to clipboard!
To record Hot Rod and REST endpoint access messages, you need to enable the logging categories in log4j2.xml.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change the level for the org.infinispan.HOTROD_ACCESS_LOG and org.infinispan.REST_ACCESS_LOG logging categories to TRACE.
- Save and close log4j2.xml.
<Logger name="org.infinispan.HOTROD_ACCESS_LOG" additivity="false" level="TRACE"> <AppenderRef ref="HR-ACCESS-FILE"/> </Logger>
<Logger name="org.infinispan.HOTROD_ACCESS_LOG" additivity="false" level="TRACE">
<AppenderRef ref="HR-ACCESS-FILE"/>
</Logger>
14.2.2. Access Log Properties Copy linkLink copied to clipboard!
The default format for access logs is as follows:
%X{address} %X{user} [%d{dd/MMM/yyyy:HH:mm:ss Z}] "%X{method} %m
%X{protocol}" %X{status} %X{requestSize} %X{responseSize} %X{duration}%n
The preceding format creates log entries such as the following:
127.0.0.1 - [DD/MM/YYYY:HH:MM:SS +0000] "PUT /rest/v2/caches/default/key HTTP/1.1" 404 5 77 10
Logging properties use the %X{name} notation and let you modify the format of access logs. The following are the default logging properties:
| Property | Description |
|---|---|
| address | Either the X-Forwarded-For header or the client IP address. |
| user | Principal name, if using authentication. |
| method | Method used. PUT, GET, and so on. |
| protocol | Protocol used. HTTP/1.1, HTTP/2, HOTROD/2.9, and so on. |
| status | An HTTP status code for the REST endpoint. OK or an exception for the Hot Rod endpoint. |
| requestSize | Size, in bytes, of the request. |
| responseSize | Size, in bytes, of the response. |
| duration | Number of milliseconds that the server took to handle the request. |
Use the header name prefixed with h: to log headers that were included in requests; for example, %X{h:User-Agent}.
14.3. Audit Logs Copy linkLink copied to clipboard!
Audit logs let you track changes to your Data Grid environment so you know when changes occur and which users make them. Enable and configure audit logging to record server configuration events and administrative operations.
org.infinispan.AUDIT-
Logging category that writes security audit messages to an
audit.logfile in the$RHDG_HOME/server/logdirectory.
14.3.1. Enabling Audit Logging Copy linkLink copied to clipboard!
To record security audit messages, you need to enable the logging category in log4j2.xml.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Change the level for the org.infinispan.AUDIT logging category to INFO.
- Save and close log4j2.xml.
<!-- Set to INFO to enable audit logging -->
<Logger name="org.infinispan.AUDIT" additivity="false" level="INFO">
<AppenderRef ref="AUDIT-FILE"/>
</Logger>
14.3.2. Configuring Audit Logging Appenders Copy linkLink copied to clipboard!
Apache Log4j provides different appenders that you can use to send audit messages to a destination other than the default log file. For instance, if you want to send audit logs to a syslog daemon, JDBC database, or Apache Kafka server, you can configure an appender in log4j2.xml.
Procedure
- Open $RHDG_HOME/server/conf/log4j2.xml with any text editor.
- Comment or remove the default AUDIT-FILE rolling file appender.
<!--RollingFile name="AUDIT-FILE" ... </RollingFile-->
- Add the desired logging appender for audit messages.
For example, you could add a logging appender for a Kafka server as follows:
<Kafka name="AUDIT-KAFKA" topic="audit"> <PatternLayout pattern="%date %message"/> <Property name="bootstrap.servers">localhost:9092</Property> </Kafka>
<Kafka name="AUDIT-KAFKA" topic="audit"> <PatternLayout pattern="%date %message"/> <Property name="bootstrap.servers">localhost:9092</Property> </Kafka>Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Save and close
log4j2.xml.
14.3.3. Using Custom Audit Logging Implementations
You can create custom implementations of the org.infinispan.security.AuditLogger API if configuring Log4j appenders does not meet your needs.
Prerequisites
- Implement org.infinispan.security.AuditLogger as required and package it in a JAR file.
Procedure
- Add your JAR to the server/lib directory in your Data Grid Server installation.
- Specify the fully qualified class name of your custom audit logger as the value for the audit-logger attribute on the authorization element in your cache container security configuration.
For example, the following configuration defines my.package.CustomAuditLogger as the class for logging audit messages:
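A sketch of such a configuration; the container name is a placeholder:
<!-- The audit-logger attribute takes the fully qualified class name. -->
<cache-container name="default">
   <security>
      <authorization audit-logger="my.package.CustomAuditLogger"/>
   </security>
</cache-container>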
Chapter 15. Configuring Data Grid Server Statistics
Enable statistics that Data Grid exports to a metrics endpoint or via JMX MBeans. Registering JMX MBeans also exposes management operations that you can perform remotely.
15.1. Enabling Data Grid Statistics
Configure Data Grid to export statistics for Cache Managers and caches.
Data Grid Server enables Cache Manager statistics by default. You must explicitly enable statistics for your caches.
Procedure
Modify your configuration to enable Data Grid statistics in one of the following ways:
- Declarative: Add the statistics="true" attribute.
- Programmatic: Call the .statistics() method.
Declarative
<!-- Enables statistics for the Cache Manager. -->
<cache-container statistics="true">
<!-- Enables statistics for the named cache. -->
<local-cache name="mycache" statistics="true"/>
</cache-container>
Programmatic
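A minimal sketch with the Infinispan configuration builder API:
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;

// Enables statistics for the Cache Manager.
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
   .cacheContainer().statistics(true)
   .build();

// Enables statistics for the named cache.
Configuration config = new ConfigurationBuilder()
   .statistics().enable()
   .build();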
15.2. Configuring Data Grid Metrics
Configure Data Grid to export gauges and histograms via the metrics endpoint.
Procedure
- Turn gauges and histograms on or off in the metrics configuration as appropriate.
Declarative
<!-- Computes and collects statistics for the Cache Manager. -->
<cache-container statistics="true">
<!-- Exports collected statistics as gauge and histogram metrics. -->
<metrics gauges="true" histograms="true" />
</cache-container>
Programmatic
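A minimal sketch with the Infinispan configuration builder API:
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;

GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
   // Computes and collects statistics for the Cache Manager.
   .cacheContainer().statistics(true)
   // Exports collected statistics as gauge and histogram metrics.
   .metrics().gauges(true).histograms(true)
   .build();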
15.3. Collecting Data Grid Metrics
Collect Data Grid metrics with monitoring tools such as Prometheus.
Prerequisites
- Enable statistics. If you do not enable statistics, Data Grid provides 0 and -1 values for metrics.
- Optionally enable histograms. By default, Data Grid generates gauges but not histograms.
Procedure
Get metrics in Prometheus (OpenMetrics) format:
$ curl -v http://localhost:11222/metrics
Get metrics in MicroProfile JSON format:
$ curl --header "Accept: application/json" http://localhost:11222/metrics
Next steps
Configure monitoring applications to collect Data Grid metrics. For example, add the following to prometheus.yml:
static_configs:
- targets: ['localhost:11222']
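The snippet above shows only the scrape targets. A fuller sketch, assuming the default /metrics path and the credentials you created for the server (remove basic_auth if your metrics endpoint is not authenticated):
# prometheus.yml sketch; the job name and credentials are illustrative.
scrape_configs:
  - job_name: "datagrid"
    metrics_path: "/metrics"
    basic_auth:
      username: "username"
      password: "changeme"
    static_configs:
      - targets: ["localhost:11222"]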
Reference
- Prometheus Configuration
- Enabling Data Grid Statistics
15.4. Configuring Data Grid to Register JMX MBeans
Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must enable statistics separately from JMX; otherwise, Data Grid provides 0 values for all statistic attributes.
Procedure
Modify your cache container configuration to enable JMX in one of the following ways:
- Declarative: Add the <jmx enabled="true" /> element to the cache container.
- Programmatic: Call the .jmx().enable() method.
Declarative
<cache-container>
<jmx enabled="true" />
</cache-container>
Programmatic
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
.jmx().enable()
.build();
15.4.1. Data Grid MBeans
Data Grid exposes JMX MBeans that represent manageable resources.
org.infinispan:type=Cache - Attributes and operations available for cache instances.
org.infinispan:type=CacheManager - Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics.
For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation.
Chapter 16. Retrieving Health Statistics
Monitor the health of your Data Grid clusters in the following ways:
- Programmatically with embeddedCacheManager.getHealth() method calls (see the embedded sketch after this list).
- JMX MBeans
- Data Grid REST Server
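For embedded deployments, a minimal sketch of the programmatic approach using the org.infinispan.health API:
import org.infinispan.health.ClusterHealth;
import org.infinispan.health.Health;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class EmbeddedHealthCheck {
    public static void main(String[] args) throws Exception {
        EmbeddedCacheManager cacheManager = new DefaultCacheManager();
        try {
            Health health = cacheManager.getHealth();
            ClusterHealth clusterHealth = health.getClusterHealth();
            // Prints the cluster name and overall status, for example HEALTHY.
            System.out.println(clusterHealth.getClusterName() + ": "
                    + clusterHealth.getHealthStatus());
        } finally {
            cacheManager.stop();
        }
    }
}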
16.1. Accessing the Health API via JMX
Retrieve Data Grid cluster health statistics via JMX.
Procedure
- Connect to the Data Grid server using any JMX-capable tool, such as JConsole, and navigate to the following object:
org.infinispan:type=CacheManager,name="default",component=CacheContainerHealth
- Select available MBeans to retrieve cluster health statistics.
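If you prefer a programmatic client to JConsole, the following sketch reads one attribute of that MBean. It assumes remote JMX is enabled on the server JVM through the standard com.sun.management.jmxremote.* system properties on port 9999:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HealthJmxClient {
    public static void main(String[] args) throws Exception {
        // Assumption: standard RMI-based remote JMX on port 9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName health = new ObjectName(
                    "org.infinispan:type=CacheManager,name=\"default\",component=CacheContainerHealth");
            // numberOfNodes is one of the attributes this MBean exposes.
            System.out.println("numberOfNodes = "
                    + connection.getAttribute(health, "numberOfNodes"));
        }
    }
}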
16.2. Accessing the Health API via REST
Get Data Grid cluster health via the REST API.
Procedure
- Invoke a GET request to retrieve cluster health:
GET /rest/v2/cache-managers/{cacheManagerName}/health
Data Grid responds with a JSON document such as the following:
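An illustrative example; field names follow the health REST API and the values are samples:
{
  "cluster_health": {
    "cluster_name": "ISPN",
    "health_status": "HEALTHY",
    "number_of_nodes": 2,
    "node_names": ["NodeA", "NodeB"]
  },
  "cache_health": [
    {
      "status": "HEALTHY",
      "cache_name": "mycache"
    }
  ]
}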
Get cache manager status as follows:
GET /rest/v2/cache-managers/{cacheManagerName}/health/status
Reference
See the REST v2 (version 2) API documentation for more information.
Chapter 17. Performing Rolling Upgrades for Data Grid Servers
Perform rolling upgrades of your Data Grid clusters to move between versions without downtime or data loss. Rolling upgrades migrate both your Data Grid servers and your data to the target version over Hot Rod.
17.1. Setting Up Target Clusters
Create a cluster that runs the target Data Grid version and uses a remote cache store to load data from the source cluster.
Prerequisites
- Install a Data Grid cluster with the target upgrade version.
- Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and specify port offsets to keep the target and source clusters separate.
Procedure
- Add a RemoteCacheStore on the target cluster for each cache you want to migrate from the source cluster.
Remote cache stores use the Hot Rod protocol to retrieve data from remote Data Grid clusters. When you add the remote cache store to the target cluster, it can lazily load data from the source cluster to handle client requests.
- Switch clients over to the target cluster so it starts handling all requests (see the client configuration sketch after this procedure).
- Update client configuration with the location of the target cluster.
- Restart clients.
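For Hot Rod Java clients configured through hotrod-client.properties, the switch-over can be a one-line change; the host names here are placeholders:
# hotrod-client.properties
infinispan.client.hotrod.server_list = target-node1:11222;target-node2:11222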
17.1.1. Remote Cache Stores for Rolling Upgrades
You must use specific remote cache store configuration to perform rolling upgrades, as follows:
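A sketch of such a store definition; the cache name, host, and port are placeholders:
<persistence passivation="false">
   <remote-store xmlns="urn:infinispan:config:store:remote:12.1"
                 cache="myCache"
                 protocol-version="2.5"
                 hotrod-wrapping="true"
                 raw-values="true"
                 segmented="false">
      <remote-server host="source-host" port="11222"/>
   </remote-store>
</persistence>
Here cache must match the cache name on the source cluster and protocol-version should match the Hot Rod protocol version of the source cluster; hotrod-wrapping and raw-values must be enabled and the store cannot be segmented.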
17.2. Synchronizing Data to Target Clusters
When your target cluster is running and handling client requests using a remote cache store to load data on demand, you can synchronize data from the source cluster to the target cluster.
This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache in your Data Grid configuration.
Procedure
- Start the synchronization operation for each cache in your Data Grid configuration that you want to migrate to the target cluster.
Use the Data Grid REST API and invoke POST requests with the ?action=sync-data parameter. For example, to synchronize data in a cache named "myCache" from a source cluster to a target cluster, do the following (a curl sketch follows this procedure):
POST /rest/v2/caches/myCache?action=sync-data
When the operation completes, Data Grid responds with the total number of entries copied to the target cluster.
Alternatively, you can use JMX by invoking synchronizeData(migratorName=hotrod) on the RollingUpgradeManager MBean.
- Disconnect each node in the target cluster from the source cluster.
For example, to disconnect the "myCache" cache from the source cluster, invoke the following POST request:
POST /rest/v2/caches/myCache?action=disconnect-source
To use JMX, invoke disconnectSource(migratorName=hotrod) on the RollingUpgradeManager MBean.
Next steps
After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
Chapter 18. Troubleshooting Data Grid Servers
Gather diagnostic information about Data Grid server deployments and perform troubleshooting steps to resolve issues.
18.1. Getting Diagnostic Reports for Data Grid Servers
Data Grid servers provide aggregated reports in tar.gz archives that contain diagnostic information about both the Data Grid server and the host. The report provides details about CPU, memory, open files, network sockets and routing, and threads, as well as configuration and log files.
Procedure
- Create a CLI connection to Data Grid.
- Use the server report command to download a tar.gz archive:
[//containers/default]> server report
Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'
- Move the tar.gz file to a suitable location on your filesystem.
- Extract the tar.gz file with any archiving tool.
18.2. Changing Data Grid Server Logging Configuration at Runtime
Modify the logging configuration for Data Grid servers at runtime to temporarily adjust logging to troubleshoot issues and perform root cause analysis.
Modifying the logging configuration through the CLI is a runtime-only operation, which means that changes:
- Are not saved to the log4j2.xml file. Restarting server nodes or the entire cluster resets the logging configuration to the default properties in the log4j2.xml file.
- Apply only to the nodes that are in the cluster when you invoke the CLI command. Nodes that join the cluster after you change the logging configuration use the default properties.
Procedure
- Create a CLI connection to Data Grid.
- Use the logging command to make the required adjustments.
- List all appenders defined on the server:
[//containers/default]> logging list-appenders
The command returns the names of the appenders defined in log4j2.xml, such as HR-ACCESS-FILE and AUDIT-FILE.
- List all logger configurations defined on the server:
[//containers/default]> logging list-loggers
The command returns each logger configuration with its level and appenders.
- Add and modify logger configurations with the set subcommand.
For example, the following command sets the logging level for the org.infinispan package to DEBUG:
[//containers/default]> logging set --level=DEBUG org.infinispan
- Remove existing logger configurations with the remove subcommand.
For example, the following command removes the org.infinispan logger configuration, which means the root configuration is used instead:
[//containers/default]> logging remove org.infinispan
18.3. Resource Statistics
You can inspect server-collected statistics for some of the resources within a Data Grid server using the stats command.
Use the stats command either from the context of a resource that collects statistics (containers, caches) or with a path to such a resource:
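For example, the cache name below is a placeholder and the fields returned depend on which statistics are enabled:
[//containers/default/caches/mycache]> stats
[//containers/default]> stats /containers/default/caches/mycache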
Chapter 19. Reference
19.1. Data Grid Server 8.2.3 Readme
Information about the Data Grid Server 12.1.11.Final-redhat-00001 distribution.
19.1.1. Requirements
Data Grid Server requires JDK 11 or later.
19.1.2. Starting servers
Use the server script to run Data Grid Server instances.
Unix / Linux
$RHDG_HOME/bin/server.sh
Windows
$RHDG_HOME\bin\server.bat
Include the --help or -h option to view command arguments.
19.1.3. Stopping servers
Use the shutdown command with the CLI to perform a graceful shutdown.
Alternatively, enter Ctrl-C from the terminal to interrupt the server process or kill it via the TERM signal.
19.1.4. Configuration
Server configuration extends Data Grid configuration with the following server-specific elements:
cache-container - Defines cache containers for managing cache lifecycles.
endpoints - Enables and configures endpoint connectors for client protocols.
security - Configures endpoint security realms.
socket-bindings - Maps endpoint connectors to interfaces and ports.
The default configuration file is $RHDG_HOME/server/conf/infinispan.xml.
Use different configuration files with the -c argument, as in the following example that starts a server without clustering capabilities:
Unix / Linux
$RHDG_HOME/bin/server.sh -c infinispan-local.xml
Windows
$RHDG_HOME\bin\server.bat -c infinispan-local.xml
19.1.5. Bind address
Data Grid Server binds to the loopback IP address localhost on your network by default.
Use the -b argument to set a different IP address, as in the following example that binds to all network interfaces:
Unix / Linux
$RHDG_HOME/bin/server.sh -b 0.0.0.0
Windows
$RHDG_HOME\bin\server.bat -b 0.0.0.0
19.1.6. Bind port
Data Grid Server listens on port 11222 by default.
Use the -p argument to set an alternative port:
Unix / Linux
$RHDG_HOME/bin/server.sh -p 30000
Windows
$RHDG_HOME\bin\server.bat -p 30000
19.1.7. Clustering address
Data Grid Server configuration defines cluster transport so multiple instances on the same network discover each other and automatically form clusters.
Use the -k argument to change the IP address for cluster traffic:
Unix / Linux
$RHDG_HOME/bin/server.sh -k 192.168.1.100
Windows
$RHDG_HOME\bin\server.bat -k 192.168.1.100
19.1.8. Cluster stacks
JGroups stacks configure the protocols for cluster transport. Data Grid Server uses the tcp stack by default.
Use alternative cluster stacks with the -j argument, as in the following example that uses UDP for cluster transport:
Unix / Linux
$RHDG_HOME/bin/server.sh -j udp
Windows
$RHDG_HOME\bin\server.bat -j udp
19.1.9. Authentication
Data Grid Server requires authentication.
Create a username and password with the CLI as follows:
Unix / Linux
$RHDG_HOME/bin/cli.sh user create username -p "qwer1234!"
Windows
$RHDG_HOME\bin\cli.bat user create username -p "qwer1234!"
19.1.10. Server home directory
Data Grid Server uses infinispan.server.home.path to locate the contents of the server distribution on the host filesystem.
The server home directory, referred to as $RHDG_HOME, contains the following folders:
| Folder | Description |
|---|---|
| bin | Contains scripts to start servers and CLI. |
| boot | Contains JAR files to boot servers. |
| docs | Provides configuration examples, schemas, component licenses, and other resources. |
| lib | Contains JAR files that servers require. |
| server | Provides a root folder for Data Grid Server instances. |
| static | Contains static resources for Data Grid Console. |
19.1.11. Server root directory
Data Grid Server uses infinispan.server.root.path to locate configuration files and data for Data Grid Server instances.
You can create multiple server root folders in the same directory or in different directories and then specify the locations with the -s or --server-root argument, as in the following example:
Unix / Linux
$RHDG_HOME/bin/server.sh -s server2
Windows
$RHDG_HOME\bin\server.bat -s server2
Each server root directory contains the following folders:
├── server
│ ├── conf
│ ├── data
│ ├── lib
│ └── log
| Folder | Description | System property override |
|---|---|---|
| conf | Contains server configuration files. | infinispan.server.config.path |
| data | Contains data files organized by container name. | infinispan.server.data.path |
| lib | Contains server extension files. | infinispan.server.lib.path |
| log | Contains server log files. | infinispan.server.log.path |
19.1.12. Logging
Configure Data Grid Server logging with the log4j2.xml file in the server/conf folder.
Use the --logging-config=<path_to_logfile> argument to use custom paths, as follows:
Unix / Linux
$RHDG_HOME/bin/server.sh --logging-config=/path/to/log4j2.xml
To ensure custom paths take effect, do not use the ~ shortcut.
Windows
$RHDG_HOME\bin\server.bat --logging-config=path\to\log4j2.xml