Data Grid Library Mode
Run Data Grid as an embedded library
Abstract
Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure: flexibility to store different objects as key-value pairs.
- Grid-based data storage: designed to distribute and replicate data across clusters.
- Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: store, retrieve, and query data in the grid from different endpoints.
Data Grid documentation
Documentation for Data Grid is available on the Red Hat customer portal.
Data Grid downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Configuring the Data Grid Maven Repository
Data Grid Java distributions are available from Maven.
You can download the Data Grid Maven repository from the customer portal or pull Data Grid dependencies from the public Red Hat Enterprise Maven repository.
1.1. Downloading the Data Grid Maven Repository
Download and install the Data Grid Maven repository to a local file system, Apache HTTP server, or Maven repository manager if you do not want to use the public Red Hat Enterprise Maven repository.
Procedure
- Log in to the Red Hat customer portal.
- Navigate to the Software Downloads for Data Grid.
- Download the Red Hat Data Grid 8.1 Maven Repository.
- Extract the archived Maven repository to your local file system.
- Open the `README.md` file and follow the appropriate installation instructions.
1.2. Adding Red Hat Maven Repositories
Include the Red Hat GA repository in your Maven build environment to get Data Grid artifacts and dependencies.
Procedure
- Add the Red Hat GA repository to your Maven settings file, typically `~/.m2/settings.xml`, or directly in the `pom.xml` file of your project.
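A minimal sketch of the repository declaration; the repository `id` and `name` are placeholders, and the URL shown is the standard Red Hat GA repository:

```xml
<!-- Declare the Red Hat GA repository in settings.xml or pom.xml. -->
<repositories>
  <repository>
    <id>redhat-ga</id>
    <name>Red Hat GA Repository</name>
    <url>https://maven.repository.redhat.com/ga/</url>
  </repository>
</repositories>
```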
1.3. Configuring Your Data Grid POM
Maven uses configuration files called Project Object Model (POM) files to define projects and manage builds. POM files are in XML format and describe the module and component dependencies, build order, and targets for the resulting project packaging and output.
Procedure
- Open your project `pom.xml` for editing.
- Define the `version.infinispan` property with the correct Data Grid version.
- Include the `infinispan-bom` in a `dependencyManagement` section.

  The Bill of Materials (BOM) controls dependency versions, which avoids version conflicts and means you do not need to set the version for each Data Grid artifact you add as a dependency to your project.
- Save and close `pom.xml`.
The following example shows the Data Grid version and BOM:
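A sketch of the relevant `pom.xml` fragments; the version shown is illustrative (it matches the `infinispan-core` JAR referenced in this guide), so substitute the version from your distribution:

```xml
<properties>
  <!-- Use the Data Grid version from your distribution. -->
  <version.infinispan>11.0.9.Final-redhat-00001</version.infinispan>
</properties>

<dependencyManagement>
  <dependencies>
    <!-- Import the BOM so Data Grid artifact versions are managed for you. -->
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-bom</artifactId>
      <version>${version.infinispan}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```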
Next Steps
Add Data Grid artifacts as dependencies to your pom.xml as required.
Chapter 2. Installing Data Grid in Library Mode
Add Data Grid as an embedded library in your project.
Procedure
- Add the `infinispan-core` artifact as a dependency in your `pom.xml`.
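A sketch of the dependency declaration; no version element is needed when the `infinispan-bom` is imported in a `dependencyManagement` section:

```xml
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-core</artifactId>
</dependency>
```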
Chapter 3. Running Data Grid as an Embedded Library
Learn how to run Data Grid as an embedded data store in your project.
Procedure
- Initialize the default Cache Manager and add a cache definition as follows:
GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.DIST_SYNC);
cacheManager.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE).getOrCreateCache("myCache", builder.build());
The preceding code initializes a default, clustered Cache Manager. Cache Managers contain your cache definitions and control cache lifecycles.
Data Grid does not provide default cache definitions, so after initializing the default Cache Manager you need to add at least one cache instance. This example uses the ConfigurationBuilder class to create a cache definition that uses the distributed, synchronous cache mode. You then call the getOrCreateCache() method, which either creates a cache named "myCache" on all nodes in the cluster or returns it if it already exists.
Next steps
Now that you have a running Cache Manager with a cache created, you can add some more cache definitions, put some data into the cache, or configure Data Grid as needed.
Chapter 4. Setting Up Data Grid Clusters
Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer.
4.1. Getting Started with Default Stacks
Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels.
Data Grid provides preconfigured JGroups stacks for UDP and TCP protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements.
Procedure
- Locate the default JGroups stacks, `default-jgroups-*.xml`, in the `default-configs` directory inside the `infinispan-core-11.0.9.Final-redhat-00001.jar` file.
- Do one of the following:
  - Use the `stack` attribute in your `infinispan.xml` file. For example, `stack="udp"` uses `default-jgroups-udp.xml` for cluster transport.
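A sketch of such an `infinispan.xml`; the cache container and cluster names are placeholders:

```xml
<infinispan>
  <cache-container name="default">
    <!-- Uses default-jgroups-udp.xml for cluster transport. -->
    <transport cluster="my-cluster" stack="udp"/>
  </cache-container>
</infinispan>
```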
  - Use the `addProperty()` method to set the JGroups stack file:

    ```java
    GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport()
            .defaultTransport()
            .clusterName("qa-cluster")
            // Uses the default-jgroups-udp.xml stack for cluster transport.
            .addProperty("configurationFile", "default-jgroups-udp.xml")
            .build();
    ```
Data Grid logs the following message to indicate which stack it uses:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp
Reference
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
4.1.1. Default JGroups Stacks
Learn about default JGroups stacks that configure cluster transport.
| File name | Stack name | Description |
|---|---|---|
| `default-jgroups-udp.xml` | `udp` | Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets. |
| `default-jgroups-tcp.xml` | `tcp` | Uses TCP for transport and the `MPING` protocol, which uses UDP multicast, for discovery. Suitable for smaller clusters (under 100 nodes), typically with distributed caches, because TCP is more efficient than UDP as a point-to-point protocol. |
| `default-jgroups-ec2.xml` | `ec2` | Uses TCP for transport and `NATIVE_S3_PING` for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available. |
| `default-jgroups-kubernetes.xml` | `kubernetes` | Uses TCP for transport and `DNS_PING` for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available. |
| `default-jgroups-google.xml` | `google` | Uses TCP for transport and `GOOGLE_PING2` for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available. |
| `default-jgroups-azure.xml` | `azure` | Uses TCP for transport and `AZURE_PING` for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available. |
4.1.2. TCP and UDP Ports for Cluster Traffic
Data Grid uses the following ports for cluster transport messages:
| Default Port | Protocol | Description |
|---|---|---|
| `7800` | TCP/UDP | JGroups cluster bind port |
| `46655` | UDP | JGroups multicast |
Cross-Site Replication
Data Grid uses the following ports for the JGroups RELAY2 protocol:
- `7900` - For Data Grid clusters running on OpenShift.
- `7800` - If using UDP for traffic between nodes and TCP for traffic between clusters.
- `7801` - If using TCP for traffic between nodes and TCP for traffic between clusters.
4.2. Customizing JGroups Stacks
Adjust and tune properties to create a cluster transport configuration that works for your network requirements.
Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties.
Procedure
- Create a new JGroups stack declaration in your `infinispan.xml` file. For example, create a custom JGroups stack named "my-stack".
- Add the `extends` attribute and specify a JGroups stack to inherit properties from, such as the default TCP stack.
- Use the `stack.combine` attribute to modify properties for protocols configured in the inherited stack.
- Use the `stack.position` attribute to define the location for your custom stack.

  For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack.
- Specify the stack name as the value for the `stack` attribute in the `transport` configuration. This configures Data Grid to use "my-stack" for cluster transport.
- Check Data Grid logs to ensure it uses the stack.
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack
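The steps above can be sketched as follows; the Gossip router address, keystore values, and cluster name are illustrative placeholders:

```xml
<infinispan>
  <jgroups>
    <!-- Inherits properties from the default TCP stack. -->
    <stack name="my-stack" extends="tcp">
      <!-- Replaces MPING discovery with a Gossip router. -->
      <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                 stack.combine="REPLACE"
                 stack.position="MPING"/>
      <!-- Inserts symmetric encryption after the VERIFY_SUSPECT protocol. -->
      <SYM_ENCRYPT keystore_name="mykeystore.p12"
                   keystore_type="PKCS12"
                   store_password="changeit"
                   key_password="changeit"
                   alias="myKey"
                   stack.combine="INSERT_AFTER"
                   stack.position="VERIFY_SUSPECT"/>
    </stack>
  </jgroups>
  <cache-container name="default">
    <!-- Configures Data Grid to use "my-stack" for cluster transport. -->
    <transport cluster="my-cluster" stack="my-stack"/>
  </cache-container>
</infinispan>
```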
Reference
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
4.2.1. Inheritance Attributes
When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending.
- `stack.position` specifies protocols to modify.
- `stack.combine` uses the following values to extend JGroups stacks:

| Value | Description |
|---|---|
| `COMBINE` | Overrides protocol properties. |
| `REPLACE` | Replaces protocols. |
| `INSERT_AFTER` | Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point. |
| `REMOVE` | Removes protocols from the stack. |

Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as `NAKACK2` after the `SYM_ENCRYPT` or `ASYM_ENCRYPT` protocol so that `NAKACK2` is secured.
4.3. Using JGroups System Properties
Pass system properties to Data Grid at startup to tune cluster transport.
Procedure
- Use `-D<property-name>=<property-value>` arguments to set JGroups system properties as required.
For example, set a custom bind port and IP address as follows:
$ java -cp ... -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0
When you embed Data Grid clusters in clustered Red Hat JBoss EAP applications, JGroups system properties can clash or override each other.
For example, if you do not set a unique bind address for either your Data Grid cluster or your Red Hat JBoss EAP application, both use the default JGroups property and attempt to form clusters using the same bind address.
4.3.1. System Properties for JGroups Stacks
Set system properties that configure JGroups cluster transport stacks.
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.bind.address` | Bind address for cluster transport. | `SITE_LOCAL` | Optional |
| `jgroups.bind.port` | Bind port for the socket. | `7800` | Optional |
| `jgroups.mcast_addr` | IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. | `228.6.7.8` | Optional |
| `jgroups.mcast_port` | Port for the multicast socket. | `46655` | Optional |
| `jgroups.ip_ttl` | Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | `2` | Optional |
| `jgroups.thread_pool.min_threads` | Minimum number of threads for the thread pool. | `0` | Optional |
| `jgroups.thread_pool.max_threads` | Maximum number of threads for the thread pool. | `200` | Optional |
| `jgroups.join_timeout` | Maximum number of milliseconds to wait for join requests to succeed. | `2000` | Optional |
| `jgroups.thread_dumps_threshold` | Number of times a thread pool needs to be full before a thread dump is logged. | `10000` | Optional |
Amazon EC2
The following system properties apply only to default-jgroups-ec2.xml:
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.s3.access_key` | Amazon S3 access key for an S3 bucket. | No default value. | Optional |
| `jgroups.s3.secret_access_key` | Amazon S3 secret key used for an S3 bucket. | No default value. | Optional |
| `jgroups.s3.bucket` | Name of the Amazon S3 bucket. The name must exist and be unique. | No default value. | Optional |
Kubernetes
The following system properties apply only to default-jgroups-kubernetes.xml:
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.dns.query` | Sets the DNS record that returns cluster members. | No default value. | Required |
Google Cloud Platform
The following system properties apply only to default-jgroups-google.xml:
| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.google.bucket_name` | Name of the Google Compute Engine bucket. The name must exist and be unique. | No default value. | Required |
4.4. Using Inline JGroups Stacks
You can insert complete JGroups stack definitions into infinispan.xml files.
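A sketch of an inline stack definition; the stack name and protocol parameters are illustrative, and a production stack normally lists the full protocol chain:

```xml
<infinispan>
  <jgroups>
    <!-- Defines a complete JGroups stack inline. -->
    <stack name="prod">
      <TCP bind_port="7800"/>
      <MPING break_on_coord_rsp="true"/>
      <FD_SOCK/>
      <FD_ALL timeout="3000" interval="1000"/>
      <VERIFY_SUSPECT timeout="1000"/>
      <pbcast.NAKACK2 use_mcast_xmit="false"/>
      <UNICAST3/>
      <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1M"/>
      <pbcast.GMS print_local_addr="true" join_timeout="2000"/>
      <UFC max_credits="2m" min_threshold="0.40"/>
      <MFC max_credits="2m" min_threshold="0.40"/>
      <FRAG3/>
    </stack>
  </jgroups>
  <cache-container name="default">
    <transport stack="prod"/>
  </cache-container>
</infinispan>
```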
4.5. Using External JGroups Stacks
Reference external files that define custom JGroups stacks in infinispan.xml files.
Procedure
- Put custom JGroups stack files on the application classpath.

  Alternatively, you can specify an absolute path when you declare the external stack file.
- Reference the external stack file with the `stack-file` element.
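A sketch of the declaration; the stack and file names are placeholders:

```xml
<infinispan>
  <jgroups>
    <!-- Loads a custom JGroups stack from a file on the classpath. -->
    <stack-file name="prod-tcp" path="prod-jgroups-tcp.xml"/>
  </jgroups>
  <cache-container name="default">
    <transport stack="prod-tcp"/>
  </cache-container>
</infinispan>
```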
4.6. Cluster Discovery Protocols
Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters.
There are two types of discovery mechanisms that Data Grid can use:
- Generic discovery protocols that work on most networks and do not rely on external services.
- Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters.

  For instance, the DNS_PING protocol performs discovery through DNS server records.
Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose.
Reference
- JGroups Discovery Protocols
- JGroups cluster transport configuration for Data Grid 8.x (Red Hat knowledgebase article)
4.6.1. PING
PING, or UDPPING, is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol.
When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node and its own address. If no nodes respond to the PING request, the joining node becomes the coordinator node in a new cluster.
PING configuration example
<config>
<PING num_discovery_runs="3"/>
...
</config>
4.6.2. TCPPING
TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members.
With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically.
TCPPING configuration example
- For reliable discovery, Red Hat recommends `port_range=0`.
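A sketch of a TCPPING declaration; the host names and ports are placeholders:

```xml
<config>
    <!-- Lists the static addresses of cluster members.
         port_range="0" restricts discovery to the listed ports. -->
    <TCPPING initial_hosts="hostname1[7800],hostname2[7800]"
             port_range="0"/>
    ...
</config>
```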
4.6.3. MPING
MPING uses IP multicast to discover the initial membership of Data Grid clusters.
With TCP stacks, you can use MPING to replace TCPPING discovery and use multicasting for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks.
MPING configuration example
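A sketch of an MPING declaration; the attribute values are illustrative defaults:

```xml
<config>
    <!-- MPING uses IP multicast for discovery even when the transport is TCP. -->
    <MPING mcast_addr="${jgroups.mcast_addr:228.6.7.8}"
           mcast_port="${jgroups.mcast_port:46655}"
           num_discovery_runs="3"/>
    ...
</config>
```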
4.6.4. TCPGOSSIP
Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes.
You inject the address (`IP:PORT`) of the Gossip router into Data Grid nodes as follows:

- Pass the address as a system property to the JVM; for example, `-DGossipRouterAddress="10.10.2.4[12001]"`.
- Reference that system property in the JGroups configuration file.
Gossip router configuration example
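A sketch of a TCPGOSSIP declaration that reads the Gossip router address from the system property described above:

```xml
<config>
    <!-- Resolves the Gossip router address from the GossipRouterAddress system property. -->
    <TCPGOSSIP initial_hosts="${GossipRouterAddress}"/>
    ...
</config>
```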
4.6.5. JDBC_PING
JDBC_PING uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection.
Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database.
JDBC_PING configuration example
Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING.
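A sketch of a JDBC_PING declaration; the connection values are placeholders for your own database:

```xml
<config>
    <!-- Nodes write their addresses to this shared database for discovery. -->
    <JDBC_PING connection_url="jdbc:mysql://localhost:3306/database_name"
               connection_username="user"
               connection_password="password"
               connection_driver="com.mysql.cj.jdbc.Driver"/>
    ...
</config>
```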
4.6.6. DNS_PING
JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.
DNS_PING configuration example
<config>
<dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" />
...
</config>
Reference
- JGroups DNS_PING
- DNS for Services and Pods (Kubernetes documentation for adding DNS entries)
4.7. Using Custom JChannels
Construct custom JGroups JChannels and pass them to the Data Grid transport configuration.
Data Grid cannot use custom JChannels that are already connected.
4.8. Encrypting Cluster Transport
Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join.
4.8.1. Data Grid Cluster Security
To secure cluster traffic, you configure Data Grid nodes to encrypt JGroups message payloads with secret keys.
Data Grid nodes can obtain secret keys from either:
- The coordinator node (asymmetric encryption).
- A shared keystore (symmetric encryption).
Retrieving secret keys from coordinator nodes
You configure asymmetric encryption by adding the ASYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys.
When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks.
Asymmetric encryption secures cluster traffic as follows:
- The first node in the Data Grid cluster, the coordinator node, generates a secret key.
- A joining node performs certificate authentication with the coordinator to mutually verify identity.
- The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
- The coordinator node encrypts the secret key with the public key and returns it to the joining node.
- The joining node decrypts and installs the secret key.
- The node joins the cluster, encrypting and decrypting messages with the secret key.
Retrieving secret keys from shared keystores
You configure symmetric encryption by adding the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide.
- Nodes install the secret key from a keystore on the Data Grid classpath at startup.
- Nodes join clusters, encrypting and decrypting messages with the secret key.
Comparison of asymmetric and symmetric encryption
ASYM_ENCRYPT with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.
SYM_ENCRYPT, on the other hand, is faster than ASYM_ENCRYPT because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.
4.8.2. Configuring Cluster Transport with Asymmetric Encryption
Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages.
Procedure
- Create a keystore with certificate chains that enable Data Grid to verify node identity.
- Place the keystore on the classpath for each node in the cluster.

  For Data Grid Server, you put the keystore in the `$RHDG_HOME` directory.
- Add the `SSL_KEY_EXCHANGE` and `ASYM_ENCRYPT` protocols to a JGroups stack in your Data Grid configuration. The configuration:
  1. Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack for Data Grid.
  2. Names the keystore that nodes use to perform certificate authentication.
  3. Specifies the keystore password.
  4. Uses the `stack.combine` and `stack.position` attributes to insert `SSL_KEY_EXCHANGE` into the default TCP stack after the `VERIFY_SUSPECT` protocol.
  5. Specifies the length of the secret key that the coordinator node generates. The default value is `2048`.
  6. Specifies the cipher engine the coordinator node uses to generate secret keys. The default value is `RSA`.
  7. Configures Data Grid to generate and distribute a new secret key when the coordinator node changes.
  8. Configures Data Grid to generate and distribute a new secret key when nodes leave.
  9. Configures Data Grid nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication.
  10. Uses the `stack.combine` and `stack.position` attributes to insert `ASYM_ENCRYPT` into the default TCP stack after the `SSL_KEY_EXCHANGE` protocol.
  11. Configures the Data Grid cluster to use the secure JGroups stack.
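The numbered descriptions above correspond to a stack like the following sketch; the keystore name and password are placeholders:

```xml
<infinispan>
  <jgroups>
    <!-- (1) Secure stack that extends the default TCP stack. -->
    <stack name="encrypt-tcp" extends="tcp">
      <!-- (2, 3) Keystore for certificate authentication;
           (4) inserted after the VERIFY_SUSPECT protocol. -->
      <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks"
                        keystore_password="changeit"
                        stack.combine="INSERT_AFTER"
                        stack.position="VERIFY_SUSPECT"/>
      <!-- (5, 6) Secret key length and cipher engine;
           (7, 8) new keys when the coordinator changes or nodes leave;
           (9) use SSL_KEY_EXCHANGE for certificate authentication;
           (10) inserted after the SSL_KEY_EXCHANGE protocol. -->
      <ASYM_ENCRYPT asym_keylength="2048"
                    asym_algorithm="RSA"
                    change_key_on_coord_leave="true"
                    change_key_on_leave="true"
                    use_external_key_exchange="true"
                    stack.combine="INSERT_AFTER"
                    stack.position="SSL_KEY_EXCHANGE"/>
    </stack>
  </jgroups>
  <cache-container name="default">
    <!-- (11) Uses the secure stack for cluster transport. -->
    <transport stack="encrypt-tcp"/>
  </cache-container>
</infinispan>
```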
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example ASYM_ENCRYPT configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.
4.8.3. Configuring Cluster Transport with Symmetric Encryption
Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide.
Procedure
- Create a keystore that contains a secret key.
- Place the keystore on the classpath for each node in the cluster.

  For Data Grid Server, you put the keystore in the `$RHDG_HOME` directory.
- Add the `SYM_ENCRYPT` protocol to a JGroups stack in your Data Grid configuration. The configuration:
  1. Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack for Data Grid.
  2. Names the keystore from which nodes obtain secret keys.
  3. Specifies the keystore type. JGroups uses JCEKS by default.
  4. Specifies the keystore password.
  5. Specifies the secret key password.
  6. Specifies the secret key alias.
  7. Uses the `stack.combine` and `stack.position` attributes to insert `SYM_ENCRYPT` into the default TCP stack after the `VERIFY_SUSPECT` protocol.
  8. Configures the Data Grid cluster to use the secure JGroups stack.
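The numbered descriptions above correspond to a stack like the following sketch; the keystore name, type, passwords, and alias are placeholders for values from your own keystore:

```xml
<infinispan>
  <jgroups>
    <!-- (1) Secure stack that extends the default TCP stack. -->
    <stack name="encrypt-tcp" extends="tcp">
      <!-- (2-6) Keystore name, type, passwords, and secret key alias;
           (7) inserted after the VERIFY_SUSPECT protocol. -->
      <SYM_ENCRYPT keystore_name="myKeystore.p12"
                   keystore_type="PKCS12"
                   store_password="changeit"
                   key_password="changeit"
                   alias="myKey"
                   stack.combine="INSERT_AFTER"
                   stack.position="VERIFY_SUSPECT"/>
    </stack>
  </jgroups>
  <cache-container name="default">
    <!-- (8) Uses the secure stack for cluster transport. -->
    <transport stack="encrypt-tcp"/>
  </cache-container>
</infinispan>
```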
Verification
When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
Data Grid nodes can join the cluster only if they use SYM_ENCRYPT and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs:
[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
Reference
The example SYM_ENCRYPT configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters.