Chapter 12. Setting up Data Grid cluster transport


Data Grid requires a transport layer so nodes can automatically join and leave clusters. The transport layer also enables Data Grid nodes to replicate or distribute data across the network and perform operations such as re-balancing and state transfer.

12.1. Default JGroups stacks

Data Grid provides default JGroups stack files, default-jgroups-*.xml, in the default-configs directory inside the infinispan-core-14.0.21.Final-redhat-00001.jar file.

You can find this JAR file in the $RHDG_HOME/lib directory.
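For example, you can list the bundled stack files without extracting the JAR, assuming the unzip utility is available:

unzip -l $RHDG_HOME/lib/infinispan-core-14.0.21.Final-redhat-00001.jar "default-configs/*"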

default-jgroups-udp.xml (stack name: udp)
Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets.

default-jgroups-tcp.xml (stack name: tcp)
Uses TCP for transport and the MPING protocol for discovery, which uses UDP multicast. Suitable for smaller clusters (under 100 nodes) only if you are using distributed caches, because TCP is more efficient than UDP as a point-to-point protocol.

default-jgroups-kubernetes.xml (stack name: kubernetes)
Uses TCP for transport and DNS_PING for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available.

default-jgroups-ec2.xml (stack name: ec2)
Uses TCP for transport and aws.S3_PING for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available. Requires additional dependencies.

default-jgroups-google.xml (stack name: google)
Uses TCP for transport and GOOGLE_PING2 for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available. Requires additional dependencies.

default-jgroups-azure.xml (stack name: azure)
Uses TCP for transport and AZURE_PING for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available. Requires additional dependencies.

default-jgroups-tunnel.xml (stack name: tunnel)
Uses TUNNEL for transport. Suitable for environments where Data Grid is behind a firewall and direct connections between Data Grid nodes are impossible. Requires an external, accessible Gossip router service to redirect traffic, and requires the jgroups.tunnel.hosts property to be set to the Gossip router hosts and ports in the format host1[port],host2[port],…​
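For example, you might start Data Grid Server with the tunnel stack and point it at a Gossip router as follows; the host and port are illustrative:

bin/server.sh --cluster-stack=tunnel -Djgroups.tunnel.hosts=router1[12001]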

12.2. Cluster discovery protocols

Data Grid supports different protocols that allow nodes to automatically find each other on the network and form clusters.

There are two types of discovery mechanisms that Data Grid can use:

  • Generic discovery protocols that work on most networks and do not rely on external services.
  • Discovery protocols that rely on external services to store and retrieve topology information for Data Grid clusters.
    For instance, the DNS_PING protocol performs discovery through DNS server records.
Note

Running Data Grid on hosted platforms requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose.

12.2.1. PING

PING, or UDPPING, is a generic JGroups discovery mechanism that uses dynamic multicasting with the UDP protocol.

When joining, nodes send PING requests to an IP multicast address to discover other nodes already in the Data Grid cluster. Each node responds to the PING request with a packet that contains the address of the coordinator node (C) and its own address (A). If no nodes respond to the PING request, the joining node becomes the coordinator node of a new cluster.

PING configuration example

<PING num_discovery_runs="3"/>

12.2.2. TCPPING

TCPPING is a generic JGroups discovery mechanism that uses a list of static addresses for cluster members.

With TCPPING, you manually specify the IP address or hostname of each node in the Data Grid cluster as part of the JGroups stack, rather than letting nodes discover each other dynamically.

TCPPING configuration example

<TCP bind_port="7800" />
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:hostname1[port1],hostname2[port2]}"
         port_range="0"
         num_initial_members="3"/>
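Because initial_hosts resolves a system property with a default value, you can override the member list at startup instead of editing the stack. For example, with illustrative addresses:

bin/server.sh -Djgroups.tcpping.initial_hosts=192.0.2.1[7800],192.0.2.2[7800]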

12.2.3. MPING

MPING uses IP multicast to discover the initial membership of Data Grid clusters.

You can use MPING to replace TCPPING discovery in TCP stacks, using multicast for discovery instead of static lists of initial hosts. However, you can also use MPING with UDP stacks.

MPING configuration example

<MPING mcast_addr="${jgroups.mcast_addr:239.6.7.8}"
       mcast_port="${jgroups.mcast_port:46655}"
       num_discovery_runs="3"
       ip_ttl="${jgroups.udp.ip_ttl:2}"/>

12.2.4. TCPGOSSIP

Gossip routers provide a centralized location on the network from which your Data Grid cluster can retrieve addresses of other nodes.

You inject the address (IP:PORT) of the Gossip router into Data Grid nodes as follows:

  1. Pass the address as a system property to the JVM; for example, -DGossipRouterAddress="10.10.2.4[12001]".
  2. Reference that system property in the JGroups configuration file.

Gossip router configuration example

<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000"
           initial_hosts="${GossipRouterAddress}"
           num_initial_members="3" />
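The Gossip router runs as a separate process that you can start from the standalone JGroups JAR. The following is a minimal sketch; the JAR name is illustrative, and you should check the JGroups documentation for the options that your version supports:

java -cp jgroups-5.3.0.Final.jar org.jgroups.stack.GossipRouter -port 12001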

12.2.5. JDBC_PING2

JDBC_PING2 uses shared databases to store information about Data Grid clusters. This protocol supports any database that can use a JDBC connection.

Nodes write their IP addresses to the shared database so joining nodes can find the Data Grid cluster on the network. When nodes leave Data Grid clusters, they delete their IP addresses from the shared database.

JDBC_PING2 configuration example

<JDBC_PING2 connection_url="jdbc:mysql://localhost:3306/database_name"
            connection_username="user"
            connection_password="password"
            connection_driver="com.mysql.jdbc.Driver"/>

Important

Add the appropriate JDBC driver to the classpath so Data Grid can use JDBC_PING2.
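For Data Grid Server, one way to do this is to copy the driver JAR into the server/lib directory. The JAR name here is illustrative:

cp mysql-connector-j-8.4.0.jar $RHDG_HOME/server/lib/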

12.2.5.1. Using a server datasource for JDBC_PING2 discovery

Add a managed datasource to a Data Grid Server and use it to provide database connections for the cluster transport JDBC_PING2 discovery protocol.

Prerequisites

  • Install a Data Grid Server cluster.

Procedure

  1. Deploy a JDBC driver JAR to your Data Grid Server server/lib directory.
  2. Create a datasource for your database.

    <server xmlns="urn:infinispan:server:15.0">
      <data-sources>
         <!-- Defines a unique name for the datasource and JNDI name that you
              reference in JDBC cache store configuration.
              Enables statistics for the datasource, if required. -->
         <data-source name="ds"
                      jndi-name="jdbc/postgres"
                      statistics="true">
            <!-- Specifies the JDBC driver that creates connections. -->
            <connection-factory driver="org.postgresql.Driver"
                                url="jdbc:postgresql://localhost:5432/postgres"
                                username="postgres"
                                password="changeme">
               <!-- Sets optional JDBC driver-specific connection properties. -->
               <connection-property name="name">value</connection-property>
            </connection-factory>
            <!-- Defines connection pool tuning properties. -->
            <connection-pool initial-size="1"
                             max-size="10"
                             min-size="3"
                             background-validation="1000"
                             idle-removal="1"
                             blocking-timeout="1000"
                             leak-detection="10000"/>
         </data-source>
      </data-sources>
    </server>
  3. Create a JGroups stack which uses the JDBC_PING2 protocol for discovery.
  4. Configure cluster transport to use the datasource by specifying the name of the datasource with the server:data-source attribute.

    <infinispan>
        <jgroups>
            <stack name="jdbc" extends="tcp">
                <JDBC_PING2 stack.combine="REPLACE" stack.position="MPING" />
            </stack>
        </jgroups>
        <cache-container>
            <transport stack="jdbc" server:data-source="ds" />
        </cache-container>
    </infinispan>

12.2.6. DNS_PING

JGroups DNS_PING queries DNS servers to discover Data Grid cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.

DNS_PING configuration example

<dns.DNS_PING dns_query="myservice.myproject.svc.cluster.local" />
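Rather than hardcoding the query in the stack, you can set it at startup with the jgroups.dns.query system property, for example:

bin/server.sh --cluster-stack=kubernetes -Djgroups.dns.query=myservice.myproject.svc.cluster.local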

12.2.7. Cloud discovery protocols

Data Grid includes default JGroups stacks that use discovery protocol implementations that are specific to cloud providers.

aws.S3_PING (default stack file: default-jgroups-ec2.xml)
Artifact: org.jgroups.aws:jgroups-aws, version 3.0.0.Final

GOOGLE_PING2 (default stack file: default-jgroups-google.xml)
Artifact: org.jgroups.google:jgroups-google, version 2.0.0.Final

azure.AZURE_PING (default stack file: default-jgroups-azure.xml)
Artifact: org.jgroups.azure:jgroups-azure, version 2.0.2.Final

Providing dependencies for cloud discovery protocols

To use aws.S3_PING, GOOGLE_PING2, or azure.AZURE_PING cloud discovery protocols, you need to provide dependent libraries to Data Grid.

Procedure

  1. Download the artifact JAR file and all dependencies.
  2. Add the artifact JAR file and all dependencies to the $RHDG_HOME/server/lib directory of your Data Grid Server installation.

    For more details, see Downloading artifacts for JGroups cloud discovery protocols for Data Grid Server (Red Hat Knowledgebase article).

You can then configure the cloud discovery protocol as part of a JGroups stack file or with system properties.
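For example, a Data Grid Server start command that selects the ec2 stack and supplies aws.S3_PING system properties might look as follows; the region and bucket names are illustrative:

bin/server.sh --cluster-stack=ec2 -Djgroups.s3.region_name=eu-west-1 -Djgroups.s3.bucket_name=my-data-grid-bucket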

12.3. Using the default JGroups stacks

Data Grid uses JGroups protocol stacks so nodes can send each other messages on dedicated cluster channels.

Data Grid provides preconfigured JGroups stacks for UDP and TCP protocols. You can use these default stacks as a starting point for building custom cluster transport configuration that is optimized for your network requirements.

Procedure

Do one of the following to use one of the default JGroups stacks:

  • Use the stack attribute in your infinispan.xml file.

    <infinispan>
      <cache-container default-cache="replicatedCache">
        <!-- Use the default UDP stack for cluster transport. -->
        <transport cluster="${infinispan.cluster.name}"
                   stack="udp"
                   node-name="${infinispan.node.name:}"/>
      </cache-container>
    </infinispan>
  • Use the --cluster-stack argument to set the JGroups stack file when Data Grid Server starts:

    bin/server.sh --cluster-stack=udp

Verification

Data Grid logs the following message to indicate which stack it uses:

[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack udp

12.4. Customizing JGroups stacks

Adjust and tune properties to create a cluster transport configuration that works for your network requirements.

Data Grid provides attributes that let you extend the default JGroups stacks for easier configuration. You can inherit properties from the default stacks while combining, removing, and replacing other properties.

Procedure

  1. Create a new JGroups stack declaration in your infinispan.xml file.
  2. Add the extends attribute and specify a JGroups stack to inherit properties from.
  3. Use the stack.combine attribute to modify properties for protocols configured in the inherited stack.
  4. Use the stack.position attribute to define the location for your custom stack.
  5. Specify the stack name as the value for the stack attribute in the transport configuration.

    For example, you might evaluate using a Gossip router and symmetric encryption with the default TCP stack as follows:

    <infinispan>
      <jgroups>
        <!-- Creates a custom JGroups stack named "my-stack". -->
        <!-- Inherits properties from the default TCP stack. -->
        <stack name="my-stack" extends="tcp">
          <!-- Uses TCPGOSSIP as the discovery mechanism instead of MPING -->
          <TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
                 stack.combine="REPLACE"
                 stack.position="MPING" />
          <!-- Removes the FD_SOCK2 protocol from the stack. -->
          <FD_SOCK2 stack.combine="REMOVE"/>
          <!-- Modifies the timeout value for the VERIFY_SUSPECT2 protocol. -->
          <VERIFY_SUSPECT2 timeout="2000"/>
          <!-- Adds SYM_ENCRYPT to the stack after VERIFY_SUSPECT2. -->
          <SYM_ENCRYPT sym_algorithm="AES"
                       keystore_name="mykeystore.p12"
                       keystore_type="PKCS12"
                       store_password="changeit"
                       key_password="changeit"
                       alias="myKey"
                       stack.combine="INSERT_AFTER"
                       stack.position="VERIFY_SUSPECT2" />
        </stack>
      </jgroups>
      <cache-container name="default" statistics="true">
        <!-- Uses "my-stack" for cluster transport. -->
        <transport cluster="${infinispan.cluster.name}"
                   stack="my-stack"
                   node-name="${infinispan.node.name:}"/>
      </cache-container>
    </infinispan>
  6. Check Data Grid logs to ensure it uses the stack.

    [org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack my-stack

12.4.1. Inheritance attributes

When you extend a JGroups stack, inheritance attributes let you adjust protocols and properties in the stack you are extending.

  • stack.position specifies protocols to modify.
  • stack.combine uses the following values to extend JGroups stacks:

    COMBINE
    Overrides protocol properties.

    REPLACE
    Replaces protocols.

    INSERT_AFTER
    Adds a protocol into the stack after another protocol. Does not affect the protocol that you specify as the insertion point.

    Protocols in JGroups stacks affect each other based on their location in the stack. For example, you should put a protocol such as NAKACK2 after the SYM_ENCRYPT or ASYM_ENCRYPT protocol so that NAKACK2 is secured.

    INSERT_BEFORE
    Inserts a protocol into the stack before another protocol. Affects the protocol that you specify as the insertion point.

    REMOVE
    Removes protocols from the stack.
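For example, the following sketch uses COMBINE to override a single property of an inherited protocol while keeping its position in the stack. It assumes the default TCP stack, which includes the FD_ALL3 failure detection protocol:

<infinispan>
  <jgroups>
    <stack name="tuned-tcp" extends="tcp">
      <!-- COMBINE overrides the timeout property of the inherited FD_ALL3 protocol. -->
      <FD_ALL3 timeout="10000" stack.combine="COMBINE"/>
    </stack>
  </jgroups>
</infinispan>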

12.5. Using JGroups system properties

Pass system properties to Data Grid at startup to tune cluster transport.

Procedure

  • Use -D<property-name>=<property-value> arguments to set JGroups system properties as required.

For example, set a custom bind port and IP address as follows:

bin/server.sh -Djgroups.bind.port=1234 -Djgroups.bind.address=192.0.2.0

12.5.1. Cluster transport properties

Use the following properties to customize JGroups cluster transport.

jgroups.bind.address
Bind address for cluster transport. Default: SITE_LOCAL. Optional.

jgroups.bind.port
Bind port for the socket. Default: 7800. Optional.

jgroups.mcast_addr
IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. Default: 239.6.7.8. Optional.

jgroups.mcast_port
Port for the multicast socket. Default: 46655. Optional.

jgroups.ip_ttl
Time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. Default: 2. Optional.

jgroups.thread_pool.min_threads
Minimum number of threads for the thread pool. Default: 0. Optional.

jgroups.thread_pool.max_threads
Maximum number of threads for the thread pool. Default: 200. Optional.

jgroups.join_timeout
Maximum number of milliseconds to wait for join requests to succeed. Default: 2000. Optional.

jgroups.thread_dumps_threshold
Number of times a thread pool needs to be full before a thread dump is logged. Default: 10000. Optional.

jgroups.fd.port-offset
Offset from the jgroups.bind.port port for the FD (failure detection protocol) socket. Default: 50000 (port 57800). Optional.

jgroups.frag_size
Maximum number of bytes in a message. Messages larger than that are fragmented. Default: 60000. Optional.

jgroups.diag.enabled
Enables JGroups diagnostic probing. Default: false. Optional.

12.5.2. System properties for cloud discovery protocols

Use the following properties to configure JGroups discovery protocols for hosted platforms.

12.5.2.1. Amazon EC2

System properties for configuring aws.S3_PING.

jgroups.s3.region_name
Name of the Amazon S3 region. No default value. Optional.

jgroups.s3.bucket_name
Name of the Amazon S3 bucket. The name must exist and be unique. No default value. Optional.

12.5.2.2. Google Cloud Platform

System properties for configuring GOOGLE_PING2.

jgroups.google.bucket_name
Name of the Google Compute Engine bucket. The name must exist and be unique. No default value. Required.

12.5.2.3. Azure

System properties for configuring azure.AZURE_PING.

jboss.jgroups.azure_ping.storage_account_name
Name of the Azure storage account. The name must exist and be unique. No default value. Required.

jboss.jgroups.azure_ping.storage_access_key
Name of the Azure storage access key. No default value. Required.

jboss.jgroups.azure_ping.container
Valid DNS name of the container that stores ping information. No default value. Required.

12.5.2.4. OpenShift

System properties for configuring DNS_PING.

jgroups.dns.query
Sets the DNS record that returns cluster members. No default value. Required.

jgroups.dns.record
Sets the DNS record type. Default: A. Optional.

12.6. Using inline JGroups stacks

You can insert complete JGroups stack definitions into infinispan.xml files.

Procedure

  • Embed a custom JGroups stack declaration in your infinispan.xml file.

    <infinispan>
      <!-- Contains one or more JGroups stack definitions. -->
      <jgroups>
        <!-- Defines a custom JGroups stack named "prod". -->
        <stack name="prod">
          <TCP bind_port="7800" port_range="30" recv_buf_size="20000000" send_buf_size="640000"/>
          <RED/>
          <MPING break_on_coord_rsp="true"
                 mcast_addr="${jgroups.mping.mcast_addr:239.2.4.6}"
                 mcast_port="${jgroups.mping.mcast_port:43366}"
                 num_discovery_runs="3"
                 ip_ttl="${jgroups.udp.ip_ttl:2}"/>
          <MERGE3 />
          <FD_SOCK2 />
          <FD_ALL3 timeout="3000" interval="1000" timeout_check_interval="1000" />
          <VERIFY_SUSPECT2 timeout="1000" />
          <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="200" xmit_table_num_rows="50"
                          xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" />
          <UNICAST3 conn_close_timeout="5000" xmit_interval="200" xmit_table_num_rows="50"
                    xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" />
          <pbcast.STABLE desired_avg_gossip="2000" max_bytes="1M" />
          <pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:2000}" />
          <UFC max_credits="4m" min_threshold="0.40" />
          <MFC max_credits="4m" min_threshold="0.40" />
          <FRAG4 />
        </stack>
      </jgroups>
      <cache-container default-cache="replicatedCache">
        <!-- Uses "prod" for cluster transport. -->
        <transport cluster="${infinispan.cluster.name}"
               stack="prod"
               node-name="${infinispan.node.name:}"/>
      </cache-container>
    </infinispan>

12.7. Using external JGroups stacks

Reference external files that define custom JGroups stacks in infinispan.xml files.

Procedure

  1. Add custom JGroups stack files to the $RHDG_HOME/server/conf directory.

    Alternatively, you can specify an absolute path when you declare the external stack file.

  2. Reference the external stack file with the stack-file element.

    <infinispan>
      <jgroups>
         <!-- Creates a "prod-tcp" stack that references an external file. -->
         <stack-file name="prod-tcp" path="prod-jgroups-tcp.xml"/>
      </jgroups>
      <cache-container default-cache="replicatedCache">
        <!-- Use the "prod-tcp" stack for cluster transport. -->
        <transport stack="prod-tcp" />
        <replicated-cache name="replicatedCache"/>
        <!-- Other cache configuration goes here. -->
      </cache-container>
    </infinispan>

12.8. Encrypting cluster transport

Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join.

12.8.1. Securing cluster transport with TLS identities

Add SSL/TLS identities to a Data Grid Server security realm and use them to secure cluster transport. Nodes in the Data Grid Server cluster then exchange SSL/TLS certificates to encrypt JGroups messages, including RELAY messages if you configure cross-site replication.

Prerequisites

  • Install a Data Grid Server cluster.

Procedure

  1. Create a TLS keystore that contains a single certificate to identify Data Grid Server. See the keytool sketch after this procedure.

    You can also use a PEM file if it contains a private key in PKCS#1 or PKCS#8 format and a certificate, and if it has an empty password: password="".

    Note

    If the certificate in the keystore is not signed by a public certificate authority (CA) then you must also create a trust store that contains either the signing certificate or the public key.

  2. Add the keystore to the $RHDG_HOME/server/conf directory.
  3. Add the keystore to a new security realm in your Data Grid Server configuration.

    Important

    You should create dedicated keystores and security realms so that Data Grid Server endpoints do not use the same security realm as cluster transport.

    <server xmlns="urn:infinispan:server:15.0">
      <security>
        <security-realms>
          <security-realm name="cluster-transport">
            <server-identities>
              <ssl>
                <!-- Adds a keystore that contains a certificate that provides SSL/TLS identity to encrypt cluster transport. -->
                <keystore path="server.pfx"
                          relative-to="infinispan.server.config.path"
                          password="secret"
                          alias="server"/>
              </ssl>
            </server-identities>
          </security-realm>
        </security-realms>
      </security>
    </server>
  4. Configure cluster transport to use the security realm by specifying the name of the security realm with the server:security-realm attribute.

    <infinispan>
      <cache-container>
        <transport server:security-realm="cluster-transport"/>
      </cache-container>
    </infinispan>
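For step 1, you can create the keystore with Java's keytool. The following is a minimal sketch that matches the configuration in this procedure; the distinguished name is illustrative, and production deployments typically use CA-signed certificates:

keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -storetype PKCS12 -keystore server.pfx -storepass secret -dname "CN=dg-node1"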

Verification

When you start Data Grid Server, the following log message indicates that the cluster is using the security realm for cluster transport:

[org.infinispan.SERVER] ISPN080060: SSL Transport using realm <security_realm_name>

12.8.2. JGroups encryption protocols

To secure cluster traffic, you can configure Data Grid nodes to encrypt JGroups message payloads with secret keys.

Data Grid nodes can obtain secret keys from either:

  • The coordinator node (asymmetric encryption).
  • A shared keystore (symmetric encryption).

Retrieving secret keys from coordinator nodes

You configure asymmetric encryption by adding the ASYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys.

Important

When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks.

Asymmetric encryption secures cluster traffic as follows:

  1. The first node in the Data Grid cluster, the coordinator node, generates a secret key.
  2. A joining node performs certificate authentication with the coordinator to mutually verify identity.
  3. The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
  4. The coordinator node encrypts the secret key with the public key and returns it to the joining node.
  5. The joining node decrypts and installs the secret key.
  6. The node joins the cluster, encrypting and decrypting messages with the secret key.

Retrieving secret keys from shared keystores

You configure symmetric encryption by adding the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide.

  1. Nodes install the secret key from a keystore on the Data Grid classpath at startup.
  2. Nodes join clusters, encrypting and decrypting messages with the secret key.

Comparison of asymmetric and symmetric encryption

ASYM_ENCRYPT with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.

SYM_ENCRYPT, on the other hand, is faster than ASYM_ENCRYPT because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.

12.8.3. Securing cluster transport with asymmetric encryption

Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages.

Procedure

  1. Create a keystore with certificate chains that enables Data Grid to verify node identity. See the keytool sketch after this procedure.
  2. Place the keystore on the classpath for each node in the cluster.

    For Data Grid Server, you put the keystore in the $RHDG_HOME directory.

  3. Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example:

    <infinispan>
      <jgroups>
        <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. -->
        <stack name="encrypt-tcp" extends="tcp">
          <!-- Adds a keystore that nodes use to perform certificate authentication. -->
          <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT2. -->
          <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks"
                            keystore_password="changeit"
                            stack.combine="INSERT_AFTER"
                            stack.position="VERIFY_SUSPECT2"/>
          <!-- Configures ASYM_ENCRYPT -->
          <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. -->
          <!-- The use_external_key_exchange = "true" attribute configures nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. -->
          <ASYM_ENCRYPT asym_keylength="2048"
                        asym_algorithm="RSA"
                        change_key_on_coord_leave = "false"
                        change_key_on_leave = "false"
                        use_external_key_exchange = "true"
                        stack.combine="INSERT_BEFORE"
                        stack.position="pbcast.NAKACK2"/>
        </stack>
      </jgroups>
      <cache-container name="default" statistics="true">
        <!-- Configures the cluster to use the JGroups stack. -->
        <transport cluster="${infinispan.cluster.name}"
                   stack="encrypt-tcp"
                   node-name="${infinispan.node.name:}"/>
      </cache-container>
    </infinispan>
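For step 1, the following is a minimal keytool sketch that produces a keystore matching the SSL_KEY_EXCHANGE configuration above; the distinguished name is illustrative, and in production the certificates should chain to a certificate authority that all nodes trust:

keytool -genkeypair -alias node -keyalg RSA -keysize 2048 -validity 365 -storetype JKS -keystore mykeystore.jks -storepass changeit -dname "CN=dg-node1"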

Verification

When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:

[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>

Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs:

[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it

12.8.4. Securing cluster transport with symmetric encryption

Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide.

Procedure

  1. Create a keystore that contains a secret key. See the keytool sketch after this procedure.
  2. Place the keystore on the classpath for each node in the cluster.

    For Data Grid Server, you put the keystore in the $RHDG_HOME directory.

  3. Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration.
<infinispan>
  <jgroups>
    <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. -->
    <stack name="encrypt-tcp" extends="tcp">
      <!-- Adds a keystore from which nodes obtain secret keys. -->
      <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT2. -->
      <SYM_ENCRYPT keystore_name="myKeystore.p12"
                   keystore_type="PKCS12"
                   store_password="changeit"
                   key_password="changeit"
                   alias="myKey"
                   stack.combine="INSERT_AFTER"
                   stack.position="VERIFY_SUSPECT2"/>
    </stack>
  </jgroups>
  <cache-container name="default" statistics="true">
    <!-- Configures the cluster to use the JGroups stack. -->
    <transport cluster="${infinispan.cluster.name}"
               stack="encrypt-tcp"
               node-name="${infinispan.node.name:}"/>
  </cache-container>
</infinispan>
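For step 1, the following is a minimal keytool sketch that creates a shared secret key matching the SYM_ENCRYPT configuration above:

keytool -genseckey -alias myKey -keyalg AES -keysize 128 -storetype PKCS12 -keystore myKeystore.p12 -storepass changeit -keypass changeit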

Verification

When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack:

[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>

Data Grid nodes can join the cluster only if they use SYM_ENCRYPT and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs:

[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it

12.9. TCP and UDP ports for cluster traffic

Data Grid uses the following ports for cluster transport messages:

7800 (TCP/UDP)
JGroups cluster bind port

46655 (UDP)
JGroups multicast

Cross-site replication

Data Grid uses the following ports for the JGroups RELAY2 protocol:

7900
For Data Grid clusters running on OpenShift.
7800
If using UDP for traffic between nodes and TCP for traffic between clusters.
7801
If using TCP for traffic between nodes and TCP for traffic between clusters.
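If a firewall separates cluster nodes, these ports must be open between them. The following is a minimal sketch for hosts that use firewalld:

firewall-cmd --permanent --add-port=7800/tcp --add-port=46655/udp
firewall-cmd --reload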