
Chapter 3. Configuring Data Grid for Cross-Site Replication


To configure Data Grid to replicate data across sites, you first set up cluster transport so that Data Grid clusters can discover each other and site masters can communicate. You then add backup locations to cache definitions in your Data Grid configuration.

3.1. Configuring Cluster Transport for Cross-Site Replication

Add JGroups RELAY2 to your transport layer so that Data Grid clusters can communicate with backup locations.

Procedure

  1. Open infinispan.xml for editing.
  2. Add the RELAY2 protocol to a JGroups stack, for example:

    <jgroups>
       <stack name="xsite" extends="udp">
          <relay.RELAY2 site="LON" xmlns="urn:org:jgroups" max_site_masters="1000"/>
          <remote-sites default-stack="tcp">
             <remote-site name="LON"/>
             <remote-site name="NYC"/>
          </remote-sites>
       </stack>
    </jgroups>
  3. Configure Data Grid cluster transport to use the stack, as in the following example:

    <cache-container name="default" statistics="true">
      <transport cluster="${cluster.name}" stack="xsite"/>
    </cache-container>
  4. Save and close infinispan.xml.
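
If you run Data Grid in embedded mode, you can check that the edited configuration loads and that the transport starts by creating a cache manager from the file. The following is a minimal sketch; the TransportCheck class name is illustrative, and it assumes infinispan.xml is in the working directory or on the classpath.

import org.infinispan.manager.DefaultCacheManager;

public class TransportCheck {
   public static void main(String[] args) throws Exception {
      // Starting a cache manager applies the transport configuration,
      // so a misnamed or invalid stack fails immediately here.
      try (DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml")) {
         System.out.println("Cluster members: " + cacheManager.getMembers());
      }
   }
}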

3.1.1. JGroups RELAY2 Stacks

Data Grid clusters use JGroups RELAY2 for inter-cluster discovery and communication.

<jgroups>
   <stack name="xsite"                                       (1)
          extends="udp">                                     (2)
      <relay.RELAY2 xmlns="urn:org:jgroups"                  (3)
                    site="LON"                               (4)
                    max_site_masters="1000"/>                (5)
      <remote-sites default-stack="tcp">                     (6)
         <remote-site name="LON"/>                           (7)
         <remote-site name="NYC"/>
      </remote-sites>
   </stack>
</jgroups>

(1) Defines a stack named "xsite" that declares which protocols to use for your Data Grid cluster transport.
(2) Uses the default JGroups UDP stack for intra-cluster traffic.
(3) Adds RELAY2 to the stack for inter-cluster transport.
(4) Names the local site. Data Grid replicates data in caches from this site to backup locations.
(5) Configures a maximum of 1000 site masters for the local cluster. For optimal performance with backup requests, set max_site_masters to a value greater than or equal to the number of nodes in the Data Grid cluster.
(6) Specifies all site names and uses the default JGroups TCP stack for inter-cluster transport.
(7) Names each remote site as a backup location.

3.1.2. Custom JGroups RELAY2 Stacks

You can also reference separately defined JGroups stacks for inter-cluster transport, as in the following example:

<jgroups>
   <stack-file name="relay-global" path="jgroups-relay.xml"/>   (1)
   <stack name="xsite" extends="udp">
      <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"
                    max_site_masters="10"                       (2)
                    can_become_site_master="true"/>
      <remote-sites default-stack="relay-global">
         <remote-site name="LON"/>
         <remote-site name="NYC"/>
      </remote-sites>
   </stack>
</jgroups>

(1) Adds a custom RELAY2 stack defined in jgroups-relay.xml.
(2) Sets the maximum number of site masters and optionally specifies additional RELAY2 properties. See the JGroups RELAY2 documentation.

Example jgroups-relay.xml

<config xmlns="urn:org:jgroups"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.1.xsd">

    <!-- Use TCP for inter-cluster transport. -->
    <TCP bind_addr="127.0.0.1"
         bind_port="7200"
         port_range="30"

         thread_pool.min_threads="0"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
    />

    <!-- Use TCPPING for inter-cluster discovery. -->
    <TCPPING timeout="3000"
             initial_hosts="127.0.0.1[7200]"
             port_range="3"
             ergonomics="false"/>

    <!-- Provide other configuration as required. -->
</config>
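
To catch errors in the combined configuration before deployment, you can parse it with the Infinispan configuration parser in embedded mode. This is a minimal sketch and assumes the ParserRegistry#parseFile method is available in your Data Grid version:

import org.infinispan.configuration.parsing.ConfigurationBuilderHolder;
import org.infinispan.configuration.parsing.ParserRegistry;

// Parsing fails fast if infinispan.xml references a stack or stack-file
// that is missing or malformed.
ConfigurationBuilderHolder holder = new ParserRegistry().parseFile("infinispan.xml");
// holder now contains the parsed global and cache configuration builders.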

3.2. Adding Backup Locations to Caches

Specify the names of remote sites so Data Grid can back up data to those locations.

Procedure

  1. Add the backups element to your cache definition.
  2. Specify the name of each remote site with the backup element.

    As an example, in the LON configuration, specify NYC as the remote site.

  3. Repeat the preceding steps so that each site is a backup for all other sites. For example, you cannot add LON as a backup for NYC without adding NYC as a backup for LON.
Note

Cache configurations can be different across sites and use different backup strategies. Data Grid replicates data based on cache names.

Example "customers" configuration in LON

<replicated-cache name="customers">
  <backups>
    <backup site="NYC" strategy="ASYNC" />
  </backups>
</replicated-cache>

Example "customers" configuration in NYC

<distributed-cache name="customers">
  <backups>
    <backup site="LON" strategy="SYNC" />
  </backups>
</distributed-cache>
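
If you define caches programmatically in embedded mode, the equivalent of the declarative backups element is the sites() builder. The following is a minimal sketch of the LON-side "customers" cache, assuming the embedded org.infinispan.configuration.cache.ConfigurationBuilder API:

import org.infinispan.configuration.cache.BackupConfiguration;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.REPL_SYNC);
// Back up this cache to NYC asynchronously, matching the declarative example.
builder.sites().addBackup()
       .site("NYC")
       .strategy(BackupConfiguration.BackupStrategy.ASYNC);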

3.3. Backing Up to Caches with Different Names

By default, Data Grid replicates data between caches that have the same name.

Procedure

  • Use backup-for to replicate data from a remote site into a cache with a different name on the local site.

For example, the following configuration backs up the "customers" cache on LON to the "eu-customers" cache on NYC.

<distributed-cache name="eu-customers">
  <backups>
    <backup site="LON" strategy="SYNC" />
  </backups>
  <backup-for remote-cache="customers" remote-site="LON" />
</distributed-cache>
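
The backup-for element also has a programmatic counterpart in embedded mode. A minimal sketch of the NYC-side "eu-customers" cache, under the same API assumption as the previous example:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.DIST_SYNC);
// Declare this cache as the backup for the "customers" cache at the LON site.
builder.sites().backupFor()
       .remoteCache("customers")
       .remoteSite("LON");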

3.4. Verifying Cross-Site Views

After you configure Data Grid for cross-site replication, you should verify that Data Grid clusters successfully form cross-site views.

Procedure

  • Check log messages for ISPN000439: Received new x-site view messages.

For example, if the Data Grid cluster in LON has formed a cross-site view with the Data Grid cluster in NYC, it provides the following messages:

INFO  [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [NYC]
INFO  [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [NYC, LON]
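
In embedded mode you can also inspect the cross-site view programmatically. The following sketch assumes that DefaultCacheManager#getTransport and Transport#getSitesView are available in your Data Grid version:

import org.infinispan.manager.DefaultCacheManager;

try (DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml")) {
   // getSitesView() returns the names of sites in the current cross-site view,
   // for example [LON, NYC] once both clusters connect.
   System.out.println("Cross-site view: " + cacheManager.getTransport().getSitesView());
}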

3.5. Configuring Hot Rod Clients for Cross-Site Replication

Configure Hot Rod clients to use Data Grid clusters at different sites.

hotrod-client.properties

# Servers at the active site
infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222

# Servers at the backup site
infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222
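
A client can load these properties at startup; the following is a minimal sketch that assumes hotrod-client.properties is on the classpath:

import java.io.InputStream;
import java.util.Properties;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

Properties properties = new Properties();
try (InputStream stream = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("hotrod-client.properties")) {
   properties.load(stream);
}
// withProperties() applies the server_list and cluster.* definitions above.
ConfigurationBuilder clientBuilder = new ConfigurationBuilder();
clientBuilder.withProperties(properties);
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());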

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222")
       .addCluster("NYC")
       .addClusterNodes("NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222");

Tip

Use the following methods to switch Hot Rod clients to the default cluster or to a cluster at a different site:

  • RemoteCacheManager.switchToDefaultCluster()
  • RemoteCacheManager.switchToCluster(${site.name})
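
For example, a client built from the configuration above can switch between sites manually; "NYC" is the cluster name defined earlier:

import org.infinispan.client.hotrod.RemoteCacheManager;

RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
// Fail over to the servers registered for the NYC cluster.
remoteCacheManager.switchToCluster("NYC");
// Return to the default cluster, that is, the initial server list.
remoteCacheManager.switchToDefaultCluster();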