Chapter 2. Configuring Data Grid cross-site replication
Set up cluster transport so Data Grid clusters can discover each other and relay nodes can send messages for cross-site replication. You can then add backup locations to Data Grid caches.
2.1. Configuring cluster transport for cross-site replication
Add JGroups RELAY2 to your transport layer so that Data Grid can replicate caches to backup locations.
Procedure
- Open your Data Grid configuration for editing.
- Add the RELAY2 protocol to a JGroups stack.
- Specify the stack name with the stack attribute on the transport configuration so that the Data Grid cluster uses it.
- Save and close your Data Grid configuration.
JGroups RELAY2 stacks
The following configuration shows a JGroups RELAY2 stack that:
- Uses the default JGroups UDP stack for cluster transport, which is the communication between nodes at the local site.
- Uses the default JGroups TCP stack for cross-site replication traffic.
- Names the local site as LON.
- Specifies a maximum of 1000 site masters, which are the nodes in the cluster that can send cross-site replication requests.
- Specifies the names of all backup locations that participate in cross-site replication.
<infinispan>
  <jgroups>
    <stack name="xsite" extends="udp">
      <relay.RELAY2 xmlns="urn:org:jgroups"
                    site="LON"
                    max_site_masters="1000"/>
      <remote-sites default-stack="tcp">
        <remote-site name="LON"/>
        <remote-site name="NYC"/>
      </remote-sites>
    </stack>
  </jgroups>
  <cache-container>
    <transport cluster="${cluster.name}" stack="xsite"/>
  </cache-container>
</infinispan>
Additional resources
- JGroups RELAY2 Stacks
- Data Grid configuration schema reference
2.1.1. Custom JGroups RELAY2 stacks
You can add custom JGroups RELAY2 stacks to Data Grid clusters to use different transport properties for cross-site replication. For example, the following configuration uses TCPPING instead of MPING for discovery and extends the default TCP stack:
<infinispan>
  <jgroups>
    <stack name="relay-global" extends="tcp">
      <TCPPING initial_hosts="192.0.2.0[7800]"
               stack.combine="REPLACE"
               stack.position="MPING"/>
    </stack>
    <stack name="xsite" extends="udp">
      <relay.RELAY2 xmlns="urn:org:jgroups"
                    site="LON"
                    max_site_masters="10"
                    can_become_site_master="true"/>
      <remote-sites default-stack="relay-global">
        <remote-site name="LON"/>
        <remote-site name="NYC"/>
      </remote-sites>
    </stack>
  </jgroups>
</infinispan>
2.2. Adding backup locations to caches
Specify the names of remote sites so Data Grid can replicate data to caches on those clusters.
Procedure
- Open your Data Grid configuration for editing.
- Add the backups element to your cache configuration.
- Specify the name of the remote site as the backup location. For example, in the LON configuration, specify NYC as the backup.
- Repeat the preceding steps on each cluster so that each site is a backup for the other sites. For example, if you add LON as a backup for NYC, you should also add NYC as a backup for LON.
- Save and close your Data Grid configuration.
Backup configuration
The following example shows the "customers" cache configuration for the LON cluster:
XML
<replicated-cache name="customers">
  <backups>
    <backup site="NYC" strategy="ASYNC" />
  </backups>
</replicated-cache>
JSON
{
  "replicated-cache": {
    "name": "customers",
    "backups": {
      "NYC": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    }
  }
}
YAML
replicatedCache:
  name: "customers"
  backups:
    NYC:
      backup:
        strategy: "ASYNC"
The following example shows the "customers" cache configuration for the NYC cluster:
XML
<distributed-cache name="customers">
  <backups>
    <backup site="LON" strategy="ASYNC" />
  </backups>
</distributed-cache>
JSON
{
  "distributed-cache": {
    "name": "customers",
    "backups": {
      "LON": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    }
  }
}
YAML
distributedCache:
  name: "customers"
  backups:
    LON:
      backup:
        strategy: "ASYNC"
2.3. Backing up to caches with different names
Data Grid replicates data between caches that have the same name by default. If you want Data Grid to replicate between caches with different names, you can explicitly declare the backup for each cache.
Procedure
- Open your Data Grid configuration for editing.
- Use backup-for or backupFor to replicate data from a remote site into a cache with a different name on the local site.
- Save and close your Data Grid configuration.
Backup for configuration
The following example configures the "eu-customers" cache to receive updates from the "customers" cache on the LON cluster:
XML
<distributed-cache name="eu-customers">
  <backups>
    <backup site="LON" strategy="ASYNC" />
  </backups>
  <backup-for remote-cache="customers" remote-site="LON" />
</distributed-cache>
JSON
{
  "distributed-cache": {
    "name": "eu-customers",
    "backups": {
      "LON": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    },
    "backup-for": {
      "remote-cache": "customers",
      "remote-site": "LON"
    }
  }
}
YAML
distributedCache:
  name: "eu-customers"
  backups:
    LON:
      backup:
        strategy: "ASYNC"
  backupFor:
    remoteCache: "customers"
    remoteSite: "LON"
2.4. Configuring cross-site state transfer
Change cross-site state transfer settings to optimize performance and specify whether operations happen manually or automatically.
Procedure
- Open your Data Grid configuration for editing.
- Configure state transfer operations as appropriate:
- Specify the number of entries to include in each state transfer operation with chunk-size or chunkSize.
- Specify the time to wait, in milliseconds, for state transfer operations to complete with timeout.
- Set the maximum number of attempts for Data Grid to retry failed state transfers with max-retries or maxRetries.
- Specify the time to wait, in milliseconds, between retry attempts with wait-time or waitTime.
- Specify whether state transfer operations happen automatically or manually with mode.
- Save and close your Data Grid configuration.
State transfer configuration
XML
<distributed-cache name="eu-customers">
  <backups>
    <backup site="LON" strategy="ASYNC">
      <state-transfer chunk-size="600"
                      timeout="2400000"
                      max-retries="30"
                      wait-time="2000"
                      mode="AUTO"/>
    </backup>
  </backups>
</distributed-cache>
JSON
{
  "distributed-cache": {
    "name": "eu-customers",
    "backups": {
      "LON": {
        "backup": {
          "strategy": "ASYNC",
          "state-transfer": {
            "chunk-size": "600",
            "timeout": "2400000",
            "max-retries": "30",
            "wait-time": "2000",
            "mode": "AUTO"
          }
        }
      }
    }
  }
}
YAML
distributedCache:
  name: "eu-customers"
  backups:
    LON:
      backup:
        strategy: "ASYNC"
        stateTransfer:
          chunkSize: "600"
          timeout: "2400000"
          maxRetries: "30"
          waitTime: "2000"
          mode: "AUTO"
2.5. Configuring conflict resolution algorithms
Configure Data Grid to use a different algorithm to resolve conflicting entries between backup locations.
Procedure
- Open your Data Grid configuration for editing.
- Specify one of the Data Grid algorithms or a custom implementation as the merge policy to resolve conflicting entries.
- Save and close your Data Grid configuration.
Data Grid algorithms
Find all Data Grid algorithms and their descriptions in the org.infinispan.xsite.spi.XSiteMergePolicy enum.
The following example configuration uses the ALWAYS_REMOVE algorithm, which deletes conflicting entries from both sites:
XML
<distributed-cache>
  <backups merge-policy="ALWAYS_REMOVE">
    <backup site="LON" strategy="ASYNC"/>
  </backups>
</distributed-cache>
JSON
{
  "distributed-cache": {
    "backups": {
      "merge-policy": "ALWAYS_REMOVE",
      "LON": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    }
  }
}
YAML
distributedCache:
  backups:
    mergePolicy: "ALWAYS_REMOVE"
    LON:
      backup:
        strategy: "ASYNC"
Custom conflict resolution algorithms
If you create a custom XSiteEntryMergePolicy implementation, you can specify the fully qualified class name as the merge policy.
XML
<distributed-cache>
  <backups merge-policy="org.mycompany.MyCustomXSiteEntryMergePolicy">
    <backup site="LON" strategy="ASYNC"/>
  </backups>
</distributed-cache>
JSON
{
  "distributed-cache": {
    "backups": {
      "merge-policy": "org.mycompany.MyCustomXSiteEntryMergePolicy",
      "LON": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    }
  }
}
YAML
distributedCache:
  backups:
    mergePolicy: "org.mycompany.MyCustomXSiteEntryMergePolicy"
    LON:
      backup:
        strategy: "ASYNC"
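As an illustrative sketch, a custom merge policy might resolve conflicts by always keeping the entry that originates from a preferred site. The class below implements the org.infinispan.xsite.spi.XSiteEntryMergePolicy interface from the Data Grid SPI; the preferred-site logic and the org.mycompany package name are assumptions for illustration, so verify the interface signature against your Data Grid version.

```java
package org.mycompany;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import org.infinispan.xsite.spi.SiteEntry;
import org.infinispan.xsite.spi.XSiteEntryMergePolicy;

// Illustrative sketch: on conflict, keep the entry from a preferred site.
// The preferred site ("LON") is an assumption for this example.
public class MyCustomXSiteEntryMergePolicy<K, V> implements XSiteEntryMergePolicy<K, V> {

   private static final String PREFERRED_SITE = "LON";

   @Override
   public CompletionStage<SiteEntry<V>> merge(K key, SiteEntry<V> localEntry, SiteEntry<V> remoteEntry) {
      // Keep the conflicting entry that originates from the preferred site;
      // otherwise fall back to the local entry.
      SiteEntry<V> resolved = PREFERRED_SITE.equals(remoteEntry.getSiteName()) ? remoteEntry : localEntry;
      return CompletableFuture.completedFuture(resolved);
   }
}
```

The merge method returns a CompletionStage so that implementations can resolve conflicts asynchronously without blocking; this sketch resolves immediately with a completed future.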
2.6. Cleaning tombstones for asynchronous backups
With the asynchronous backup strategy, Data Grid stores metadata, known as tombstones, when it removes keys. Data Grid periodically runs a task to remove these tombstones and reduce excessive memory usage when backup locations no longer require the metadata. You can configure the frequency of this task by defining a target size for tombstone maps as well as the maximum delay between task runs.
Procedure
- Open your Data Grid configuration for editing.
- Specify the number of tombstones to store with the tombstone-map-size attribute. If the number of tombstones increases beyond this number, Data Grid runs the cleanup task more frequently. Likewise, if the number of tombstones is less than this number, Data Grid does not run the cleanup task as frequently.
- Add the max-cleanup-delay attribute and specify the maximum delay, in milliseconds, between tombstone cleanup tasks.
- Save the changes to your configuration.
Tombstone cleanup task configuration
XML
<distributed-cache>
  <backups tombstone-map-size="512000" max-cleanup-delay="30000">
    <backup site="LON" strategy="ASYNC"/>
  </backups>
</distributed-cache>
JSON
{
  "distributed-cache": {
    "backups": {
      "tombstone-map-size": 512000,
      "max-cleanup-delay": 30000,
      "LON": {
        "backup": {
          "strategy": "ASYNC"
        }
      }
    }
  }
}
YAML
distributedCache:
  backups:
    tombstoneMapSize: 512000
    maxCleanupDelay: 30000
    LON:
      backup:
        strategy: "ASYNC"
2.7. Verifying cross-site views
When you set up Data Grid to perform cross-site replication, you should check log files to ensure that Data Grid clusters have successfully formed cross-site views.
Procedure
- Open Data Grid log files with any appropriate editor.
- Check for ISPN000439: Received new x-site view messages.
For example, if a Data Grid cluster in LON has formed a cross-site view with a Data Grid cluster in NYC, logs include the following messages:
INFO  [org.infinispan.XSITE] (jgroups-5,<server-hostname>) ISPN000439: Received new x-site view: [NYC]
INFO  [org.infinispan.XSITE] (jgroups-7,<server-hostname>) ISPN000439: Received new x-site view: [LON, NYC]
2.8. Configuring Hot Rod clients for cross-site replication
Configure Hot Rod clients to use Data Grid clusters at different sites.
hotrod-client.properties
# Servers at the active site
infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222
# Servers at the backup site
infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222")
       .addCluster("NYC")
       .addClusterNodes("NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222");
Use the following methods to switch Hot Rod clients to the default cluster or to a cluster at a different site:
- RemoteCacheManager.switchToDefaultCluster()
- RemoteCacheManager.switchToCluster(${site.name})
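As a minimal sketch, the methods above can be combined with the ConfigurationBuilder setup shown earlier to switch a client between sites, for example during a planned failover. The host names reuse the example configuration; the try-with-resources pattern is one reasonable way to manage the client lifecycle.

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Sketch: build a client with a backup cluster, then switch sites manually.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222")
       .addCluster("NYC")
       .addClusterNodes("NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222");

try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
   // Manually switch to the NYC cluster, for example during planned maintenance at LON.
   cacheManager.switchToCluster("NYC");
   // ... perform cache operations against the NYC cluster ...
   // Switch back to the default cluster, which is the LON server list.
   cacheManager.switchToDefaultCluster();
}
```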