Chapter 2. Configuring Data Grid cross-site replication
Set up cluster transport so Data Grid clusters can discover each other and relay nodes can send messages for cross-site replication. You can then add backup locations to Data Grid caches.
2.1. Configuring cluster transport for cross-site replication
Add JGroups RELAY2 to your transport layer so that Data Grid can replicate caches to backup locations.
Procedure
- Open your Data Grid configuration for editing.
- Add the RELAY2 protocol to a JGroups stack.
- Specify the stack name with the stack attribute for the transport configuration so the Data Grid cluster uses it.
- Save and close your Data Grid configuration.
JGroups RELAY2 stacks
The following configuration shows a JGroups RELAY2 stack that:
- Uses the default JGroups UDP stack for cluster transport, that is, communication between nodes at the local site.
- Uses the default JGroups TCP stack for cross-site replication traffic.
- Names the local site LON.
- Allows a maximum of 1000 nodes in the cluster to act as relay nodes that send cross-site replication requests.
- Specifies the names of all backup locations that participate in cross-site replication.
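A sketch of such a stack, assuming the urn:org:jgroups namespace for the RELAY2 protocol and illustrative stack and site names; verify element names against your Data Grid version:

```xml
<infinispan>
  <jgroups>
    <!-- Extends the default UDP stack for transport between nodes at the local site. -->
    <stack name="xsite" extends="udp">
      <!-- Names the local site LON and lets up to 1000 nodes relay cross-site requests. -->
      <relay.RELAY2 site="LON" xmlns="urn:org:jgroups" max_site_masters="1000"/>
      <!-- Uses the default TCP stack for cross-site traffic and lists all backup locations. -->
      <remote-sites default-stack="tcp">
        <remote-site name="LON"/>
        <remote-site name="NYC"/>
      </remote-sites>
    </stack>
  </jgroups>
  <cache-container>
    <!-- The stack attribute tells the cluster transport to use this stack. -->
    <transport cluster="${cluster.name}" stack="xsite"/>
  </cache-container>
</infinispan>
```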
2.1.1. Custom JGroups RELAY2 stacks
You can add custom JGroups RELAY2 stacks to Data Grid clusters to use different transport properties for cross-site replication. For example, the following configuration uses TCPPING instead of MPING for discovery and extends the default TCP stack:
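A sketch of such a custom stack; the stack names and host addresses here are illustrative, and the stack.combine and stack.position attributes follow JGroups stack-inheritance conventions:

```xml
<infinispan>
  <jgroups>
    <!-- Extends the default TCP stack and swaps MPING for TCPPING discovery. -->
    <stack name="relay-global" extends="tcp">
      <TCPPING initial_hosts="192.0.2.1[7800],192.0.2.2[7800]"
               stack.combine="REPLACE"
               stack.position="MPING"/>
    </stack>
    <stack name="xsite" extends="udp">
      <relay.RELAY2 site="LON" xmlns="urn:org:jgroups" max_site_masters="1000"/>
      <!-- Cross-site traffic uses the custom stack instead of the default TCP stack. -->
      <remote-sites default-stack="relay-global">
        <remote-site name="LON"/>
        <remote-site name="NYC"/>
      </remote-sites>
    </stack>
  </jgroups>
</infinispan>
```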
2.2. Adding backup locations to caches
Specify the names of remote sites so Data Grid can replicate data to caches on those clusters.
Procedure
- Open your Data Grid configuration for editing.
- Add the backups element to your cache configuration.
- Specify the name of the remote site as the backup location. For example, in the LON configuration, specify NYC as the backup.
- Repeat the preceding steps on each cluster so that each site is a backup for other sites. For example, if you add LON as a backup for NYC, you should also add NYC as a backup for LON.
- Save and close your Data Grid configuration.
Backup configuration
The following example shows the "customers" cache configuration for the LON cluster:
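A minimal XML sketch of the LON configuration, assuming a distributed cache and an asynchronous backup strategy:

```xml
<distributed-cache name="customers">
  <!-- NYC is the backup location for this cache on the LON cluster. -->
  <backups>
    <backup site="NYC" strategy="ASYNC"/>
  </backups>
</distributed-cache>
```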
The following example shows the "customers" cache configuration for the NYC cluster:
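A minimal XML sketch of the corresponding NYC configuration, again assuming a distributed cache and an asynchronous backup strategy:

```xml
<distributed-cache name="customers">
  <!-- LON is the backup location for this cache on the NYC cluster. -->
  <backups>
    <backup site="LON" strategy="ASYNC"/>
  </backups>
</distributed-cache>
```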
2.3. Backing up to caches with different names
Data Grid replicates data between caches that have the same name by default. If you want Data Grid to replicate between caches with different names, you can explicitly declare the backup for each cache.
Procedure
- Open your Data Grid configuration for editing.
- Use backup-for or backupFor to replicate data from a remote site into a cache with a different name on the local site.
- Save and close your Data Grid configuration.
Backup for configuration
The following example configures the "eu-customers" cache to receive updates from the "customers" cache on the LON cluster:
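A minimal XML sketch for the "eu-customers" cache, assuming a distributed cache on the local site:

```xml
<distributed-cache name="eu-customers">
  <!-- Receives updates from the "customers" cache on the LON cluster. -->
  <backup-for remote-cache="customers" remote-site="LON"/>
</distributed-cache>
```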
2.4. Configuring cross-site state transfer
Change cross-site state transfer settings to optimize performance and specify whether operations happen manually or automatically.
Procedure
- Open your Data Grid configuration for editing.
- Configure state transfer operations as appropriate.
  - Specify the number of entries to include in each state transfer operation with chunk-size or chunkSize.
  - Specify the time to wait, in milliseconds, for state transfer operations to complete with timeout.
  - Set the maximum number of attempts for Data Grid to retry failed state transfers with max-retries or maxRetries.
  - Specify the time to wait, in milliseconds, between retry attempts with wait-time or waitTime.
  - Specify whether state transfer operations happen automatically or manually with mode.
- Save and close your Data Grid configuration.
State transfer configuration
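An XML sketch that sets each of the attributes listed in the procedure; the attribute values here are illustrative, not recommendations:

```xml
<distributed-cache name="customers">
  <backups>
    <backup site="LON" strategy="ASYNC">
      <!-- Tunes state transfer in chunks of 600 entries, with retries and an automatic mode. -->
      <state-transfer chunk-size="600"
                      timeout="2400000"
                      max-retries="30"
                      wait-time="2000"
                      mode="AUTO"/>
    </backup>
  </backups>
</distributed-cache>
```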
2.5. Configuring conflict resolution algorithms
Configure Data Grid to use a different algorithm to resolve conflicting entries between backup locations.
Procedure
- Open your Data Grid configuration for editing.
- Specify one of the Data Grid algorithms or a custom implementation as the merge policy to resolve conflicting entries.
- Save and close your Data Grid configuration.
Data Grid algorithms
Find all Data Grid algorithms and their descriptions in the org.infinispan.xsite.spi.XSiteMergePolicy enum.
The following example configuration uses the ALWAYS_REMOVE algorithm, which deletes conflicting entries from both sites:
XML
<distributed-cache>
<backups merge-policy="ALWAYS_REMOVE">
<backup site="LON" strategy="ASYNC"/>
</backups>
</distributed-cache>
Custom conflict resolution algorithms
If you create a custom XSiteEntryMergePolicy implementation, you can specify the fully qualified class name as the merge policy.
XML
<distributed-cache>
<backups merge-policy="org.mycompany.MyCustomXSiteEntryMergePolicy">
<backup site="LON" strategy="ASYNC"/>
</backups>
</distributed-cache>
2.6. Cleaning tombstones for asynchronous backups
With the asynchronous backup strategy Data Grid stores metadata, known as tombstones, when it removes keys. Data Grid periodically runs a task to remove these tombstones and reduce excessive memory usage when backup locations no longer require the metadata. You can configure the frequency for this task by defining a target size for tombstone maps as well as the maximum delay between task runs.
Procedure
- Open your Data Grid configuration for editing.
- Specify the number of tombstones to store with the tombstone-map-size attribute. If the number of tombstones rises above this value, Data Grid runs the cleanup task more frequently. Likewise, if the number of tombstones falls below this value, Data Grid runs the cleanup task less frequently.
- Add the max-cleanup-delay attribute and specify the maximum delay, in milliseconds, between tombstone cleanup tasks.
- Save the changes to your configuration.
Tombstone cleanup task configuration
XML
<distributed-cache>
<backups tombstone-map-size="512000" max-cleanup-delay="30000">
<backup site="LON" strategy="ASYNC"/>
</backups>
</distributed-cache>
2.7. Verifying cross-site views
When you set up Data Grid to perform cross-site replication, you should check log files to ensure that Data Grid clusters have successfully formed cross-site views.
Procedure
- Open Data Grid log files with any appropriate editor.
- Check for ISPN000439: Received new x-site view messages.
For example, if a Data Grid cluster in LON has formed a cross-site view with a Data Grid cluster in NYC, logs include the following messages:
INFO [org.infinispan.XSITE] (jgroups-5,<server-hostname>) ISPN000439: Received new x-site view: [NYC]
INFO [org.infinispan.XSITE] (jgroups-7,<server-hostname>) ISPN000439: Received new x-site view: [LON, NYC]
2.8. Configuring Hot Rod clients for cross-site replication
Configure Hot Rod clients to use Data Grid clusters at different sites.
hotrod-client.properties
# Servers at the active site
infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222
# Servers at the backup site
infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222")
.addCluster("NYC")
.addClusterNodes("NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222");
Use the following methods to switch Hot Rod clients to the default cluster or to a cluster at a different site:
- RemoteCacheManager.switchToDefaultCluster()
- RemoteCacheManager.switchToCluster(${site.name})