Chapter 37. High Availability Using Server Hinting
37.1. Server Hinting
Server Hinting helps you achieve high availability with your Red Hat JBoss Data Grid deployment.
To use Server Hinting, you provide information about the physical topology with attributes that identify servers, racks, or data centers. This makes your data more resilient in the event that all the nodes in a given physical location become unavailable.
When you configure Server Hinting, JBoss Data Grid uses the location information you provided to distribute data across the cluster so that backup copies of data are stored on as many servers, racks, and data centers as possible.
In some cases JBoss Data Grid stores copies of data on nodes that share the same physical location. For example, if the number of owners for segments is greater than the number of distinct sites, then JBoss Data Grid assigns more than one owner for a given segment in the same site.
Server Hinting does not apply to total replication, which requires complete copies of data on every node.
Consistent Hashing controls how data is distributed across nodes. JBoss Data Grid uses TopologyAwareSyncConsistentHashFactory if you enable Server Hinting. For details, see ConsistentHashFactories.
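In Library Mode, one way to supply the topology attributes is through the transport configuration of the cache manager. The following sketch is illustrative only; the cluster name and the machine, rack, and site identifiers are placeholder values that you replace with ones describing your own deployment.
Server Hinting Transport Configuration
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

// Placeholder topology values; replace them with identifiers for your deployment.
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
   .transport()
      .defaultTransport()
      .clusterName("example-cluster")
      .machineId("machine-1")  // physical server
      .rackId("rack-a")        // rack that contains the server
      .siteId("site-eu")       // data center
   .build();

EmbeddedCacheManager cacheManager = new DefaultCacheManager(globalConfig);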
37.2. ConsistentHashFactories
37.2.1. ConsistentHashFactories
Red Hat JBoss Data Grid offers a pluggable mechanism for selecting the consistent hashing algorithm and provides different implementations. You can also use a custom implementation.
You can configure ConsistentHashFactory implementations in Library Mode only. In Remote Client/Server Mode, this configuration is not valid and results in a runtime error.
ConsistentHashFactory implementations:
- SyncConsistentHashFactory guarantees that the key mapping is the same for each cache, provided the current membership is the same. The drawback is that a node joining the cache can cause the existing nodes to also exchange segments, resulting in additional state transfer traffic, less even data distribution, or both. This is the default consistent hashing implementation without Server Hinting.
- TopologyAwareSyncConsistentHashFactory is equivalent to SyncConsistentHashFactory, but is used with Server Hinting to distribute data across the topology so that backup copies of data are stored on different nodes in the topology than the primary owners. This is the default consistent hashing implementation with Server Hinting.
- DefaultConsistentHashFactory keeps segments balanced evenly across all nodes. However, the key mapping is not guaranteed to be the same across caches, because it depends on the history of each cache.
- TopologyAwareConsistentHashFactory is equivalent to DefaultConsistentHashFactory, but is used with Server Hinting to distribute data across the topology so that backup copies of data are stored on different nodes in the topology than the primary owners.
You configure JBoss Data Grid to use a consistent hash implementation with the consistent-hash-factory attribute, as in the following example:
<distributed-cache consistent-hash-factory="org.infinispan.distribution.ch.impl.SyncConsistentHashFactory">
This configuration guarantees that caches with the same members have the same consistent hash, and if the machineId, rackId, or siteId attributes are specified in the transport configuration it also spreads backup copies across physical machines, racks, and data centers.
It has a potential drawback in that it can move a greater number of segments than necessary during re-balancing. This can be mitigated by using a larger number of segments.
Another potential drawback is that the segments are not distributed as evenly as possible; in fact, using a very large number of segments can make the distribution of segments worse.
Despite these potential drawbacks, SyncConsistentHashFactory and TopologyAwareSyncConsistentHashFactory both tend to reduce overhead in clustered environments, because neither calculates the hash based on the order in which nodes joined the cluster. In addition, both classes are typically faster than the default algorithms because they allow larger differences in the number of segments allocated to each node.
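In Library Mode you can also select the factory programmatically. The following sketch assumes the ConfigurationBuilder API; the cache mode and the numOwners value are illustrative choices, not requirements.
Programmatic Consistent Hash Factory Selection
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.distribution.ch.impl.SyncConsistentHashFactory;

// Select the consistent hash factory for a distributed cache.
Configuration config = new ConfigurationBuilder()
   .clustering()
      .cacheMode(CacheMode.DIST_SYNC)
      .hash()
         .consistentHashFactory(new SyncConsistentHashFactory()) // pluggable implementation
         .numOwners(2)                                           // copies of each segment
   .build();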
37.2.2. Implementing a ConsistentHashFactory
A custom ConsistentHashFactory must implement the org.infinispan.distribution.ch.ConsistentHashFactory interface with the following methods (all of which return an implementation of org.infinispan.distribution.ch.ConsistentHash):
ConsistentHashFactory Methods
create(Hash hashFunction, int numOwners, int numSegments, List<Address> members, Map<Address, Float> capacityFactors)
updateMembers(ConsistentHash baseCH, List<Address> newMembers, Map<Address, Float> capacityFactors)
rebalance(ConsistentHash baseCH)
union(ConsistentHash ch1, ConsistentHash ch2)
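The following skeleton is illustrative only. It assumes the generic ConsistentHashFactory<CH extends ConsistentHash> form of the interface; the class name is hypothetical and the placeholder method bodies must be replaced with your own segment-assignment logic.
Custom ConsistentHashFactory Skeleton
import java.util.List;
import java.util.Map;

import org.infinispan.commons.hash.Hash;
import org.infinispan.distribution.ch.ConsistentHash;
import org.infinispan.distribution.ch.ConsistentHashFactory;
import org.infinispan.remoting.transport.Address;

// Skeleton only: each method must build and return a ConsistentHash that maps
// every segment to numOwners members according to your own assignment policy.
public class CustomConsistentHashFactory implements ConsistentHashFactory<ConsistentHash> {

   @Override
   public ConsistentHash create(Hash hashFunction, int numOwners, int numSegments,
                                List<Address> members, Map<Address, Float> capacityFactors) {
      // Build the initial segment-to-owner mapping here.
      throw new UnsupportedOperationException("Implement the initial mapping");
   }

   @Override
   public ConsistentHash updateMembers(ConsistentHash baseCH, List<Address> newMembers,
                                       Map<Address, Float> capacityFactors) {
      // Adjust the existing mapping after members join or leave the cluster.
      throw new UnsupportedOperationException("Implement the membership update");
   }

   @Override
   public ConsistentHash rebalance(ConsistentHash baseCH) {
      // Return a better-balanced version of the given mapping.
      throw new UnsupportedOperationException("Implement rebalancing");
   }

   @Override
   public ConsistentHash union(ConsistentHash ch1, ConsistentHash ch2) {
      // Combine two mappings, typically by merging their owner lists.
      throw new UnsupportedOperationException("Implement the union");
   }
}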
37.3. Key Affinity Service
37.3.1. Key Affinity Service
The key affinity service allows a value to be placed in a certain node in a distributed Red Hat JBoss Data Grid cluster. The service returns a key that is hashed to a particular node based on a supplied cluster address identifying it.
The keys returned by the key affinity service cannot hold any meaning, such as a username; they are random identifiers that the application uses to refer to the record. The provided key generators do not guarantee that the keys returned by this service are unique. If you need a custom key format, pass your own implementation of KeyGenerator.
The following is an example of how to obtain and use a reference to this service.
Key Affinity Service
EmbeddedCacheManager cacheManager = getCacheManager();
Cache cache = cacheManager.getCache();
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
      cache,
      new RndKeyGenerator(),
      Executors.newSingleThreadExecutor(),
      100);
Object localKey = keyAffinityService.getKeyForAddress(cacheManager.getAddress());
cache.put(localKey, "yourValue");
The following procedure explains the preceding example.
Using the Key Affinity Service
- Obtain a reference to a cache manager and cache.
- Create the service with KeyAffinityServiceFactory. This starts the service, which then uses the supplied Executor to generate and queue keys.
- Obtain a key from the service that maps to the local node (cacheManager.getAddress() returns the local address).
- The entry with a key obtained from the KeyAffinityService is always stored on the node with the provided address. In this case, it is the local node.
37.3.2. Lifecycle
KeyAffinityService extends Lifecycle, which allows the key affinity service to be stopped, started, and restarted.
Key Affinity Service Lifecycle Parameter
public interface Lifecycle {
   void start();
   void stop();
}
The service is instantiated through the KeyAffinityServiceFactory. All factory methods take an Executor that is used for asynchronous key generation, so that key generation does not occur in the caller's thread. The user is responsible for shutting down this Executor.
The KeyAffinityService must be explicitly stopped when it is no longer required. This stops the background key generation and releases other held resources. The KeyAffinityService only stops itself when the cache manager with which it is registered is shut down.
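As a minimal sketch of the shutdown sequence, assuming the same cache, cacheManager, and RndKeyGenerator as in the earlier example, and an ExecutorService created by the caller:
ExecutorService executor = Executors.newSingleThreadExecutor();
KeyAffinityService keyAffinityService =
      KeyAffinityServiceFactory.newLocalKeyAffinityService(
            cache, new RndKeyGenerator(), executor, 100);
try {
   Object localKey = keyAffinityService.getKeyForAddress(cacheManager.getAddress());
   cache.put(localKey, "yourValue");
} finally {
   keyAffinityService.stop(); // stops background key generation and releases resources
   executor.shutdown();       // the caller owns the Executor and must shut it down
}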
37.3.3. Topology Changes
KeyAffinityService key ownership may change when a topology change occurs. The key affinity service monitors topology changes and updates itself so that it does not return stale keys, or keys that would map to a different node than the one specified. However, this does not guarantee that a node affinity has not changed by the time a key is used. For example:
- Thread (T1) reads a key (K1) that maps to a node (A).
- A topology change occurs, resulting in K1 mapping to node B.
- T1 uses K1 to add something to the cache. At this point, K1 maps to B, a different node than the one requested at the time of the read.
The above scenario is not ideal; however, it is a supported behavior for the application, because keys that are already in use may be moved during a cluster change. The KeyAffinityService provides an access proximity optimization for stable clusters, which does not apply during the instability of topology changes.