Chapter 1. Hot Rod Java Clients
Access Data Grid remotely through the Hot Rod Java client API.
1.1. Hot Rod Protocol
Hot Rod is a binary TCP protocol that Data Grid uses to provide high-performance client-server interactions with the following capabilities:
- Load balancing. Hot Rod clients can send requests across Data Grid clusters using different strategies.
- Failover. Hot Rod clients can monitor Data Grid cluster topology changes and automatically switch to available nodes.
- Efficient data location. Hot Rod clients can find key owners and make requests directly to those nodes, which reduces latency.
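The following is a minimal sketch of connecting to a Data Grid Server over Hot Rod and performing basic cache operations. The server address and port match the examples later in this chapter; the credentials and the cache name "mycache" are placeholders for your own environment.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodConnectExample {
   public static void main(String[] args) {
      // Point the client at a Data Grid Server endpoint (placeholder credentials)
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("127.0.0.1")
               .port(11222)
             .security().authentication()
               .username("admin")
               .password("changeme");

      // RemoteCacheManager manages connections to the Data Grid cluster
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Obtain a remote cache that already exists on the server (placeholder name)
         RemoteCache<String, String> cache = cacheManager.getCache("mycache");
         cache.put("hello", "world");
         System.out.println(cache.get("hello"));
      }
   }
}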
1.2. Client Intelligence
Hot Rod clients use intelligence mechanisms to efficiently send requests to Data Grid Server clusters. By default, the Hot Rod protocol has the HASH_DISTRIBUTION_AWARE intelligence mechanism enabled.
BASIC intelligence
Clients do not receive topology change events for Data Grid clusters, such as nodes joining or leaving, and use only the list of Data Grid Server network locations that you add to the client configuration.
Enable BASIC intelligence when Data Grid Server does not send its internal cluster topology to Hot Rod clients, so that clients use only the servers in the Hot Rod client configuration.
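For example, the following is a minimal sketch of a BASIC client configuration, assuming two hypothetical server addresses, server1:11222 and server2:11222. Because the client never receives topology updates, it uses only the servers that you list:
ConfigurationBuilder builder = new ConfigurationBuilder();
// With BASIC intelligence the client only ever knows about the servers listed here
builder.addServers("server1:11222;server2:11222");
builder.clientIntelligence(ClientIntelligence.BASIC);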
TOPOLOGY_AWARE intelligence
Clients receive and store topology change events for Data Grid clusters to dynamically keep track of Data Grid Servers on the network.
To receive cluster topology, clients need the network location, either IP address or host name, of at least one Hot Rod server at startup. After the client connects, Data Grid Server transmits the topology to the client. When Data Grid Server nodes join or leave the cluster, Data Grid transmits an updated topology to the client.
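As a minimal sketch, assuming a hypothetical bootstrap server named server1, one network location is enough because the client learns about the rest of the cluster from the topology that Data Grid Server sends:
ConfigurationBuilder builder = new ConfigurationBuilder();
// A single known server bootstraps the connection; topology updates reveal the other nodes
builder.addServer().host("server1").port(11222);
builder.clientIntelligence(ClientIntelligence.TOPOLOGY_AWARE);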
HASH_DISTRIBUTION_AWARE intelligence
Clients receive and store topology change events for Data Grid clusters in addition to hashing information that enables clients to identify which nodes store specific keys.
For example, consider a put(k,v) operation. The client calculates the hash value for the key so it can locate the exact Data Grid Server node on which the data resides. Clients can then connect directly to that node to perform read and write operations.
The benefit of HASH_DISTRIBUTION_AWARE intelligence is that Data Grid Server does not need to look up values based on key hashes, which uses fewer server-side resources. Another benefit is that Data Grid Server responds to client requests more quickly because it does not need to make additional network roundtrips.
Configuration
By default, the Hot Rod client uses the intelligence that you configure globally for all Data Grid clusters.
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clientIntelligence(ClientIntelligence.BASIC);
hotrod-client.properties
infinispan.client.hotrod.client_intelligence=BASIC
When you configure the Hot Rod client to use multiple Data Grid clusters, you can use different intelligence settings for each cluster.
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addCluster("NYC")
       .clusterClientIntelligence(ClientIntelligence.BASIC);
hotrod-client.properties
infinispan.client.hotrod.cluster.intelligence.NYC=BASIC
Failed Server Timeout
If a server does not report its topology, as is the case with BASIC intelligence, or if the client cannot connect to a server because of network issues, the client marks that server as failed. A client does not attempt to connect to a server marked as failed until it receives an updated topology. Because BASIC intelligence never provides topology updates, the client never re-attempts the connection.
To avoid this situation, use the serverFailureTimeout setting, which clears the failed server status after a defined period of time. Data Grid tries to reconnect to the server after the defined timeout. If the server is still unreachable, it is marked as failed again and the connection is re-attempted after the next timeout. You can disable reconnection attempts by setting the serverFailureTimeout value to -1.
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.serverFailureTimeout(5000)
       .clientIntelligence(ClientIntelligence.BASIC);
hotrod-client.properties
infinispan.client.hotrod.server_failure_timeout=5000
infinispan.client.hotrod.client_intelligence=BASIC
1.3. Request Balancing
Hot Rod Java clients balance requests to Data Grid Server clusters so that read and write operations are spread across nodes.
Clients that use BASIC or TOPOLOGY_AWARE intelligence use request balancing for all requests. Clients that use HASH_DISTRIBUTION_AWARE intelligence send requests directly to the node that stores the desired key. If the node does not respond, the clients then fall back to request balancing.
The default balancing strategy is round-robin, so Hot Rod clients perform request balancing as in the following example, where s1, s2, and s3 are nodes in a Data Grid cluster:
// Connect to the Data Grid cluster
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
// Obtain the remote cache
RemoteCache<String, String> cache = cacheManager.getCache("test");
//Hot Rod client sends a request to the "s1" node
cache.put("key1", "aValue");
//Hot Rod client sends a request to the "s2" node
cache.put("key2", "aValue");
//Hot Rod client sends a request to the "s3" node
String value = cache.get("key1");
//Hot Rod client sends the next request to the "s1" node again
cache.remove("key2");
Custom balancing policies
You can use custom FailoverRequestBalancingStrategy implementations if you add your class to the Hot Rod client configuration.
ConfigurationBuilder
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
       .host("127.0.0.1")
       .port(11222)
       .balancingStrategy(new MyCustomBalancingStrategy());
hotrod-client.properties
infinispan.client.hotrod.request_balancing_strategy=my.package.MyCustomBalancingStrategy
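The following is a minimal sketch of what such a custom implementation could look like, assuming a FailoverRequestBalancingStrategy interface with setServers() and nextServer() methods; the exact method signatures can vary between client versions, so check the Javadoc for your version. The class name MyCustomBalancingStrategy and its "prefer the first available server" policy are purely illustrative.
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Set;

import org.infinispan.client.hotrod.FailoverRequestBalancingStrategy;

// Illustrative policy: always prefer the first configured server that is not marked as failed
public class MyCustomBalancingStrategy implements FailoverRequestBalancingStrategy {

   private final List<SocketAddress> servers = new ArrayList<>();

   @Override
   public void setServers(Collection<SocketAddress> servers) {
      // Called whenever the client receives a new server list or topology
      this.servers.clear();
      this.servers.addAll(servers);
   }

   @Override
   public SocketAddress nextServer(Set<SocketAddress> failedServers) {
      for (SocketAddress server : servers) {
         if (failedServers == null || !failedServers.contains(server)) {
            return server;
         }
      }
      // Fall back to the first server if all servers are marked as failed
      return servers.get(0);
   }
}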
1.4. Client Failover
Hot Rod clients can automatically fail over when Data Grid cluster topologies change. For instance, Hot Rod clients that are topology-aware can detect when one or more Data Grid servers fail.
In addition to failover between clustered Data Grid servers, Hot Rod clients can fail over between Data Grid clusters.
For example, you have a Data Grid cluster running in New York (NYC) and another cluster running in London (LON). Clients sending requests to NYC detect that no nodes are available so they switch to the cluster in LON. Clients then maintain connections to LON until you manually switch clusters or failover happens again.
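As a minimal sketch of this scenario, assuming the hypothetical host names nyc-host and lon-host, you can define LON as a named backup cluster in the client configuration and, if necessary, switch clusters manually:
ConfigurationBuilder builder = new ConfigurationBuilder();
// Default cluster (NYC)
builder.addServer().host("nyc-host").port(11222);
// Named backup cluster (LON) that clients fail over to when NYC is unavailable
builder.addCluster("LON").addClusterNode("lon-host", 11222);

RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
// You can also switch clusters manually and back again
cacheManager.switchToCluster("LON");
cacheManager.switchToDefaultCluster();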
Transactional Caches with Failover
Conditional operations, such as putIfAbsent(), replace(), and remove(), have strict method return guarantees. Likewise, some operations can require previous values to be returned.
Even though Hot Rod clients can fail over, you should use transactional caches to ensure that operations do not partially complete and leave conflicting entries on different nodes.
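The following is a minimal sketch of running conditional operations inside a Hot Rod transaction, assuming a cache named "transactional-cache" that is configured as transactional on the server; the cache name is a placeholder and the client-side transaction configuration API can vary between client versions.
import javax.transaction.TransactionManager; // jakarta.transaction in newer client versions

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.TransactionMode;

public class TransactionalFailoverExample {
   public static void main(String[] args) throws Exception {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222);
      // Declare the remote cache as transactional on the client side;
      // the cache must also be configured as transactional on the server
      builder.remoteCache("transactional-cache")
             .transactionMode(TransactionMode.NON_XA);

      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         RemoteCache<String, String> cache = cacheManager.getCache("transactional-cache");
         TransactionManager tm = cache.getTransactionManager();
         tm.begin();
         try {
            // Both writes commit together or not at all, even if failover occurs mid-way
            cache.put("k1", "v1");
            cache.putIfAbsent("k2", "v2");
            tm.commit();
         } catch (Exception e) {
            tm.rollback();
            throw e;
         }
      }
   }
}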
1.5. Hot Rod client compatibility with Data Grid Server
Data Grid Server allows you to connect Hot Rod clients with different versions. For instance, during a migration or upgrade of your Data Grid cluster, the Hot Rod client version might be lower than the Data Grid Server version.
Data Grid recommends using the latest Hot Rod client version to benefit from the most recent capabilities and security enhancements.
Data Grid 8 and later
Hot Rod protocol version 3.x automatically negotiates the highest protocol version that both the client and Data Grid Server support.
Data Grid 7.3 and earlier
Clients that use a Hot Rod protocol version that is higher than the Data Grid Server version must set the infinispan.client.hotrod.protocol_version property.
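For example, a client that connects to an older server can pin the protocol version in its properties file; the value 2.9 below is only an illustration, so use the protocol version that matches your Data Grid Server.
hotrod-client.properties
infinispan.client.hotrod.protocol_version=2.9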
Additional resources
- Hot Rod protocol reference
- Connecting Hot Rod clients to servers with different versions (Red Hat Knowledgebase)