Chapter 3. Data Grid cache configuration
Cache configuration controls how Data Grid stores your data.
As part of your cache configuration, you declare the cache mode you want to use. For instance, you can configure Data Grid clusters to use replicated caches or distributed caches.
Your configuration also defines the characteristics of your caches and enables the Data Grid capabilities that you want to use when handling data. For instance, you can configure how Data Grid encodes entries in your caches, whether replication requests happen synchronously or asynchronously between nodes, if entries are mortal or immortal, and so on.
3.1. Declarative cache configuration
You can configure caches declaratively, in XML, JSON, and YAML format, according to the Data Grid schema.
Declarative cache configuration has the following advantages over programmatic configuration:
- Portability
  Define each configuration in a standalone file that you can use to create embedded and remote caches.
  You can also use declarative configuration to create caches with Data Grid Operator for clusters running on OpenShift.
- Simplicity
  Keep markup languages separate from programming languages.
  For example, to create remote caches it is generally better to not add complex XML directly to Java code.

Data Grid Server configuration extends infinispan.xml to include cluster transport mechanisms, security realms, and endpoint configuration. If you declare caches as part of your Data Grid Server configuration, you should use management tooling, such as Ansible or Chef, to keep it synchronized across the cluster.
To dynamically synchronize remote caches across Data Grid clusters, create them at runtime.
3.1.1. Cache configuration
You can create declarative cache configuration in XML, JSON, and YAML format.
All declarative caches must conform to the Data Grid schema. Configuration in JSON format must follow the structure of an XML configuration: elements correspond to objects and attributes correspond to fields.
Data Grid restricts cache names and cache template names to a maximum of 255 characters. If you exceed this character limit, Data Grid throws an exception. Write succinct cache names and cache template names.
A file system might set a limitation for the length of a file name, so ensure that a cache’s name does not exceed this limitation. If a cache name exceeds a file system’s naming limitation, general operations or initializing operations towards that cache might fail. Write succinct file names.
Distributed caches
XML
<distributed-cache owners="2" segments="256" capacity-factor="1.0" l1-lifespan="5000" mode="SYNC" statistics="true">
  <encoding media-type="application/x-protostream"/>
  <locking isolation="REPEATABLE_READ"/>
  <transaction mode="FULL_XA" locking="OPTIMISTIC"/>
  <expiration lifespan="5000" max-idle="1000"/>
  <memory max-count="1000000" when-full="REMOVE"/>
  <indexing enabled="true" storage="local-heap">
    <index-reader refresh-interval="1000"/>
    <indexed-entities>
      <indexed-entity>org.infinispan.Person</indexed-entity>
    </indexed-entities>
  </indexing>
  <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/>
  <persistence passivation="false">
    <!-- Persistent storage configuration. -->
  </persistence>
</distributed-cache>
JSON
{ "distributed-cache": { "mode": "SYNC", "owners": "2", "segments": "256", "capacity-factor": "1.0", "l1-lifespan": "5000", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } }
YAML
distributedCache:
  mode: "SYNC"
  owners: "2"
  segments: "256"
  capacityFactor: "1.0"
  l1Lifespan: "5000"
  statistics: "true"
  encoding:
    mediaType: "application/x-protostream"
  locking:
    isolation: "REPEATABLE_READ"
  transaction:
    mode: "FULL_XA"
    locking: "OPTIMISTIC"
  expiration:
    lifespan: "5000"
    maxIdle: "1000"
  memory:
    maxCount: "1000000"
    whenFull: "REMOVE"
  indexing:
    enabled: "true"
    storage: "local-heap"
    indexReader:
      refreshInterval: "1000"
    indexedEntities:
      - "org.infinispan.Person"
  partitionHandling:
    whenSplit: "ALLOW_READ_WRITES"
    mergePolicy: "PREFERRED_NON_NULL"
  persistence:
    passivation: "false"
    # Persistent storage configuration.
Replicated caches
XML
<replicated-cache segments="256" mode="SYNC" statistics="true">
  <encoding media-type="application/x-protostream"/>
  <locking isolation="REPEATABLE_READ"/>
  <transaction mode="FULL_XA" locking="OPTIMISTIC"/>
  <expiration lifespan="5000" max-idle="1000"/>
  <memory max-count="1000000" when-full="REMOVE"/>
  <indexing enabled="true" storage="local-heap">
    <index-reader refresh-interval="1000"/>
    <indexed-entities>
      <indexed-entity>org.infinispan.Person</indexed-entity>
    </indexed-entities>
  </indexing>
  <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/>
  <persistence passivation="false">
    <!-- Persistent storage configuration. -->
  </persistence>
</replicated-cache>
JSON
{ "replicated-cache": { "mode": "SYNC", "segments": "256", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } }
YAML
replicatedCache:
  mode: "SYNC"
  segments: "256"
  statistics: "true"
  encoding:
    mediaType: "application/x-protostream"
  locking:
    isolation: "REPEATABLE_READ"
  transaction:
    mode: "FULL_XA"
    locking: "OPTIMISTIC"
  expiration:
    lifespan: "5000"
    maxIdle: "1000"
  memory:
    maxCount: "1000000"
    whenFull: "REMOVE"
  indexing:
    enabled: "true"
    storage: "local-heap"
    indexReader:
      refreshInterval: "1000"
    indexedEntities:
      - "org.infinispan.Person"
  partitionHandling:
    whenSplit: "ALLOW_READ_WRITES"
    mergePolicy: "PREFERRED_NON_NULL"
  persistence:
    passivation: "false"
    # Persistent storage configuration.
Multiple caches
XML
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-14.0.xsd
                          urn:infinispan:server:14.0 https://infinispan.org/schemas/infinispan-server-14.0.xsd"
      xmlns="urn:infinispan:config:14.0"
      xmlns:server="urn:infinispan:server:14.0">
  <cache-container name="default" statistics="true">
    <distributed-cache name="mycacheone" mode="ASYNC" statistics="true">
      <encoding media-type="application/x-protostream"/>
      <expiration lifespan="300000"/>
      <memory max-size="400MB" when-full="REMOVE"/>
    </distributed-cache>
    <distributed-cache name="mycachetwo" mode="SYNC" statistics="true">
      <encoding media-type="application/x-protostream"/>
      <expiration lifespan="300000"/>
      <memory max-size="400MB" when-full="REMOVE"/>
    </distributed-cache>
  </cache-container>
</infinispan>
JSON
{ "infinispan" : { "cache-container" : { "name" : "default", "statistics" : "true", "caches" : { "mycacheone" : { "distributed-cache" : { "mode": "ASYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } }, "mycachetwo" : { "distributed-cache" : { "mode": "SYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } } } } } }
YAML
infinispan:
  cacheContainer:
    name: "default"
    statistics: "true"
    caches:
      mycacheone:
        distributedCache:
          mode: "ASYNC"
          statistics: "true"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "300000"
          memory:
            maxSize: "400MB"
            whenFull: "REMOVE"
      mycachetwo:
        distributedCache:
          mode: "SYNC"
          statistics: "true"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "300000"
          memory:
            maxSize: "400MB"
            whenFull: "REMOVE"
3.2. Adding cache templates
The Data Grid schema includes *-cache-configuration elements that you can use to create templates. You can then create caches on demand, using the same configuration multiple times.
Procedure
- Open your Data Grid configuration for editing.
- Add the cache configuration with the appropriate *-cache-configuration element or object to the Cache Manager.
- Save and close your Data Grid configuration.
Cache template example
XML
<infinispan>
  <cache-container>
    <distributed-cache-configuration name="my-dist-template" mode="SYNC" statistics="true">
      <encoding media-type="application/x-protostream"/>
      <memory max-count="1000000" when-full="REMOVE"/>
      <expiration lifespan="5000" max-idle="1000"/>
    </distributed-cache-configuration>
  </cache-container>
</infinispan>
JSON
{ "infinispan" : { "cache-container" : { "distributed-cache-configuration" : { "name" : "my-dist-template", "mode": "SYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" } } } } }
YAML
infinispan:
  cacheContainer:
    distributedCacheConfiguration:
      name: "my-dist-template"
      mode: "SYNC"
      statistics: "true"
      encoding:
        mediaType: "application/x-protostream"
      expiration:
        lifespan: "5000"
        maxIdle: "1000"
      memory:
        maxCount: "1000000"
        whenFull: "REMOVE"
3.2.1. Creating caches from templates
Create caches from configuration templates.
Templates for remote caches are available from the Cache templates menu in Data Grid Console.
Prerequisites
- Add at least one cache template to the Cache Manager.
Procedure
- Open your Data Grid configuration for editing.
- Specify the template from which the cache inherits with the configuration attribute or field.
- Save and close your Data Grid configuration.
Cache configuration inherited from a template
XML
<distributed-cache configuration="my-dist-template" />
JSON
{ "distributed-cache": { "configuration": "my-dist-template" } }
YAML
distributedCache: configuration: "my-dist-template"
3.2.2. Cache template inheritance
Cache configuration templates can inherit from other templates to extend and override settings.
Cache template inheritance is hierarchical. For a child configuration template to inherit from a parent, you must include it after the parent template.
Additionally, template inheritance is additive for elements that have multiple values. A cache that inherits from another template merges the values from that template, which can override properties.
Template inheritance example
XML
<infinispan>
  <cache-container>
    <distributed-cache-configuration name="base-template">
      <expiration lifespan="5000"/>
    </distributed-cache-configuration>
    <distributed-cache-configuration name="extended-template"
                                     configuration="base-template">
      <encoding media-type="application/x-protostream"/>
      <expiration lifespan="10000" max-idle="1000"/>
    </distributed-cache-configuration>
  </cache-container>
</infinispan>
JSON
{ "infinispan" : { "cache-container" : { "caches" : { "base-template" : { "distributed-cache-configuration" : { "expiration" : { "lifespan" : "5000" } } }, "extended-template" : { "distributed-cache-configuration" : { "configuration" : "base-template", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "10000", "max-idle" : "1000" } } } } } } }
YAML
infinispan:
  cacheContainer:
    caches:
      base-template:
        distributedCacheConfiguration:
          expiration:
            lifespan: "5000"
      extended-template:
        distributedCacheConfiguration:
          configuration: "base-template"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "10000"
            maxIdle: "1000"
3.2.3. Cache template wildcards
You can add wildcards to cache configuration template names. If you then create caches where the name matches the wildcard, Data Grid applies the configuration template.
Data Grid throws exceptions if cache names match more than one wildcard.
Template wildcard example
XML
<infinispan>
  <cache-container>
    <distributed-cache-configuration name="async-dist-cache-*" mode="ASYNC" statistics="true">
      <encoding media-type="application/x-protostream"/>
    </distributed-cache-configuration>
  </cache-container>
</infinispan>
JSON
{ "infinispan" : { "cache-container" : { "distributed-cache-configuration" : { "name" : "async-dist-cache-*", "mode": "ASYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" } } } } }
YAML
infinispan:
  cacheContainer:
    distributedCacheConfiguration:
      name: "async-dist-cache-*"
      mode: "ASYNC"
      statistics: "true"
      encoding:
        mediaType: "application/x-protostream"
Using the preceding example, if you create a cache named "async-dist-cache-prod" then Data Grid uses the configuration from the async-dist-cache-* template.
3.2.4. Cache templates from multiple XML files
Split cache configuration templates into multiple XML files for granular flexibility and reference them with XML inclusions (XInclude).
Data Grid provides minimal support for the XInclude specification. This means you cannot use the xpointer attribute, the xi:fallback element, text processing, or content negotiation.
You must also add the xmlns:xi="http://www.w3.org/2001/XInclude" namespace to infinispan.xml to use XInclude.
XInclude cache template
<infinispan xmlns:xi="http://www.w3.org/2001/XInclude">
  <cache-container default-cache="cache-1">
    <!-- References files that contain cache configuration templates. -->
    <xi:include href="distributed-cache-template.xml" />
    <xi:include href="replicated-cache-template.xml" />
  </cache-container>
</infinispan>
Data Grid also provides an infinispan-config-fragment-14.0.xsd schema that you can use with configuration fragments.
Configuration fragment schema
<local-cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-fragment-14.0.xsd"
             xmlns="urn:infinispan:config:14.0"
             name="mycache"/>
3.3. Creating remote caches
When you create remote caches at runtime, Data Grid Server synchronizes your configuration across the cluster so that all nodes have a copy. For this reason you should always create remote caches dynamically with the following mechanisms:
- Data Grid Console
- Data Grid Command Line Interface (CLI)
- Hot Rod or HTTP clients
3.3.1. Default Cache Manager
Data Grid Server provides a default Cache Manager that controls the lifecycle of remote caches. Starting Data Grid Server automatically instantiates the Cache Manager so you can create and delete remote caches and other resources like Protobuf schema.
After you start Data Grid Server and add user credentials, you can view details about the Cache Manager and get cluster information from Data Grid Console.
- Open 127.0.0.1:11222 in any browser.

You can also get information about the Cache Manager through the Command Line Interface (CLI) or REST API:
- CLI
  Run the describe command in the default container.
  [//containers/default]> describe
- REST
  Open 127.0.0.1:11222/rest/v2/cache-managers/default/ in any browser.
Default Cache Manager configuration
XML
<infinispan>
  <!-- Creates a Cache Manager named "default" and enables metrics. -->
  <cache-container name="default" statistics="true">
    <!-- Adds cluster transport that uses the default JGroups TCP stack. -->
    <transport cluster="${infinispan.cluster.name:cluster}"
               stack="${infinispan.cluster.stack:tcp}"
               node-name="${infinispan.node.name:}"/>
    <!-- Requires user permission to access caches and perform operations. -->
    <security>
      <authorization/>
    </security>
  </cache-container>
</infinispan>
JSON
{ "infinispan" : { "jgroups" : { "transport" : "org.infinispan.remoting.transport.jgroups.JGroupsTransport" }, "cache-container" : { "name" : "default", "statistics" : "true", "transport" : { "cluster" : "cluster", "node-name" : "", "stack" : "tcp" }, "security" : { "authorization" : {} } } } }
YAML
infinispan:
  jgroups:
    transport: "org.infinispan.remoting.transport.jgroups.JGroupsTransport"
  cacheContainer:
    name: "default"
    statistics: "true"
    transport:
      cluster: "cluster"
      nodeName: ""
      stack: "tcp"
    security:
      authorization: ~
3.3.2. Creating caches with Data Grid Console
Use Data Grid Console to create remote caches in an intuitive visual interface from any web browser.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Open 127.0.0.1:11222/console/ in any browser.
- Select Create Cache and follow the steps as Data Grid Console guides you through the process.
3.3.3. Creating remote caches with the Data Grid CLI
Use the Data Grid Command Line Interface (CLI) to add remote caches on Data Grid Server.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Start the CLI.
  bin/cli.sh
- Run the connect command and enter your username and password when prompted.
- Use the create cache command to create remote caches.
  For example, create a cache named "mycache" from a file named mycache.xml as follows:
  create cache --file=mycache.xml mycache
Verification
- List all remote caches with the ls command.
  ls caches
  mycache
- View cache configuration with the describe command.
  describe caches/mycache
3.3.4. Creating remote caches from Hot Rod clients
Use the Data Grid Hot Rod API to create remote caches on Data Grid Server from Java, C++, .NET/C#, JS clients and more.
This procedure shows you how to use Hot Rod Java clients that create remote caches on first access. You can find code examples for other Hot Rod clients in the Data Grid Tutorials.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Invoke the remoteCache() method as part of your ConfigurationBuilder.
- Set the configuration or configuration_uri properties in the hotrod-client.properties file on your classpath.
ConfigurationBuilder
File file = new File("path/to/infinispan.xml");
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.remoteCache("another-cache")
       .configuration("<distributed-cache name=\"another-cache\"/>");
builder.remoteCache("my.other.cache")
       .configurationURI(file.toURI());
hotrod-client.properties
infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\"another-cache\"/>
infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml
If the name of your remote cache contains the . character, you must enclose it in square brackets when using hotrod-client.properties files.
3.3.5. Creating remote caches with the REST API
Use the Data Grid REST API to create remote caches on Data Grid Server from any suitable HTTP client.
Prerequisites
- Create a Data Grid user with admin permissions.
- Start at least one Data Grid Server instance.
- Have a Data Grid cache configuration.
Procedure
- Invoke POST requests to /rest/v2/caches/<cache_name> with cache configuration in the payload.
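For illustration, the following is a minimal sketch that uses the JDK HTTP client to create a distributed cache named "mycache". The configuration payload and cache name are examples, and the request assumes that authentication is handled separately (for instance, with a java.net.Authenticator or a development server without security enabled).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateRemoteCache {
   public static void main(String[] args) throws Exception {
      // Example cache configuration to send as the request payload.
      String config = "{\"distributed-cache\":{\"mode\":\"SYNC\"}}";

      HttpClient client = HttpClient.newHttpClient();

      // POST the configuration to /rest/v2/caches/<cache_name>.
      HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://127.0.0.1:11222/rest/v2/caches/mycache"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();

      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println("HTTP status: " + response.statusCode());
   }
}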
3.4. Creating embedded caches
Data Grid provides an EmbeddedCacheManager API that lets you control both the Cache Manager and embedded cache lifecycles programmatically.
3.4.1. Adding Data Grid to your project
Add Data Grid to your project to create embedded caches in your applications.
Prerequisites
- Configure your project to get Data Grid artifacts from the Maven repository.
Procedure
- Add the infinispan-core artifact as a dependency in your pom.xml as follows:
<dependencies>
  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
  </dependency>
</dependencies>
3.4.2. Creating and using embedded caches
Data Grid provides a GlobalConfigurationBuilder API that controls the Cache Manager and a ConfigurationBuilder API that configures caches.
Prerequisites
- Add the infinispan-core artifact as a dependency in your pom.xml.
Procedure
- Initialize a CacheManager.
  Note: You must always call the cacheManager.start() method to initialize a CacheManager before you can create caches. Default constructors do this for you but there are overloaded versions of the constructors that do not. Cache Managers are also heavyweight objects and Data Grid recommends instantiating only one instance per JVM.
- Use the ConfigurationBuilder API to define cache configuration.
- Obtain caches with getCache(), createCache(), or getOrCreateCache() methods.
  Data Grid recommends using the getOrCreateCache() method because it either creates a cache on all nodes or returns an existing cache.
- If necessary, use the PERMANENT flag for caches to survive restarts.
- Stop the CacheManager by calling the cacheManager.stop() method to release JVM resources and gracefully shut down any caches.
// Set up a clustered Cache Manager.
GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
// Initialize the default Cache Manager.
DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
// Create a distributed cache with synchronous replication.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.DIST_SYNC);
// Obtain a volatile cache.
Cache<String, String> cache = cacheManager.administration()
      .withFlags(CacheContainerAdmin.AdminFlag.VOLATILE)
      .getOrCreateCache("myCache", builder.build());
// Stop the Cache Manager.
cacheManager.stop();
getCache() method
Invoke the getCache(String) method to obtain caches, as follows:
Cache<String, String> myCache = manager.getCache("myCache");
The preceding operation creates a cache named myCache, if it does not already exist, and returns it.
Using the getCache() method creates the cache only on the node where you invoke the method. In other words, it performs a local operation that must be invoked on each node across the cluster. Typically, applications deployed across multiple nodes obtain caches during initialization to ensure that caches are symmetric and exist on each node.
createCache() method
Invoke the createCache() method to create caches dynamically across the entire cluster.
Cache<String, String> myCache = manager.administration().createCache("myCache", "myTemplate");
The preceding operation also automatically creates caches on any nodes that subsequently join the cluster.
Caches that you create with the createCache() method are ephemeral by default. If the entire cluster shuts down, the cache is not automatically created again when it restarts.
PERMANENT flag
Use the PERMANENT flag to ensure that caches can survive restarts.
Cache<String, String> myCache = manager.administration().withFlags(AdminFlag.PERMANENT).createCache("myCache", "myTemplate");
For the PERMANENT flag to take effect, you must enable global state and set a configuration storage provider.
For more information about configuration storage providers, see GlobalStateConfigurationBuilder#configurationStorage().
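For example, the following is a minimal sketch, not the only possible configuration, of enabling global state with an overlay configuration storage provider when building the global configuration. The persistent location path is illustrative.

// Enable global state so that PERMANENT caches survive restarts.
GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
global.globalState()
      .enable()
      // Directory where Data Grid persists state; the path is an example.
      .persistentLocation("/path/to/data")
      // Store runtime cache configurations in the persistent location.
      .configurationStorage(ConfigurationStorage.OVERLAY);
DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());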
3.4.3. Cache API
Data Grid provides a Cache interface that exposes simple methods for adding, retrieving and removing entries, including the atomic mechanisms exposed by the JDK’s ConcurrentMap interface. Based on the cache mode used, invoking these methods can trigger a number of things to happen, such as replicating an entry to a remote node, looking up an entry from a remote node, or reading an entry from a cache store.
For simple usage, using the Cache API should be no different from using the JDK Map API, and hence migrating from simple in-memory caches based on a Map to Data Grid’s Cache should be trivial.
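As a simple illustration of this Map-like usage, the following sketch assumes an existing cacheManager and a cache named "myCache"; the keys and values are examples.

// Obtain a cache and use it like a ConcurrentMap.
Cache<String, String> cache = cacheManager.getCache("myCache");

cache.put("key", "value");           // add an entry
String value = cache.get("key");     // retrieve the entry
cache.putIfAbsent("key", "other");   // atomic ConcurrentMap-style operation
cache.remove("key");                 // remove the entry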
Performance Concerns of Certain Map Methods
Certain methods exposed in Map have performance consequences when used with Data Grid, such as size(), values(), keySet() and entrySet(). Specific methods on the keySet, values and entrySet collections are fine for use; see their Javadoc for further details.
Attempting to perform these operations globally would have a large performance impact as well as become a scalability bottleneck. As such, these methods should be used for informational or debugging purposes only.
Note that using certain flags with the withFlags() method can mitigate some of these concerns; check each method’s documentation for more details.
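As one illustration of such mitigation, the following hedged sketch restricts size() to entries held on the local node with Flag.CACHE_MODE_LOCAL instead of querying the whole cluster; whether that trade-off is acceptable depends on your use case, so confirm the behavior in the Flag Javadoc.

// Count only the entries held on the local node rather than across the whole cluster.
int localSize = cache.getAdvancedCache()
      .withFlags(Flag.CACHE_MODE_LOCAL)
      .size();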
Mortal and Immortal Data
Further to simply storing entries, Data Grid’s cache API allows you to attach mortality information to data. For example, simply using put(key, value) would create an immortal entry, i.e., an entry that lives in the cache forever, until it is removed (or evicted from memory to prevent running out of memory). If, however, you put data in the cache using put(key, value, lifespan, timeunit), this creates a mortal entry, i.e., an entry that has a fixed lifespan and expires after that lifespan.
In addition to lifespan, Data Grid also supports maxIdle as an additional metric with which to determine expiration. Any combination of lifespans or maxIdles can be used.
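For instance, a brief sketch of immortal and mortal entries; the keys, values, and expiration times are illustrative.

// Immortal entry: stays in the cache until it is removed or evicted.
cache.put("k1", "v1");

// Mortal entry: expires 60 seconds after it is stored.
cache.put("k2", "v2", 60, TimeUnit.SECONDS);

// Mortal entry: expires after 60 seconds, or after 10 seconds without being accessed.
cache.put("k3", "v3", 60, TimeUnit.SECONDS, 10, TimeUnit.SECONDS);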
putForExternalRead operation
Data Grid’s Cache class contains a different 'put' operation called putForExternalRead. This operation is particularly useful when Data Grid is used as a temporary cache for data that is persisted elsewhere. Under heavy read scenarios, contention in the cache should not delay the real transactions at hand, since caching should just be an optimization and not something that gets in the way.
To achieve this, putForExternalRead() acts as a put call that only operates if the key is not present in the cache, and fails fast and silently if another thread is trying to store the same key at the same time. In this particular scenario, caching data is a way to optimise the system and it’s not desirable that a failure in caching affects the on-going transaction, which is why failure is handled differently. putForExternalRead() is considered to be a fast operation because regardless of whether it’s successful or not, it doesn’t wait for any locks, and so returns to the caller promptly.
To understand how to use this operation, let’s look at a basic example. Imagine a cache of Person instances, each keyed by a PersonId, whose data originates in a separate data store. The following code shows the most common pattern of using putForExternalRead within the context of this example:
// Id of the person to look up, provided by the application
PersonId id = ...;

// Get a reference to the cache where person instances will be stored
Cache<PersonId, Person> cache = ...;

// First, check whether the cache contains the person instance
// associated with the given id
Person cachedPerson = cache.get(id);

if (cachedPerson == null) {
   // The person is not cached yet, so query the data store with the id
   Person person = dataStore.lookup(id);

   // Cache the person along with the id so that future requests can
   // retrieve it from memory rather than going to the data store
   cache.putForExternalRead(id, person);
} else {
   // The person was found in the cache, so return it to the application
   return cachedPerson;
}
Note that putForExternalRead should never be used as a mechanism to update the cache with a new Person instance originating from application execution (i.e. from a transaction that modifies a Person’s address). When updating cached values, use the standard put operation, otherwise you risk caching corrupt data.
3.4.3.1. AdvancedCache API
In addition to the simple Cache interface, Data Grid offers an AdvancedCache interface, geared towards extension authors. The AdvancedCache offers the ability to access certain internal components and to apply flags to alter the default behavior of certain cache methods. The following code snippet depicts how an AdvancedCache can be obtained:
AdvancedCache advancedCache = cache.getAdvancedCache();
3.4.3.1.1. Flags
Flags are applied to regular cache methods to alter their behavior. For a list of all available flags, and their effects, see the Flag enumeration. Flags are applied using AdvancedCache.withFlags(). This builder method can be used to apply any number of flags to a cache invocation, for example:
advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING)
   .withFlags(Flag.FORCE_SYNCHRONOUS)
   .put("hello", "world");
3.4.3.2. Asynchronous API
In addition to synchronous API methods like Cache.put(), Cache.remove(), etc., Data Grid also has an asynchronous, non-blocking API where you can achieve the same results without blocking.
These methods are named in a similar fashion to their blocking counterparts, with "Async" appended. E.g., Cache.putAsync(), Cache.removeAsync(), etc. These asynchronous counterparts return a CompletableFuture that contains the actual result of the operation.
For example, in a cache parameterized as Cache<String, String>, Cache.put(String key, String value) returns String while Cache.putAsync(String key, String value) returns CompletableFuture<String>.
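Because the asynchronous methods return a CompletableFuture, you can also attach a callback instead of blocking for the result. A minimal sketch, with an example key and value:

// Attach a callback instead of blocking for the previous value.
cache.putAsync("key", "value")
     .thenAccept(previous -> System.out.println("Previous value: " + previous));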
3.4.3.2.1. Why use such an API?
Non-blocking APIs are powerful in that they provide all of the guarantees of synchronous communications, including the ability to handle communication failures and exceptions, with the ease of not having to block until a call completes. This allows you to better harness parallelism in your system. For example:
Set<CompletableFuture<?>> futures = new HashSet<>();
futures.add(cache.putAsync(key1, value1)); // does not block
futures.add(cache.putAsync(key2, value2)); // does not block
futures.add(cache.putAsync(key3, value3)); // does not block

// the remote calls for the 3 puts will effectively be executed
// in parallel, particularly useful if running in distributed mode
// and the 3 keys would typically be pushed to 3 different nodes
// in the cluster

// check that the puts completed successfully
for (CompletableFuture<?> f : futures) f.get();
3.4.3.2.2. Which processes actually happen asynchronously?
There are 4 things in Data Grid that can be considered to be on the critical path of a typical write operation. These are, in order of cost:
- network calls
- marshalling
- writing to a cache store (optional)
- locking
Using the async methods will take the network calls and marshalling off the critical path. For various technical reasons, writing to a cache store and acquiring locks, however, still happens in the caller’s thread.