Chapter 2. Migrating to Data Grid 8.0


Review changes in Data Grid 8.0 that affect migration from previous releases.

2.1. Data Grid 8.0 Server

As of 8.0, Data Grid server is no longer based on Red Hat JBoss Enterprise Application Platform (EAP). It is redesigned to be lightweight and more secure, with much faster start times.

Data Grid servers use $RHDG_HOME/server/conf/infinispan.xml for configuration.

Data store configuration

You configure how Data Grid stores your data through cache definitions. By default, Data Grid servers include a Cache Manager configuration that lets you create, configure, and manage your cache definitions.

<cache-container name="default" 1
                 statistics="true"> 2
  <transport cluster="${infinispan.cluster.name}" 3
             stack="${infinispan.cluster.stack:tcp}" 4
             node-name="${infinispan.node.name:}"/>
</cache-container>
1
Creates a Cache Manager named "default".
2
Exports Cache Manager statistics through the metrics endpoint.
3
Adds a JGroups cluster transport that allows Data Grid servers to automatically discover each other and form clusters.
4
Uses the default TCP stack for cluster traffic.

In the preceding configuration, there are no cache definitions. When you start the 8.0 server, it instantiates the default Cache Manager so you can create cache definitions at runtime through the CLI, REST API, or from remote Hot Rod clients.

Note

Data Grid server no longer provides a domain mode as in previous versions that were based on EAP. However, Data Grid server provides a default configuration with clustering capabilities so your data is replicated across all nodes.
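
For example, a clustered cache definition can be added directly inside the cache-container element. The following is a sketch only; the cache name and settings are illustrative, not part of the default configuration:

```xml
<cache-container name="default" statistics="true">
  <transport cluster="${infinispan.cluster.name}"
             stack="${infinispan.cluster.stack:tcp}"/>
  <!-- Illustrative cache: distributed, synchronous, two copies of each entry. -->
  <distributed-cache name="mycache" mode="SYNC" owners="2"/>
</cache-container>
```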

Server configuration

Data Grid 8.0 extends infinispan.xml with a server element that defines configuration specific to Data Grid servers.

<server>
  <interfaces>
    <interface name="public">
      <inet-address value="${infinispan.bind.address:127.0.0.1}"/> 1
    </interface>
  </interfaces>

  <socket-bindings default-interface="public"
                   port-offset="${infinispan.socket.binding.port-offset:0}">
    <socket-binding name="default"
                    port="${infinispan.bind.port:11222}"/> 2
    <socket-binding name="memcached"
                    port="11221"/> 3
  </socket-bindings>

  <security>
     <security-realms>
        <security-realm name="default"> 4
           <properties-realm groups-attribute="Roles">
              <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/>
              <group-properties path="groups.properties" relative-to="infinispan.server.config.path" />
           </properties-realm>
        </security-realm>
     </security-realms>
  </security>

  <endpoints socket-binding="default" security-realm="default"> 5
     <hotrod-connector name="hotrod"/>
     <rest-connector name="rest"/>
  </endpoints>
</server>
1
Creates a default public interface that uses the 127.0.0.1 loopback address.
2
Creates a default socket binding that binds the public interface to port 11222.
3
Creates a socket binding for the Memcached connector. Note that the Memcached endpoint is now deprecated.
4
Defines a default security realm that uses property files to define credentials and RBAC settings.
5
Exposes the Hot Rod and REST endpoints at 127.0.0.1:11222.
Important

The REST endpoint handles administrative operations that the Data Grid command line interface (CLI) and console use. For this reason, you should never disable the REST endpoint.

Table 2.1. Cheat Sheet

7.x                                              8.x

./standalone.sh -c clustered.xml                 ./server.sh

./standalone.sh                                  ./server.sh -c infinispan-local.xml

-Djboss.default.multicast.address=234.99.54.20   -Djgroups.mcast_addr=234.99.54.20

-Djboss.bind.address=172.18.1.13                 -Djgroups.bind.address=172.18.1.13

-Djboss.default.jgroups.stack=udp                -j udp

  • Use custom UDP/TCP addresses as follows:

    -Djgroups.udp.address=172.18.1.13
    -Djgroups.tcp.address=172.18.1.1

  • Enable JMX as follows:

    <cache-container name="default"
                     statistics="true"> 1
      <jmx enabled="true" /> 2
      ...
    1
    Enables statistics for the Cache Manager. This is the default.
    2
    Exports JMX MBeans.

2.2. Data Grid Caches

Except for the Cache service on OpenShift, Data Grid provides empty cache containers by default. When you start Data Grid 8.0, it instantiates a Cache Manager so you can create caches at runtime.

In Data Grid 8.0, cache definitions that you create through the CacheContainerAdmin API are permanent to ensure that they survive cluster restarts.

cacheManager.administration()
   .withFlags(AdminFlag.VOLATILE) 1
   .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC"); 2
1
Includes the VOLATILE flag that changes the default behavior and creates temporary caches.
2
Returns a cache named "myTemporaryCache" or creates one using the DIST_SYNC configuration template.
Note

AdminFlag.PERMANENT is enabled by default to ensure that cache definitions survive restarts. You must separately add persistent storage to Data Grid for data to survive restarts, for example:

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
   .addSingleFileStore()
   .location("/tmp/myDataStore")
   .maxEntries(5000);
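
A declarative sketch of the same store in cache configuration XML follows; the path is illustrative and the attribute names assume the Data Grid 8.0 configuration schema:

```xml
<persistence>
  <!-- Single file store; path and max-entries mirror the programmatic example. -->
  <file-store path="/tmp/myDataStore" max-entries="5000"/>
</persistence>
```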

Cache Configuration Templates

Get the list of cache configuration templates as follows:

  • Use Tab auto-completion with the CLI:

    [//containers/default]> create cache --template=
  • Use the REST API:

    GET 127.0.0.1:11222/rest/v2/cache-managers/default/cache-configs/templates

2.3. Creating Caches

Add cache definitions to Data Grid to configure how it stores your data.

Library Mode

The following example initializes the Cache Manager and creates a cache definition named "myDistributedCache" that uses the distributed, synchronous cache mode:

GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
      DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.clustering().cacheMode(CacheMode.DIST_SYNC);
cacheManager.defineConfiguration("myDistributedCache", builder.build());

You can also use the getOrCreate() method to create your cache definition or return it if it already exists, for example:

cacheManager.administration().getOrCreateCache("myDistributedCache", builder.build());

Data Grid Server

Remotely create caches at runtime as follows:

  • Use the CLI.

    To create a cache named "myDistributedCache" with the DIST_SYNC cache template, run the following:

    [//containers/default]> create cache --template=org.infinispan.DIST_SYNC name=myDistributedCache
  • Use the REST API.

    To create a cache named "myCache", use the following POST invocation and include the cache definition in the request payload in XML or JSON format:

    POST /rest/v2/caches/myCache
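
    The request payload can be a cache definition such as the following. This is a sketch, assuming an XML body with an illustrative distributed cache; REST API v2 also accepts JSON:

    ```xml
    <!-- Illustrative cache definition for the POST request body. -->
    <distributed-cache name="myCache" mode="SYNC"/>
    ```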
  • Use Hot Rod clients.

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
    import org.infinispan.commons.api.CacheContainerAdmin.AdminFlag;
    
    public class CacheCreator {
    
        private final RemoteCacheManager manager;
    
        public CacheCreator() {
            // Create a configuration for a locally running server.
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);
    
            manager = new RemoteCacheManager(builder.build());
        }
    
        private void createTemporaryCacheWithTemplate() {
            manager.administration()
                   //Override the default and create a volatile cache that
                   //does not survive cluster restarts.
                   .withFlags(AdminFlag.VOLATILE)
                   //Create a cache named myTemporaryCache that uses the
                   //distributed, synchronous cache template
                   //or return it if it already exists.
                   .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC");
        }
    }

For more examples of creating caches with a Hot Rod Java client, see the Data Grid tutorials.

2.4. Cache Health Status

Data Grid now returns one of the following for cache health:

HEALTHY means a cache is operating as expected.
HEALTHY_REBALANCING means a cache is in the rebalancing state but otherwise operating as expected.
DEGRADED indicates a cache is not operating as expected and possibly requires troubleshooting.

2.5. Marshalling Capabilities

As of this release, the default marshaller for Data Grid is ProtoStream, which marshals data as Protocol Buffers, a language-neutral, backwards-compatible format.

To use ProtoStream, Data Grid requires serialization contexts that contain:

  • .proto schemas that provide a structured representation of your Java objects as Protobuf message types.
  • Marshaller implementations to encode your Java objects to Protobuf format.

Data Grid provides direct integration with ProtoStream libraries and can generate everything you need to initialize serialization contexts.

Important

Cache stores in previous versions of Data Grid store data in a binary format that is not compatible with ProtoStream marshallers. You must use the StoreMigrator utility to migrate your data.

  • Data Grid Library Mode does not include JBoss Marshalling by default. You must add the infinispan-jboss-marshalling dependency to your classpath.
  • Data Grid servers do support JBoss Marshalling but clients must declare the marshaller to use, as in the following Hot Rod client configuration:

    .marshaller("org.infinispan.jboss.marshalling.core.JBossUserMarshaller");
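
    The same setting can also go in the client's hotrod-client.properties file, for example:

    ```properties
    # Declares JBoss Marshalling as the Hot Rod client marshaller.
    infinispan.client.hotrod.marshaller=org.infinispan.jboss.marshalling.core.JBossUserMarshaller
    ```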

  • Spring integration does not yet support the default ProtoStream marshaller. For this reason you should use the Java Serialization Marshaller.
  • To use the Java Serialization Marshaller, you must add classes to the deserialization whitelist.
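
A declarative sketch of the marshaller and whitelist in the cache container serialization configuration follows; the class and regex values are illustrative:

```xml
<serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
  <!-- Classes the Java Serialization Marshaller may deserialize; values are illustrative. -->
  <white-list>
    <class>org.example.MyEntity</class>
    <regex>org.example.model.*</regex>
  </white-list>
</serialization>
```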

2.6. Data Grid Configuration

New and Modified Elements and Attributes

  • stack adds support for inline JGroups stack definitions.
  • stack.combine and stack.position attributes let you override and modify JGroups stack definitions.
  • metrics lets you configure how Data Grid exports metrics that are compatible with the Eclipse MicroProfile Metrics API.
  • context-initializer lets you specify a SerializationContextInitializer implementation that initializes a ProtoStream-based marshaller for user types.
  • key-transformers lets you register transformers that convert custom keys to String for indexing with Lucene.
  • statistics now defaults to "false".
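
As an example of the new elements, a context-initializer declaration might look like the following; the implementation class name is hypothetical:

```xml
<serialization>
  <!-- org.example.LibraryInitializer is a hypothetical SerializationContextInitializer implementation. -->
  <context-initializer class="org.example.LibraryInitializer"/>
</serialization>
```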

Deprecated Elements and Attributes

The following elements and attributes are now deprecated:

  • address-count attribute for the off-heap element.
  • protocol attribute for the transaction element.
  • duplicate-domains attribute for the jmx element.
  • advanced-externalizer
  • custom-interceptors
  • state-transfer-executor
  • transaction-protocol
Note

Refer to the Configuration Schema for possible replacements or alternatives.

Removed Elements and Attributes

The following elements and attributes were deprecated in a previous release and are now removed:

  • deadlock-detection-spin
  • compatibility
  • write-skew
  • versioning
  • data-container
  • eviction
  • eviction-thread-policy

2.7. Persistence

In comparison with some previous versions of Data Grid, such as 7.1, there are changes to cache store configurations. Cache store definitions must:

  • Be contained within persistence elements.
  • Include an xmlns namespace.

As of this release, cache store configuration:

  • Defaults to segmented="true" if the cache store implementation supports segmentation.
  • Removes the singleton attribute for the store element. Use shared="true" instead.

JDBC String-Based cache stores use connection factories based on Agroal to connect to databases. It is no longer possible to use c3p0.properties and hikari.properties files.

Likewise, JDBC String-Based cache store configurations that use segmentation, which is now the default, must include the segmentColumnName and segmentColumnType parameters.

MySQL Example

builder.table()
       .tableNamePrefix("ISPN")
       .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)")
       .dataColumnName("DATA_COLUMN").dataColumnType("VARBINARY(1000)")
       .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT")
       .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER");

PostgreSQL Example

builder.table()
       .tableNamePrefix("ISPN")
       .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)")
       .dataColumnName("DATA_COLUMN").dataColumnType("BYTEA")
       .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT")
       .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER");
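
A declarative sketch of the same segmented JDBC String-Based store follows; the schema version in the namespace is an assumption for Data Grid 8.0:

```xml
<persistence>
  <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:10.0">
    <string-keyed-table prefix="ISPN">
      <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
      <data-column name="DATA_COLUMN" type="VARBINARY(1000)"/>
      <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
      <!-- Required when segmentation is enabled (the default). -->
      <segment-column name="SEGMENT_COLUMN" type="INTEGER"/>
    </string-keyed-table>
  </string-keyed-jdbc-store>
</persistence>
```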

2.8. REST API

Previous versions of Data Grid used REST API v1, which is now replaced by REST API v2.

The default context path is now 127.0.0.1:11222/rest/v2/. You must update any clients or scripts to use REST API v2.
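
For example, a typical key operation changes as follows; the cache and key names are illustrative:

```
# REST API v1 (removed)
GET /rest/myCache/myKey

# REST API v2
GET /rest/v2/caches/myCache/myKey
```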

2.9. Hot Rod Client Authentication

Hot Rod clients now use SCRAM-SHA-512 as the default authentication mechanism instead of DIGEST-MD5.

Note

If you use property security realms, you must use the PLAIN authentication mechanism.
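
For example, a Hot Rod client that authenticates against a property realm might set the following in hotrod-client.properties; the credentials are illustrative:

```properties
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.auth_realm=default
# Property realms require the PLAIN mechanism.
infinispan.client.hotrod.sasl_mechanism=PLAIN
```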

2.10. Java Distributions Available in Maven

Data Grid no longer provides Java artifacts outside the Maven repository, with the exception of the Data Grid server distribution. For information on adding required dependencies for the Data Grid Library, Hot Rod Java client, and utilities such as StoreMigrator, see the relevant documentation.
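
For example, the Hot Rod Java client is available as a Maven dependency; the version shown is illustrative, so use the version that matches your Data Grid release:

```xml
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-client-hotrod</artifactId>
  <!-- Illustrative version; use the version for your Data Grid 8.0 release. -->
  <version>10.0.0.Final-redhat-00001</version>
</dependency>
```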

2.11. Red Hat JBoss Enterprise Application Platform (EAP) Modules

Data Grid no longer provides modules for applications running on EAP. Instead, EAP will provide direct integration with Data Grid in a future release.

However, until EAP provides functionality for handling the infinispan subsystem, you must package Data Grid 8.0 artifacts in your EAP deployments.
