Chapter 3. Migrating data between cache stores

Data Grid provides a Java utility for migrating persisted data between cache stores.

When you upgrade Data Grid, functional differences between major versions mean that cache stores are not backwards compatible. You can use StoreMigrator to convert your data so that it is compatible with the target version.

For example, upgrading to Data Grid 8.0 changes the default marshaller to ProtoStream. Cache stores from previous Data Grid versions use a binary format that is not compatible with this change to marshalling, which means that Data Grid 8.0 cannot read cache stores created with previous Data Grid versions.

In other cases, Data Grid versions deprecate or remove cache store implementations, such as the JDBC Mixed and Binary stores. In these cases you can use StoreMigrator to convert your data to a different cache store implementation.

3.1. Cache store migrator

Data Grid provides the StoreMigrator.java utility that recreates data for the latest Data Grid cache store implementations.

StoreMigrator takes a cache store from a previous version of Data Grid as the source and uses a current cache store implementation as the target.

When you run StoreMigrator, it uses the EmbeddedCacheManager interface to create the target cache with the cache store type that you define. StoreMigrator then loads entries from the source store into memory and puts them into the target cache.

StoreMigrator also lets you migrate data from one type of cache store to another. For example, you can migrate from a JDBC string-based cache store to a RocksDB cache store.
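
For instance, a conversion from a JDBC string-based cache store to a RocksDB cache store could be described with a migrator.properties file along the following lines. This is a minimal sketch with placeholder values; the individual source.* and target.* properties are explained later in this chapter.

    # Hypothetical example: convert a JDBC string-based store to a RocksDB store
    source.type=JDBC_STRING
    source.cache_name=myCache
    source.dialect=POSTGRES
    source.version=<version>
    source.connection_pool.connection_url=<jdbc-connection-url>
    source.connection_pool.driver_class=<jdbc-driver-class>
    source.connection_pool.username=<username>
    source.connection_pool.password=<password>
    source.table.string.table_name_prefix=tablePrefix
    source.table.string.id.name=id_column
    source.table.string.data.name=datum_column
    source.table.string.timestamp.name=timestamp_column
    source.table.string.id.type=VARCHAR
    source.table.string.data.type=bytea
    source.table.string.timestamp.type=BIGINT

    target.type=ROCKSDB
    target.cache_name=myCache
    target.location=/path/to/rocksdb/database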

Important

StoreMigrator cannot migrate data from segmented cache stores to:

  • Non-segmented cache stores.
  • Segmented cache stores that have a different number of segments.

3.2. Getting the cache store migrator

StoreMigrator is available as part of the Data Grid tools library, infinispan-tools, and is included in the Maven repository.

Procedure

  • Configure your pom.xml for StoreMigrator as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>org.infinispan.example</groupId>
        <artifactId>jdbc-migrator-example</artifactId>
        <version>1.0-SNAPSHOT</version>
    
        <dependencies>
          <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-tools</artifactId>
          </dependency>
          <!-- Additional dependencies -->
        </dependencies>
    
        <build>
          <plugins>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>exec-maven-plugin</artifactId>
              <version>1.2.1</version>
              <executions>
                <execution>
                  <goals>
                    <goal>java</goal>
                  </goals>
                </execution>
              </executions>
              <configuration>
                <mainClass>org.infinispan.tools.store.migrator.StoreMigrator</mainClass>
                <arguments>
                  <argument>path/to/migrator.properties</argument>
                </arguments>
              </configuration>
            </plugin>
          </plugins>
        </build>
    </project>
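
    With this pom.xml in place, the exec-maven-plugin configuration shown above runs StoreMigrator with the path to your migrator.properties file as its program argument, so you can start the migration with a single Maven command, as described in Migrating Data Grid cache stores at the end of this chapter:

    mvn exec:java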

3.3. Configuring the cache store migrator

Set properties for source and target cache stores in a migrator.properties file.

Procedure

  1. Create a migrator.properties file.
  2. Configure the source cache store in migrator.properties.

    1. Prepend all configuration properties with source. as in the following example:

      source.type=SOFT_INDEX_FILE_STORE
      source.cache_name=myCache
      source.location=/path/to/source/sifs
      source.version=<version>
  3. Configure the target cache store in migrator.properties.

    1. Prepend all configuration properties with target. as in the following example:

      target.type=SINGLE_FILE_STORE
      target.cache_name=myCache
      target.location=/path/to/target/sfs.dat
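
Taken together, the source and target settings from the preceding steps belong in the same migrator.properties file. For example, the two snippets above combine into the following file, which migrates myCache from a Soft-Index File store to a Single File store (paths and <version> are placeholders):

    source.type=SOFT_INDEX_FILE_STORE
    source.cache_name=myCache
    source.location=/path/to/source/sifs
    source.version=<version>

    target.type=SINGLE_FILE_STORE
    target.cache_name=myCache
    target.location=/path/to/target/sfs.dat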

3.3.1. Configuration properties for the cache store migrator

Configure source and target cache stores in the StoreMigrator properties file.

Table 3.1. Cache Store Type Property
Property | Description | Required/Optional
type | Specifies the type of cache store for a source or target. Valid values are JDBC_STRING, JDBC_BINARY, JDBC_MIXED, LEVELDB, ROCKSDB, SINGLE_FILE_STORE, and SOFT_INDEX_FILE_STORE. | Required

Table 3.2. Common Properties
Property | Description | Example Value | Required/Optional
cache_name | Names the cache that the store backs. | .cache_name=myCache | Required
segment_count | Specifies the number of segments for target cache stores that can use segmentation. | .segment_count=256 | Optional

The number of segments must match clustering.hash.numSegments in the Data Grid configuration. In other words, the number of segments for a cache store must match the number of segments for the corresponding cache. If the number of segments is not the same, Data Grid cannot read data from the cache store.
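
For example, if the target cache uses 256 segments, a segmented target store such as a RocksDB store might be configured as in the following sketch; adjust the value to match clustering.hash.numSegments for your cache:

    target.type=ROCKSDB
    target.cache_name=myCache
    target.location=/path/to/rocksdb/database
    target.segment_count=256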

Table 3.3. JDBC Properties
Property | Description | Required/Optional
dialect | Specifies the dialect of the underlying database. | Required
version | Specifies the marshaller version for source cache stores. Set one of the following values: 8 for Data Grid 7.2.x, 9 for Data Grid 7.3.x, 10 for Data Grid 8.0.x, 11 for Data Grid 8.1.x, 12 for Data Grid 8.2.x, 13 for Data Grid 8.3.x. | Required for source stores only
marshaller.class | Specifies a custom marshaller class. | Required if using custom marshallers
marshaller.externalizers | Specifies a comma-separated list of custom AdvancedExternalizer implementations to load in this format: [id]:<Externalizer class> | Optional
connection_pool.connection_url | Specifies the JDBC connection URL. | Required
connection_pool.driver_class | Specifies the class of the JDBC driver. | Required
connection_pool.username | Specifies a database username. | Required
connection_pool.password | Specifies a password for the database username. | Required
db.major_version | Sets the database major version. | Optional
db.minor_version | Sets the database minor version. | Optional
db.disable_upsert | Disables database upsert. | Optional
db.disable_indexing | Specifies if table indexes are created. | Optional
table.string.table_name_prefix | Specifies additional prefixes for the table name. | Optional
table.string.<id|data|timestamp>.name | Specifies the column name. | Required
table.string.<id|data|timestamp>.type | Specifies the column type. | Required
key_to_string_mapper | Specifies the TwoWayKey2StringMapper class. | Optional

Note

To migrate from Binary cache stores in older Data Grid versions, change table.string.* to table.binary.* in the following properties:

  • source.table.binary.table_name_prefix
  • source.table.binary.<id|data|timestamp>.name
  • source.table.binary.<id|data|timestamp>.type

# Example configuration for migrating to a JDBC String-Based cache store
target.type=JDBC_STRING
target.cache_name=myCache
target.dialect=POSTGRES
target.marshaller.class=org.example.CustomMarshaller
target.marshaller.externalizers=25:Externalizer1,org.example.Externalizer2
target.connection_pool.connection_url=jdbc:postgresql:postgres
target.connection_pool.driver_class=org.postgresql.Driver
target.connection_pool.username=postgres
target.connection_pool.password=redhat
target.db.major_version=9
target.db.minor_version=5
target.db.disable_upsert=false
target.db.disable_indexing=false
target.table.string.table_name_prefix=tablePrefix
target.table.string.id.name=id_column
target.table.string.data.name=datum_column
target.table.string.timestamp.name=timestamp_column
target.table.string.id.type=VARCHAR
target.table.string.data.type=bytea
target.table.string.timestamp.type=BIGINT
target.key_to_string_mapper=org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper

Table 3.4. RocksDB Properties
Property | Description | Required/Optional
location | Sets the database directory. | Required
compression | Specifies the compression type to use. | Optional

# Example configuration for migrating from a RocksDB cache store.
source.type=ROCKSDB
source.cache_name=myCache
source.location=/path/to/rocksdb/database
source.compression=SNAPPY

Table 3.5. SingleFileStore Properties
Property | Description | Required/Optional
location | Sets the directory that contains the cache store .dat file. | Required

# Example configuration for migrating to a Single File cache store.
target.type=SINGLE_FILE_STORE
target.cache_name=myCache
target.location=/path/to/sfs.dat

Table 3.6. SoftIndexFileStore Properties
Property | Description | Required/Optional
location | Sets the database directory. | Required
index_location | Sets the database index directory.

# Example configuration for migrating to a Soft-Index File cache store.
target.type=SOFT_INDEX_FILE_STORE
target.cache_name=myCache
target.location=path/to/sifs/database
target.index_location=path/to/sifs/index

3.4. Migrating Data Grid cache stores

Run StoreMigrator to migrate data from one cache store to another.

Prerequisites

  • Get infinispan-tools.jar.
  • Create a migrator.properties file that configures the source and target cache stores.

Procedure

  • If you build infinispan-tools.jar from source, do the following:

    1. Add infinispan-tools.jar and dependencies for your source and target databases, such as JDBC drivers, to your classpath.
    2. Specify the migrator.properties file as an argument for StoreMigrator (see the command sketch after this procedure).
  • If you pull infinispan-tools.jar from the Maven repository, run the following command:

    mvn exec:java
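
If you run StoreMigrator from your own classpath instead of through Maven, the equivalent invocation is a plain java command that names the StoreMigrator main class and the migrator.properties file. The following is a sketch only; it assumes the tools JAR and a JDBC driver JAR sit in the current directory, so adjust the JAR names, classpath separator, and paths for your environment:

    # Hypothetical invocation: tools JAR plus database driver on the classpath
    java -cp infinispan-tools.jar:postgresql.jar \
      org.infinispan.tools.store.migrator.StoreMigrator path/to/migrator.properties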
