Chapter 6. Configuring persistent storage


Data Grid uses cache stores and loaders to interact with persistent storage.

Durability
Adding cache stores allows you to persist data to non-volatile storage so it survives restarts.
Write-through caching
Configuring Data Grid as a caching layer in front of persistent storage simplifies data access for applications because Data Grid handles all interactions with the external storage.
Data overflow
Using eviction and passivation techniques ensures that Data Grid keeps only frequently used data in-memory and writes older entries to persistent storage.

6.1. Passivation

Passivation configures Data Grid to write entries to cache stores when it evicts those entries from memory. In this way, passivation prevents unnecessary and potentially expensive writes to persistent storage.

Activation is the process of restoring entries to memory from the cache store when there is an attempt to access passivated entries. For this reason, when you enable passivation, you must configure cache stores that implement both CacheWriter and CacheLoader interfaces so they can write entries to, and load entries from, persistent storage.

When Data Grid evicts an entry from the cache, it notifies cache listeners that the entry is passivated and then stores the entry in the cache store. When Data Grid gets an access request for an evicted entry, it lazily loads the entry from the cache store into memory and then notifies cache listeners that the entry is activated, while keeping the value in the store.

Note
  • Passivation uses the first cache loader in the Data Grid configuration and ignores all others.
  • Passivation is not supported with:

    • Transactional stores. Passivation writes and removes entries from the store outside the scope of the actual Data Grid commit boundaries.
    • Shared stores. Shared cache stores require entries to always exist in the store for other owners. For this reason, passivation is not supported because entries cannot be removed.

If you enable passivation with transactional stores or shared stores, Data Grid throws an exception.
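
The following sketch shows one way to enable passivation programmatically, assuming a soft-index file store and an illustrative eviction limit:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.memory().maxCount(1000)          // eviction must be configured for passivation to occur
       .persistence().passivation(true)  // write entries to the store only on eviction
       .addSoftIndexFileStore()
          .shared(false)                 // passivation does not work with shared stores
          .purgeOnStartup(true);         // avoid resurrecting stale entries after restart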

6.1.1. How passivation works

Passivation disabled

Writes to data in memory result in writes to persistent storage.

If Data Grid evicts data from memory, then data in persistent storage includes entries that are evicted from memory. In this way persistent storage is a superset of the in-memory cache. This configuration is recommended when you require the highest consistency, because the store can be read again after a crash.

If you do not configure eviction, then data in persistent storage provides a copy of data in memory.

Passivation enabled

Data Grid adds data to persistent storage only when it evicts data from memory, when an entry is removed, or when the node shuts down.

When Data Grid activates entries, it restores data in memory but keeps the data in the store. This keeps writes as fast as they are without a store while still maintaining consistency. When an entry is created or updated, only the in-memory copy changes, so the store is temporarily outdated.

Note

Passivation is not supported when a store is also configured as shared. This is because entries can become out of sync between nodes, depending on when a write is evicted versus read.

To guarantee data consistency, any store that is not shared should always have purgeOnStartup enabled. This applies whether passivation is enabled or disabled, because a store could hold an outdated entry while the node is down and resurrect it at a later point.

The following table shows data in memory and in persistent storage after a series of operations:

Operation                              Passivation disabled    Passivation enabled

Insert k1.                             Memory: k1              Memory: k1
                                       Disk: k1                Disk: -

Insert k2.                             Memory: k1, k2          Memory: k1, k2
                                       Disk: k1, k2            Disk: -

Eviction thread runs and evicts k1.    Memory: k2              Memory: k2
                                       Disk: k1, k2            Disk: k1

Read k1.                               Memory: k1, k2          Memory: k1, k2
                                       Disk: k1, k2            Disk: k1

Eviction thread runs and evicts k2.    Memory: k1              Memory: k1
                                       Disk: k1, k2            Disk: k1, k2

Remove k2.                             Memory: k1              Memory: k1
                                       Disk: k1                Disk: k1

6.2. Write-through cache stores

Write-through is a cache writing mode where writes to memory and writes to cache stores are synchronous. When a client application updates a cache entry, in most cases by invoking Cache.put(), Data Grid does not return the call until it updates the cache store. This cache writing mode results in updates to the cache store concluding within the boundaries of the client thread.

The primary advantage of write-through mode is that the cache and cache store are updated simultaneously, which ensures that the cache store is always consistent with the cache.

However, write-through mode can potentially decrease performance because the need to access and update cache stores directly adds latency to cache operations.

Write-through configuration

Data Grid uses write-through mode unless you explicitly add write-behind configuration to your caches. There is no separate element or method for configuring write-through mode.

For example, the following configuration adds a file-based store to the cache that implicitly uses write-through mode:

<distributed-cache>
  <persistence passivation="false">
    <file-store>
      <index path="path/to/index" />
      <data path="path/to/data" />
    </file-store>
  </persistence>
</distributed-cache>
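
A programmatic equivalent of the same configuration, as a minimal sketch with illustrative paths:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().passivation(false)
       .addSoftIndexFileStore()          // uses write-through mode by default
          .indexLocation("path/to/index")
          .dataLocation("path/to/data");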

6.3. Write-behind cache stores

Write-behind is a cache writing mode where writes to memory are synchronous and writes to cache stores are asynchronous.

When clients send write requests, Data Grid adds those operations to a modification queue. Data Grid processes operations as they join the queue so that the calling thread is not blocked and the operation completes immediately.

If the number of write operations exceeds the size of the modification queue, Data Grid adds those additional operations to the queue. However, those operations do not complete until Data Grid processes the operations that are already in the queue.

For example, calling Cache.putAsync returns immediately and the Stage also completes immediately if the modification queue is not full. If the modification queue is full, or if Data Grid is currently processing a batch of write operations, then Cache.putAsync returns immediately and the Stage completes later.
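
For example, a minimal sketch of the asynchronous call; the key and value are illustrative:

// Returns immediately; with write-behind the stage also completes immediately
// unless the modification queue is full.
CompletableFuture<String> stage = cache.putAsync("k1", "v1");
stage.thenAccept(previous -> System.out.println("Previous value: " + previous));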

Write-behind mode provides a performance advantage over write-through mode because cache operations do not need to wait for updates to the underlying cache store to complete. However, data in the cache store remains inconsistent with data in the cache until the modification queue is processed. For this reason, write-behind mode is suitable for cache stores with low latency, such as unshared and local file-based cache stores, where the time between the write to the cache and the write to the cache store is as small as possible.

Write-behind configuration

XML

<distributed-cache>
  <persistence>
    <table-jdbc-store xmlns="urn:infinispan:config:store:sql:15.0"
                      dialect="H2"
                      shared="true"
                      table-name="books">
      <connection-pool connection-url="jdbc:h2:mem:infinispan"
                       username="sa"
                       password="changeme"
                       driver="org.h2.Driver"/>
      <write-behind modification-queue-size="2048"
                    fail-silently="true"/>
    </table-jdbc-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence" : {
      "table-jdbc-store": {
        "dialect": "H2",
        "shared": "true",
        "table-name": "books",
        "connection-pool": {
          "connection-url": "jdbc:h2:mem:infinispan",
          "driver": "org.h2.Driver",
          "username": "sa",
          "password": "changeme"
        },
        "write-behind" : {
          "modification-queue-size" : "2048",
          "fail-silently" : true
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    tableJdbcStore:
      dialect: "H2"
      shared: "true"
      tableName: "books"
      connectionPool:
        connectionUrl: "jdbc:h2:mem:infinispan"
        driver: "org.h2.Driver"
        username: "sa"
        password: "changeme"
      writeBehind:
        modificationQueueSize: "2048"
        failSilently: "true"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .async()
       .modificationQueueSize(2048)
       .failSilently(true);

Failing silently

Write-behind configuration includes a fail-silently parameter that controls what happens when either the cache store is unavailable or the modification queue is full.

  • If fail-silently="true" then Data Grid logs WARN messages and rejects write operations.
  • If fail-silently="false" then Data Grid throws exceptions if it detects the cache store is unavailable during a write operation. Likewise if the modification queue becomes full, Data Grid throws an exception.

    In some cases, data loss can occur if Data Grid restarts while write operations exist in the modification queue. For example, the cache store goes offline but, during the time it takes to detect that the cache store is unavailable, write operations are added to the modification queue because it is not full. If Data Grid restarts or otherwise becomes unavailable before the cache store comes back online, the write operations in the modification queue are lost because they were not persisted.
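
If you set fail-silently="false", your application can handle the failure itself. The following is a hedged sketch; the exact exception type depends on the store, so PersistenceException is an assumption:

try {
    cache.put("k1", "v1");
} catch (PersistenceException e) {
    // The cache store was unavailable or the modification queue was full;
    // retry the write or surface the error to the caller.
}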

6.4. Segmented cache stores

Cache stores can organize data into hash space segments to which keys map.

Segmented stores increase read performance for bulk operations; for example, streaming over data (Cache.size(), Cache.entrySet().stream()), pre-loading the cache, and doing state transfer operations.

However, segmented stores can also result in loss of performance for write operations. This performance loss applies particularly to batch write operations that can take place with transactions or write-behind stores. For this reason, you should evaluate the overhead for write operations before you enable segmented stores. The performance gain for bulk read operations might not be acceptable if there is a significant performance loss for write operations.

Important

The number of segments you configure for cache stores must match the number of segments you define in the Data Grid configuration with the clustering.hash.numSegments parameter.

If you change the numSegments parameter in the configuration after you add a segmented cache store, Data Grid cannot read data from that cache store.
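
The following sketch keeps both settings aligned; the segment count is illustrative:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.DIST_SYNC)
       .hash().numSegments(256)          // must match the segment count of the store
       .persistence()
          .addSoftIndexFileStore()
             .segmented(true);           // store segmentation follows numSegments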

6.5. Shared cache stores

Data Grid cache stores can be local to a given node or shared across all nodes in the cluster. By default, cache stores are local (shared="false").

  • Local cache stores are unique to each node; for example, a file-based cache store that persists data to the host filesystem.

    Local cache stores should use "purge on startup" to avoid loading stale entries from persistent storage.

  • Shared cache stores allow multiple nodes to use the same persistent storage; for example, a JDBC cache store that allows multiple nodes to access the same database.

    Shared cache stores ensure that only the primary owner of a key writes to persistent storage, instead of backup nodes performing write operations for every modification.

Important

Purging deletes data, which is not typically the desired behavior for persistent storage.

Local cache store

<persistence>
  <store shared="false"
         purge="true"/>
</persistence>

Shared cache store

<persistence>
  <store shared="true"
         purge="false"/>
</persistence>

6.6. Transactions with persistent cache stores

Data Grid supports transactional operations with JDBC-based cache stores only. To configure caches as transactional, you set transactional=true to keep data in persistent storage synchronized with data in memory.

For all other cache stores, Data Grid does not enlist cache loaders in transactional operations. This can result in data inconsistency if transactions succeed in modifying data in memory but do not completely apply changes to data in the cache store. In these cases manual recovery is not possible with cache stores.
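
A minimal sketch of a transactional cache with a JDBC string-based store:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction().transactionMode(TransactionMode.TRANSACTIONAL);
builder.persistence()
       .addStore(JdbcStringBasedStoreConfigurationBuilder.class)
          .transactional(true);          // keep the store in sync with cache transactions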

6.7. Global persistent location

Data Grid preserves global state so that it can restore cluster topology and cached data after restart.

Data Grid uses file locking to prevent concurrent access to the global persistent location. The lock is acquired on startup and released on node shutdown. The presence of a dangling lock file indicates that the node was not shut down cleanly, either because of a crash or external termination. In the default configuration, Data Grid refuses to start, to avoid data corruption, and logs the following message:

ISPN000693: Dangling lock file '%s' in persistent global state, probably left behind by an unclean shutdown

The behavior can be changed by configuring the global state unclean-shutdown-action setting to one of the following:

  • FAIL: Prevents startup of the cache manager if a dangling lock file is found in the persistent global state. This is the default behavior.
  • PURGE: Clears the persistent global state if a dangling lock file is found in the persistent global state.
  • IGNORE: Ignores the presence of a dangling lock file in the persistent global state.

Remote caches

Data Grid Server saves cluster state to the $RHDG_HOME/server/data directory.

Important

You should never delete or modify the server/data directory or its content. Data Grid restores cluster state from this directory when you restart your server instances.

Changing the default configuration or directly modifying the server/data directory can cause unexpected behavior and lead to data loss.

Embedded caches

Data Grid defaults to the user.dir system property as the global persistent location. In most cases this is the directory where your application starts.

For clustered embedded caches, such as replicated or distributed, you should always enable and configure a global persistent location to restore cluster topology.

You should never configure an absolute path for a file-based cache store that is outside the global persistent location. If you do, Data Grid writes the following exception to logs:

ISPN000558: "The store location 'foo' is not a child of the global persistent location 'bar'"

6.7.1. Configuring the global persistent location

Enable and configure the location where Data Grid stores global state for clustered embedded caches.

Note

Data Grid Server enables global persistence and configures a default location. You should not disable global persistence or change the default configuration for remote caches.

Prerequisites

  • Add Data Grid to your project.

Procedure

  1. Enable global state in one of the following ways:

    • Add the global-state element to your Data Grid configuration.
    • Call the globalState().enable() method in the GlobalConfigurationBuilder API.
  2. Define whether the global persistent location is unique to each node or shared between the cluster.

    Location type                 Configuration

    Unique to each node           persistent-location element or persistentLocation() method

    Shared between the cluster    shared-persistent-location element or sharedPersistentLocation(String) method

  3. Set the path where Data Grid stores cluster state.

    For example, for file-based cache stores the path is a directory on the host filesystem.

    Values can be:

    • Absolute and contain the full location including the root.
    • Relative to a root location.
  4. If you specify a relative value for the path, you must also specify a system property that resolves to a root location.

    For example, on a Linux host system you set global/state as the path. You also set the my.data property that resolves to the /opt/data root location. In this case Data Grid uses /opt/data/global/state as the global persistent location.
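
    A minimal sketch of that example; the my.data property is typically set on the JVM command line with -Dmy.data=/opt/data:

    System.setProperty("my.data", "/opt/data");

    GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
    global.globalState().enable()
          .persistentLocation("global/state", "my.data"); // resolves to /opt/data/global/state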

Global persistent location configuration

XML

<infinispan>
  <cache-container>
    <global-state>
      <persistent-location path="global/state" relative-to="my.data"/>
    </global-state>
  </cache-container>
</infinispan>

JSON

{
  "infinispan" : {
    "cache-container" : {
      "global-state": {
        "persistent-location" : {
          "path" : "global/state",
          "relative-to" : "my.data"
        }
      }
    }
  }
}

YAML

cacheContainer:
  globalState:
      persistentLocation:
        path: "global/state"
        relativeTo : "my.data"

GlobalConfigurationBuilder

new GlobalConfigurationBuilder().globalState()
                                .enable()
                                .persistentLocation("global/state", "my.data");

6.8. File-based cache stores

File-based cache stores provide persistent storage on the local host filesystem where Data Grid is running. For clustered caches, file-based cache stores are unique to each Data Grid node.

Warning

Never use filesystem-based cache stores on shared file systems, such as an NFS or Samba share, because they do not provide file locking capabilities and data corruption can occur.

Additionally if you attempt to use transactional caches with shared file systems, unrecoverable failures can happen when writing to files during the commit phase.

Soft-Index File Stores

SoftIndexFileStore is the default implementation for file-based cache stores and stores data in a set of append-only files.

When append-only files:

  • Reach their maximum size, Data Grid creates a new file and starts writing to it.
  • Drop below the compaction threshold of 50% usage, Data Grid rewrites the entries to a new file and then deletes the old file.

Note

When you use SoftIndexFileStore in a clustered cache, enable purge on startup to ensure that stale entries are not resurrected.

B+ trees

To improve performance, append-only files in a SoftIndexFileStore are indexed using a B+ Tree that can be stored both on disk and in memory. The in-memory index uses Java soft references to ensure it can be rebuilt if removed by Garbage Collection (GC) then requested again.

Because SoftIndexFileStore uses Java soft references to keep indexes in memory, it helps prevent out-of-memory exceptions: GC removes indexes before they consume too much memory, and Data Grid falls back to the on-disk index when necessary.

SoftIndexFileStore creates a B+ tree per configured cache segment. This keeps each tree small and provides additional parallelism for index updates. Currently, the number of parallel index updates is one sixteenth of the number of cache segments.

Each entry in the B+ tree is a node. By default, the size of each node is limited to 4096 bytes. SoftIndexFileStore throws an exception if a key is longer than this limit after serialization.

File limits

SoftIndexFileStore uses the configured openFilesLimit plus two additional files at any given time. The two additional file pointers are reserved for the log appender, which writes newly updated data, and for the compactor, which writes compacted entries into a new file.

The number of open files allocated for indexing is one tenth of the configured openFilesLimit, with a minimum of 1 or the number of cache segments. Any number remaining from the configured limit is allocated for open data files. For example, with openFilesLimit=100, ten file pointers go to the index and the remaining 90 to data files.
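
A hedged sketch of tuning these limits with the programmatic API; the values are illustrative:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .addSoftIndexFileStore()
          .openFilesLimit(100)           // 10 pointers go to the index, the rest to data files
          .compactionThreshold(0.5);     // compact a file when live usage drops below 50%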

Segmentation

Soft-index file stores are always segmented. The append logs are not segmented themselves; segmentation is handled by the index.

Expiration

The SoftIndexFileStore has full support for expired entries and their requirements.

Single File Cache Stores

Note

Single file cache stores are now deprecated and planned for removal.

Single File cache stores, SingleFileStore, persist data to a file. Data Grid also maintains an in-memory index of keys, while keys and values are stored in the file.

Because SingleFileStore keeps an in-memory index of keys and the location of values, it requires additional memory, depending on the key size and the number of keys. For this reason, SingleFileStore is not recommended for use cases with large keys or a large number of keys.

In some cases, SingleFileStore can also become fragmented. If the size of values continually increases, available space in the single file is not used but the entry is appended to the end of the file. Available space in the file is used only if an entry can fit within it. Likewise, if you remove all entries from memory, the single file store does not decrease in size or become defragmented.

Segmentation

Single file cache stores are segmented by default with a separate instance per segment, which results in multiple directories. Each directory is a number that represents the segment to which the data maps.

6.8.1. Configuring file-based cache stores

Add file-based cache stores to Data Grid to persist data on the host filesystem.

Prerequisites

  • Enable global state and configure a global persistent location if you are configuring embedded caches.

Procedure

  1. Add the persistence element to your cache configuration.
  2. Optionally specify true as the value for the passivation attribute to write to the file-based cache store only when data is evicted from memory.
  3. Include the file-store element and configure attributes as appropriate.
  4. Specify false as the value for the shared attribute.

    File-based cache stores should always be unique to each Data Grid instance. If you want to use the same persistent storage across a cluster, configure shared storage such as a JDBC string-based cache store.

  5. Configure the index and data elements to specify the location where Data Grid creates indexes and stores data.
  6. Include the write-behind element if you want to configure the cache store with write-behind mode.

File-based cache store configuration

XML

<distributed-cache>
  <persistence passivation="true">
     <file-store shared="false">
        <data path="data"/>
        <index path="index"/>
        <write-behind modification-queue-size="2048" />
     </file-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "passivation": true,
      "file-store" : {
        "shared": false,
        "data": {
          "path": "data"
        },
        "index": {
          "path": "index"
        },
        "write-behind": {
          "modification-queue-size": "2048"
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    passivation: "true"
    fileStore:
      shared: "false"
      data:
        path: "data"
      index:
        path: "index"
      writeBehind:
        modificationQueueSize: "2048"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().passivation(true)
       .addSoftIndexFileStore()
          .shared(false)
          .dataLocation("data")
          .indexLocation("index")
          .modificationQueueSize(2048);

6.8.2. Configuring single file cache stores

If required, you can configure Data Grid to create single file stores.

Important

Single file stores are deprecated. You should use soft-index file stores for better performance and data consistency in comparison with single file stores.

Prerequisites

  • Enable global state and configure a global persistent location if you are configuring embedded caches.

Procedure

  1. Add the persistence element to your cache configuration.
  2. Optionally specify true as the value for the passivation attribute to write to the file-based cache store only when data is evicted from memory.
  3. Include the single-file-store element.
  4. Specify false as the value for the shared attribute.
  5. Configure any other attributes as appropriate.
  6. Include the write-behind element to configure the cache store as write behind instead of as write through.

Single file cache store configuration

XML

<distributed-cache>
  <persistence passivation="true">
    <single-file-store shared="false"
                       preload="true"/>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence" : {
      "passivation" : true,
      "single-file-store" : {
        "shared" : false,
        "preload" : true
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    passivation: "true"
    singleFileStore:
      shared: "false"
      preload: "true"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().passivation(true)
       .addStore(SingleFileStoreConfigurationBuilder.class)
          .shared(false)
          .preload(true);

6.9. JDBC connection factories

Data Grid provides different ConnectionFactory implementations that allow you to connect to databases. You use JDBC connections with SQL cache stores and JDBC string-based cache stores.

Connection pools

Connection pools are suitable for standalone Data Grid deployments and are based on Agroal.

XML

<distributed-cache>
  <persistence>
     <connection-pool connection-url="jdbc:h2:mem:infinispan;DB_CLOSE_DELAY=-1"
                      username="sa"
                      password="changeme"
                      driver="org.h2.Driver"/>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "connection-pool": {
        "connection-url": "jdbc:h2:mem:infinispan_string_based",
        "driver": "org.h2.Driver",
        "username": "sa",
        "password": "changeme"
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    connectionPool:
      connectionUrl: "jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1"
      driver: org.h2.Driver
      username: sa
      password: changeme

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .connectionPool()
         .connectionUrl("jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1")
         .username("sa")
         .password("changeme")
         .driverClass("org.h2.Driver");

Managed datasources

Datasource connections are suitable for managed environments such as application servers.

XML

<distributed-cache>
  <persistence>
    <data-source jndi-url="java:/StringStoreWithManagedConnectionTest/DS" />
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "data-source": {
        "jndi-url": "java:/StringStoreWithManagedConnectionTest/DS"
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    dataSource:
      jndiUrl: "java:/StringStoreWithManagedConnectionTest/DS"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .dataSource()
         .jndiUrl("java:/StringStoreWithManagedConnectionTest/DS");

Simple connections

Simple connection factories create database connections on a per invocation basis and are intended for use with test or development environments only.

XML

<distributed-cache>
  <persistence>
    <simple-connection connection-url="jdbc:h2://localhost"
                       username="sa"
                       password="changeme"
                       driver="org.h2.Driver"/>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "simple-connection": {
        "connection-url": "jdbc:h2://localhost",
        "driver": "org.h2.Driver",
        "username": "sa",
        "password": "changeme"
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    simpleConnection:
      connectionUrl: "jdbc:h2://localhost"
      driver: org.h2.Driver
      username: sa
      password: changeme

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .simpleConnection()
         .connectionUrl("jdbc:h2://localhost")
         .driverClass("org.h2.Driver")
         .username("admin")
         .password("changeme");

6.9.1. Configuring managed datasources

Create managed datasources as part of your Data Grid Server configuration to optimize connection pooling and performance for JDBC database connections. You can then specify the JNDI name of the managed datasources in your caches, which centralizes JDBC connection configuration for your deployment.

Prerequisites

  • Copy database drivers to the server/lib directory in your Data Grid Server installation.

    Tip

    Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example:

    install org.postgresql:postgresql:42.4.3

Procedure

  1. Open your Data Grid Server configuration for editing.
  2. Add a new data-source to the data-sources section.
  3. Uniquely identify the datasource with the name attribute or field.
  4. Specify a JNDI name for the datasource with the jndi-name attribute or field.

    Tip

    You use the JNDI name to specify the datasource in your JDBC cache store configuration.

  5. Set true as the value of the statistics attribute or field to enable statistics for the datasource through the /metrics endpoint.
  6. Provide JDBC driver details that define how to connect to the datasource in the connection-factory section.

    1. Specify the name of the database driver with the driver attribute or field.
    2. Specify the JDBC connection url with the url attribute or field.
    3. Specify credentials with the username and password attributes or fields.
    4. Provide any other configuration as appropriate.
  7. Define how Data Grid Server nodes pool and reuse connections with connection pool tuning properties in the connection-pool section.
  8. Save the changes to your configuration.

Verification

Use the Data Grid Command Line Interface (CLI) to test the datasource connection, as follows:

  1. Start a CLI session.

    bin/cli.sh
  2. List all datasources and confirm the one you created is available.

    server datasource ls
  3. Test a datasource connection.

    server datasource test my-datasource

Managed datasource configuration

XML

<server xmlns="urn:infinispan:server:15.0">
  <data-sources>
     <!-- Defines a unique name for the datasource and JNDI name that you
          reference in JDBC cache store configuration.
          Enables statistics for the datasource, if required. -->
     <data-source name="ds"
                  jndi-name="jdbc/postgres"
                  statistics="true">
        <!-- Specifies the JDBC driver that creates connections. -->
        <connection-factory driver="org.postgresql.Driver"
                            url="jdbc:postgresql://localhost:5432/postgres"
                            username="postgres"
                            password="changeme">
           <!-- Sets optional JDBC driver-specific connection properties. -->
           <connection-property name="name">value</connection-property>
        </connection-factory>
        <!-- Defines connection pool tuning properties. -->
        <connection-pool initial-size="1"
                         max-size="10"
                         min-size="3"
                         background-validation="1000"
                         idle-removal="1"
                         blocking-timeout="1000"
                         leak-detection="10000"/>
     </data-source>
  </data-sources>
</server>

JSON

{
  "server": {
    "data-sources": [{
      "name": "ds",
      "jndi-name": "jdbc/postgres",
      "statistics": true,
      "connection-factory": {
        "driver": "org.postgresql.Driver",
        "url": "jdbc:postgresql://localhost:5432/postgres",
        "username": "postgres",
        "password": "changeme",
        "connection-properties": {
          "name": "value"
        }
      },
      "connection-pool": {
        "initial-size": 1,
        "max-size": 10,
        "min-size": 3,
        "background-validation": 1000,
        "idle-removal": 1,
        "blocking-timeout": 1000,
        "leak-detection": 10000
      }
    }]
  }
}

YAML

server:
  dataSources:
    - name: ds
      jndiName: 'jdbc/postgres'
      statistics: true
      connectionFactory:
        driver: "org.postgresql.Driver"
        url: "jdbc:postgresql://localhost:5432/postgres"
        username: "postgres"
        password: "changeme"
        connectionProperties:
          name: value
      connectionPool:
        initialSize: 1
        maxSize: 10
        minSize: 3
        backgroundValidation: 1000
        idleRemoval: 1
        blockingTimeout: 1000
        leakDetection: 10000

6.9.1.1. Configuring caches with JNDI names

When you add a managed datasource to Data Grid Server you can add the JNDI name to a JDBC-based cache store configuration.

Prerequisites

  • Configure Data Grid Server with a managed datasource.

Procedure

  1. Open your cache configuration for editing.
  2. Add the data-source element or field to the JDBC-based cache store configuration.
  3. Specify the JNDI name of the managed datasource as the value of the jndi-url attribute.
  4. Configure the JDBC-based cache stores as appropriate.
  5. Save the changes to your configuration.

JNDI name in cache configuration

XML

<distributed-cache>
  <persistence>
    <jdbc:string-keyed-jdbc-store>
      <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. -->
      <jdbc:data-source jndi-url="jdbc/postgres"/>
      <jdbc:string-keyed-table drop-on-exit="true" create-on-start="true" prefix="TBL">
        <jdbc:id-column name="ID" type="VARCHAR(255)"/>
        <jdbc:data-column name="DATA" type="BYTEA"/>
        <jdbc:timestamp-column name="TS" type="BIGINT"/>
        <jdbc:segment-column name="S" type="INT"/>
      </jdbc:string-keyed-table>
    </jdbc:string-keyed-jdbc-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "string-keyed-jdbc-store": {
        "data-source": {
          "jndi-url": "jdbc/postgres"
          },
        "string-keyed-table": {
          "prefix": "TBL",
          "drop-on-exit": true,
          "create-on-start": true,
          "id-column": {
            "name": "ID",
            "type": "VARCHAR(255)"
          },
          "data-column": {
            "name": "DATA",
            "type": "BYTEA"
          },
          "timestamp-column": {
            "name": "TS",
            "type": "BIGINT"
          },
          "segment-column": {
            "name": "S",
            "type": "INT"
          }
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    stringKeyedJdbcStore:
      dataSource:
        jndi-url: "jdbc/postgres"
      stringKeyedTable:
        prefix: "TBL"
        dropOnExit: true
        createOnStart: true
        idColumn:
          name: "ID"
          type: "VARCHAR(255)"
        dataColumn:
          name: "DATA"
          type: "BYTEA"
        timestampColumn:
          name: "TS"
          type: "BIGINT"
        segmentColumn:
          name: "S"
          type: "INT"

6.9.1.2. Connection pool tuning properties

You can tune JDBC connection pools for managed datasources in your Data Grid Server configuration.

Property                   Description

initial-size               Initial number of connections the pool should hold.

max-size                   Maximum number of connections in the pool.

min-size                   Minimum number of connections the pool should hold.

blocking-timeout           Maximum time in milliseconds to block while waiting for a connection before throwing an exception. The timeout does not apply while a new connection is being created. Default is 0, meaning that a call waits indefinitely.

background-validation      Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled.

validate-on-acquisition    Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled.

idle-removal               Time in minutes a connection has to be idle before it can be removed.

leak-detection             Time in milliseconds a connection has to be held before a leak warning.

6.9.2. Configuring JDBC connection pools with Agroal properties

You can use a properties file to configure pooled connection factories for JDBC string-based cache stores.

Procedure

  1. Specify JDBC connection pool configuration with org.infinispan.agroal.* properties, as in the following example:

    org.infinispan.agroal.metricsEnabled=false
    
    org.infinispan.agroal.minSize=10
    org.infinispan.agroal.maxSize=100
    org.infinispan.agroal.initialSize=20
    org.infinispan.agroal.acquisitionTimeout_s=1
    org.infinispan.agroal.validationTimeout_m=1
    org.infinispan.agroal.leakTimeout_s=10
    org.infinispan.agroal.reapTimeout_m=10
    
    org.infinispan.agroal.autoCommit=true
    org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED
    org.infinispan.agroal.jdbcUrl=jdbc:h2:mem:PooledConnectionFactoryTest;DB_CLOSE_DELAY=-1
    org.infinispan.agroal.driverClassName=org.h2.Driver
    org.infinispan.agroal.principal=sa
    org.infinispan.agroal.credential=sa
  2. Configure Data Grid to use your properties file with the properties-file attribute or the PooledConnectionFactoryConfiguration.propertyFile() method.

    XML

    <connection-pool properties-file="path/to/agroal.properties"/>

    JSON

    "persistence": {
      "connection-pool": {
        "properties-file": "path/to/agroal.properties"
      }
    }

    YAML

    persistence:
      connectionPool:
        propertiesFile: path/to/agroal.properties

    ConfigurationBuilder

    .connectionPool().propertyFile("path/to/agroal.properties")

6.10. SQL cache stores

SQL cache stores let you load Data Grid caches from existing database tables. Data Grid offers two types of SQL cache store:

Table
Data Grid loads entries from a single database table.
Query
Data Grid uses SQL queries to load entries from single or multiple database tables, including from sub-columns within those tables, and perform insert, update, and delete operations.

Tip

Visit the code tutorials to try a SQL cache store in action. See the Persistence code tutorial with remote caches.

Both SQL table and query stores:

  • Allow read and write operations to persistent storage.
  • Can be read-only and act as a cache loader.
  • Support keys and values that correspond to a single database column or a composite of multiple database columns.

    For composite keys and values, you must provide Data Grid with Protobuf schema (.proto files) that describe the keys and values. With Data Grid Server you can add schema through the Data Grid Console or Command Line Interface (CLI) with the schema command.

Warning

The SQL cache store is intended for use with an existing database table. As a result, it does not store any metadata, including expiration, segment, and version metadata. Because no version metadata is stored, the SQL store does not support optimistic transactional caching and asynchronous cross-site replication. This limitation also extends to Hot Rod versioned operations.

Tip

Use expiration with the SQL cache store when it is configured as read only. Expiration removes stale values from memory, causing the cache to fetch the values from the database again and cache them anew.

6.10.1. Data types for keys and values

Data Grid loads keys and values from columns in database tables via SQL cache stores, automatically using the appropriate data types. The following CREATE statement adds a table named "books" that has two columns, isbn and title:

Database table with two columns

CREATE TABLE books (
    isbn NUMBER(13),
    title varchar(120),
    PRIMARY KEY(isbn)
);

When you use this table with a SQL cache store, Data Grid adds an entry to the cache using the isbn column as the key and the title column as the value.
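
A minimal sketch of a table store for the "books" table, assuming an H2 database; Data Grid detects column types and the primary key from the table metadata:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(TableJdbcStoreConfigurationBuilder.class)
       .dialect(DatabaseType.H2)
       .tableName("books");              // isbn becomes the key, title the value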

6.10.1.1. Composite keys and values

You can use SQL stores with database tables that contain composite primary keys or composite values.

To use composite keys or values, you must provide Data Grid with Protobuf schema that describe the data types. You must also add schema configuration to your SQL store and specify the message names for keys and values.

Tip

Data Grid recommends generating Protobuf schema with the ProtoStream processor. You can then upload your Protobuf schema for remote caches through the Data Grid Console, CLI, or REST API.

Composite values

The following database table holds a composite value of the title and author columns:

CREATE TABLE books (
    isbn NUMBER(13),
    title varchar(120),
    author varchar(80),
    PRIMARY KEY(isbn)
);

Data Grid adds an entry to the cache using the isbn column as the key. For the value, Data Grid requires a Protobuf schema that maps the title column and the author columns:

package library;

message books_value {
    optional string title = 1;
    optional string author = 2;
}

Composite keys and values

The following database table holds a composite primary key and a composite value, with two columns each:

CREATE TABLE books (
    isbn NUMBER(13),
    reprint INT,
    title varchar(120),
    author varchar(80),
    PRIMARY KEY(isbn, reprint)
);

For both the key and the value, Data Grid requires a Protobuf schema that maps the columns to keys and values:

package library;

message books_key {
    required string isbn = 1;
    required int32 reprint = 2;
}

message books_value {
    optional string title = 1;
    optional string author = 2;
}

6.10.1.2. Embedded keys

Protobuf schema can include keys within values, as in the following example:

Protobuf schema with an embedded key

package library;

message books_key {
    required string isbn = 1;
    required int32 reprint = 2;
}

message books_value {
    required string isbn = 1;
    required int32 reprint = 2;
    optional string title = 3;
    optional string author = 4;
}

To use embedded keys, you must include the embedded-key="true" attribute or embeddedKey(true) method in your SQL store configuration.
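
For example, a sketch of the corresponding schema configuration, using the builder methods shown later in this chapter:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(TableJdbcStoreConfigurationBuilder.class)
       .tableName("books")
       .schemaJdbcConfigurationBuilder()
          .packageName("library")
          .messageName("books_value")
          .keyMessageName("books_key")
          .embeddedKey(true);            // key fields are repeated inside books_value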

6.10.1.3. SQL types to Protobuf types

The following table contains default mappings of SQL data types to Protobuf data types:

SQL type                                       Protobuf type

int4                                           int32
int8                                           int64
float4                                         float
float8                                         double
numeric                                        double
bool                                           bool
char                                           string
varchar                                        string
text, tinytext, mediumtext, longtext           string
bytea, tinyblob, blob, mediumblob, longblob    bytes

6.10.2. Loading Data Grid caches from database tables

Add a SQL table cache store to your configuration if you want Data Grid to load data from a database table. When it connects to the database, Data Grid uses metadata from the table to detect column names and data types. Data Grid also automatically determines which columns in the database are part of the primary key.

Prerequisites

  • Have JDBC connection details.
    You can add JDBC connection factories directly to your cache configuration.
    For remote caches in production environments, you should add managed datasources to Data Grid Server configuration and specify the JNDI name in the cache configuration.
  • Generate Protobuf schema for any composite keys or composite values and register your schemas with Data Grid.

    Tip

    Data Grid recommends generating Protobuf schema with the ProtoStream processor. For remote caches, you can register your schemas by adding them through the Data Grid Console, CLI, or REST API.

Procedure

  1. Add database drivers to your Data Grid deployment.

    • Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation.

      Tip

      Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example:

      install org.postgresql:postgresql:42.4.3
    • Embedded caches: Add the infinispan-cachestore-sql dependency to your pom file.

      <dependency>
        <groupId>org.infinispan</groupId>
        <artifactId>infinispan-cachestore-sql</artifactId>
      </dependency>
  2. Open your Data Grid configuration for editing.
  3. Add a SQL table cache store.

    Declarative

    table-jdbc-store xmlns="urn:infinispan:config:store:sql:15.0"

    Programmatic

    persistence().addStore(TableJdbcStoreConfigurationBuilder.class)

  4. Specify the database dialect with either dialect="" or dialect(), for example dialect="H2" or dialect="postgres".
  5. Configure the SQL cache store with the properties you require, for example:

    • To use the same cache store across your cluster, set shared="true" or shared(true).
    • To create a read only cache store, set read-only="true" or .ignoreModifications(true).
  6. Name the database table that loads the cache with table-name="<database_table_name>" or table.name("<database_table_name>").
  7. Add the schema element or the .schemaJdbcConfigurationBuilder() method and add Protobuf schema configuration for composite keys or values.

    1. Specify the package name with the package attribute or package() method.
    2. Specify composite values with the message-name attribute or messageName() method.
    3. Specify composite keys with the key-message-name attribute or keyMessageName() method.
    4. Set a value of true for the embedded-key attribute or embeddedKey() method if your schema includes keys within values.
  8. Save the changes to your configuration.

SQL table store configuration

The following example loads a distributed cache from a database table named "books" using composite values defined in a Protobuf schema:

XML

<distributed-cache>
  <persistence>
    <table-jdbc-store xmlns="urn:infinispan:config:store:sql:15.0"
                      dialect="H2"
                      shared="true"
                      table-name="books">
      <schema message-name="books_value"
              package="library"/>
    </table-jdbc-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "table-jdbc-store": {
        "dialect": "H2",
        "shared": "true",
        "table-name": "books",
        "schema": {
          "message-name": "books_value",
          "package": "library"
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    tableJdbcStore:
      dialect: "H2"
      shared: "true"
      tableName: "books"
      schema:
        messageName: "books_value"
        package: "library"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(TableJdbcStoreConfigurationBuilder.class)
      .dialect(DatabaseType.H2)
      .shared("true")
      .tableName("books")
      .schemaJdbcConfigurationBuilder()
        .messageName("books_value")
        .packageName("library");

6.10.3. Using SQL queries to load data and perform operations

SQL query cache stores let you load caches from multiple database tables, including from sub-columns in database tables, and perform insert, update, and delete operations.

Prerequisites

  • Have JDBC connection details.
    You can add JDBC connection factories directly to your cache configuration.
    For remote caches in production environments, you should add managed datasources to Data Grid Server configuration and specify the JNDI name in the cache configuration.
  • Generate Protobuf schema for any composite keys or composite values and register your schemas with Data Grid.

    Tip

    Data Grid recommends generating Protobuf schema with the ProtoStream processor. For remote caches, you can register your schemas by adding them through the Data Grid Console, CLI, or REST API.

Procedure

  1. Add database drivers to your Data Grid deployment.

    • Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation.

      Tip

      Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example:

      install org.postgresql:postgresql:42.4.3
    • Embedded caches: Add the infinispan-cachestore-sql dependency to your pom file and make sure database drivers are on your application classpath.

      <dependency>
        <groupId>org.infinispan</groupId>
        <artifactId>infinispan-cachestore-sql</artifactId>
      </dependency>
  2. Open your Data Grid configuration for editing.
  3. Add a SQL query cache store.

    Declarative

    query-jdbc-store xmlns="urn:infinispan:config:store:sql:15.0"

    Programmatic

    persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class)

  4. Specify the database dialect with either dialect="" or dialect(), for example dialect="H2" or dialect="postgres".
  5. Configure the SQL cache store with the properties you require, for example:

    • To use the same cache store across your cluster, set shared="true" or shared(true).
    • To create a read only cache store, set read-only="true" or .ignoreModifications(true).
  6. Define SQL query statements that load caches with data and modify database tables with the queries element or the queries() method.

    Query statement    Description

    SELECT             Loads a single entry into caches. You can use wildcards but must specify parameters for keys. You can use labelled expressions.

    SELECT ALL         Loads multiple entries into caches. You can use the * wildcard if the number of columns returned matches the key and value columns. You can use labelled expressions.

    SIZE               Counts the number of entries in the cache.

    DELETE             Deletes a single entry from the cache.

    DELETE ALL         Deletes all entries from the cache.

    UPSERT             Modifies entries in the cache.

    Note

    DELETE, DELETE ALL, and UPSERT statements do not apply to read only cache stores but are required if cache stores allow modifications.

    Parameters in DELETE statements must match parameters in SELECT statements exactly.

    Variables in UPSERT statements must have the same number of uniquely named variables that SELECT and SELECT ALL statements return. For example, if SELECT returns foo and bar this statement must take only :foo and :bar as variables. However you can apply the same named variable more than once in a statement.

    SQL queries can include JOIN, ON, and any other clauses that the database supports.

  7. Add the schema element or the .schemaJdbcConfigurationBuilder() method and add Protobuf schema configuration for composite keys or values.

    1. Specify the package name with the package attribute or package() method.
    2. Specify composite values with the message-name attribute or messageName() method.
    3. Specify composite keys with the key-message-name attribute or keyMessageName() method.
    4. Set a value of true for the embedded-key attribute or embeddedKey() method if your schema includes keys within values.
  8. Save the changes to your configuration.

6.10.3.1. SQL query store configuration

This section provides an example configuration for a SQL query cache store that loads a distributed cache with data from two database tables: "person" and "address".

SQL statements

The following examples show SQL data definition language (DDL) statements for the "person" and "address" tables. The data types described in the example are only valid for PostgreSQL databases.

SQL statement for the "person" table

CREATE TABLE Person (
  name VARCHAR(255) NOT NULL,
  picture BYTEA,
  sex VARCHAR(255),
  birthdate TIMESTAMP,
  accepted_tos BOOLEAN,
  notused VARCHAR(255),
  PRIMARY KEY (name)
);

SQL statement for the "address" table

CREATE TABLE Address (
  name VARCHAR(255) NOT NULL,
  street VARCHAR(255),
  city VARCHAR(255),
  zip INT,
  PRIMARY KEY (name)
);

Protobuf schemas

Protobuf schema for the "person" and "address" tables are as follows:

Protobuf schema for the "address" table

package com.example;

message Address {
   optional string street = 1;
   optional string city = 2 [default = "San Jose"];
   optional int32 zip = 3 [default = 0];
}

Protobuf schema for the "person" table

package com.example;

import "/path/to/address.proto";

enum Sex {
   FEMALE = 1;
   MALE = 2;
}

message Person {
   optional string name = 1;
   optional Address address = 2;
   optional bytes picture = 3;
   optional Sex sex = 4;
   optional fixed64 birthDate = 5 [default = 0];
   optional bool accepted_tos = 6 [default = false];
}

Cache configuration

The following example loads a distributed cache from the "person" and "address" tables using a SQL query that includes a JOIN clause:

XML

<distributed-cache>
  <persistence>
    <query-jdbc-store xmlns="urn:infinispan:config:store:sql:15.0"
                      dialect="POSTGRES"
                      shared="true" key-columns="name">
          <connection-pool driver="org.postgresql.Driver"
                            connection-url="jdbc:postgresql://localhost:5432/postgres"
                            username="postgres"
                            password="changeme"/>
      <queries select-single="SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name"
        select-all="SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name"
        delete-single="DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name"
        delete-all="DELETE FROM Person; DELETE FROM Address"
        upsert="INSERT INTO Person (name,  picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)"
        size="SELECT COUNT(*) FROM Person"
      />
      <schema message-name="Person"
              package="com.example"
              embedded-key="true"/>
    </query-jdbc-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "query-jdbc-store": {
        "dialect": "POSTGRES",
        "shared": "true",
        "key-columns": "name",
        "connection-pool": {
          "username": "postgres",
          "password": "changeme",
          "driver": "org.postgresql.Driver",
          "connection-url": "jdbc:postgresql://localhost:5432/postgres"
        },
        "queries": {
          "select-single": "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name",
          "select-all": "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name",
          "delete-single": "DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name",
          "delete-all": "DELETE FROM Person; DELETE FROM Address",
          "upsert": "INSERT INTO Person (name,  picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)",
          "size": "SELECT COUNT(*) FROM Person"
        },
        "schema": {
          "message-name": "Person",
          "package": "com.example",
          "embedded-key": "true"
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    queryJdbcStore:
      dialect: "POSTGRES"
      shared: "true"
      keyColumns: "name"
      connectionPool:
        username: "postgres"
        password: "changeme"
        driver: "org.postgresql.Driver"
        connectionUrl: "jdbc:postgresql://localhost:5432/postgres"
      queries:
        selectSingle: "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name"
        selectAll: "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name"
        deleteSingle: "DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name"
        deleteAll: "DELETE FROM Person; DELETE FROM Address"
        upsert: "INSERT INTO Person (name,  picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)"
        size: "SELECT COUNT(*) FROM Person"
      schema:
        messageName: "Person"
        package: "com.example"
        embeddedKey: "true"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class)
      .dialect(DatabaseType.POSTGRES)
      .shared("true")
      .keyColumns("name")
      .queriesJdbcConfigurationBuilder()
         .select("SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name")
         .selectAll("SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name")
         .delete("DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name")
         .deleteAll("DELETE FROM Person; DELETE FROM Address")
         .upsert("INSERT INTO Person (name,  picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)")
         .size("SELECT COUNT(*) FROM Person")
      .schemaJdbcConfigurationBuilder()
         .messageName("Person")
         .packageName("com.example")
         .embeddedKey(true);

6.10.4. SQL cache store troubleshooting

Find out about common issues and errors with SQL cache stores and how to troubleshoot them.

ISPN008064: No primary keys found for table <table_name>, check case sensitivity

Data Grid logs this message in the following cases:

  • The database table does not exist.
  • The database table name is case sensitive and needs to be either all lower case or all upper case, depending on the database provider.
  • The database table does not have any primary keys defined.

To resolve this issue you should:

  1. Check your SQL cache store configuration and ensure that you specify the name of an existing table.
  2. Ensure that the database table name conforms to any case sensitivity requirements of your database provider.
  3. Ensure that your database tables have primary keys that uniquely identify the appropriate rows.

6.11. JDBC string-based cache stores

JDBC String-Based cache stores, JdbcStringBasedStore, use JDBC drivers to load and store values in the underlying database.

JDBC String-Based cache stores:

  • Store each entry in its own row in the table to increase throughput for concurrent loads.
  • Use a simple one-to-one mapping that maps each key to a String object using the key-to-string-mapper interface.
    Data Grid provides a default implementation, DefaultTwoWayKey2StringMapper, that handles primitive types. For other key types, you can supply your own implementation, as in the sketch after this list.
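
The following is a minimal sketch of a custom two-way mapper for a hypothetical Isbn key type. Only the TwoWayKey2StringMapper interface comes from Data Grid; the Isbn record and the class name are illustrative.

import org.infinispan.persistence.keymappers.TwoWayKey2StringMapper;

public class IsbnKey2StringMapper implements TwoWayKey2StringMapper {

   // Minimal key type for the example; not part of Data Grid.
   public record Isbn(String value) { }

   @Override
   public boolean isSupportedType(Class<?> keyType) {
      return keyType == Isbn.class;
   }

   @Override
   public String getStringMapping(Object key) {
      // Use the ISBN text itself as the database key.
      return ((Isbn) key).value();
   }

   @Override
   public Object getKeyMapping(String stringKey) {
      // Rebuild the key object when loading entries from the store.
      return new Isbn(stringKey);
   }
}

You then reference the mapper class with the key-to-string-mapper attribute in the store configuration.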

In addition to the data table used to store cache entries, the store also creates a _META table for storing metadata. This table is used to ensure that any existing database content is compatible with the current Data Grid version and configuration.

Note

By default, Data Grid stores are not shared, which means that all nodes in the cluster write to the underlying store on each update. If you want operations to write to the underlying database once only, you must configure the JDBC store as shared.

Segmentation

JdbcStringBasedStore uses segmentation by default and requires a column in the database table to represent the segments to which entries belong.

6.11.1. Configuring JDBC string-based cache stores

Configure Data Grid caches with JDBC string-based cache stores that can connect to databases.

Prerequisites

  • Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation.
  • Embedded caches: Add the infinispan-cachestore-jdbc dependency to your pom.xml file.

    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-cachestore-jdbc</artifactId>
    </dependency>

Procedure

  1. Create a JDBC string-based cache store configuration in one of the following ways:

    • Declaratively, add the persistence element or field then add string-keyed-jdbc-store with the following schema namespace:

      xmlns="urn:infinispan:config:store:jdbc:15.0"
    • Programmatically, add the following methods to your ConfigurationBuilder:

      persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class)
  2. Specify the dialect of the database with either the dialect attribute or the dialect() method.
  3. Configure any properties for the JDBC string-based cache store as appropriate.

    For example, specify if the cache store is shared with multiple cache instances with either the shared attribute or the shared() method.

  4. Add a JDBC connection factory so that Data Grid can connect to the database.
  5. Add a database table that stores cache entries.
Important

Configuring the string-keyed-jdbc-store with an inappropriate data type can lead to exceptions during loading or storing cache entries. For more information and a list of data types that are tested as part of the Data Grid release, see Tested database settings for Data Grid string-keyed-jdbc-store persistence (Login required).

JDBC string-based cache store configuration

XML

<distributed-cache>
  <persistence>
    <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:15.0"
                             dialect="H2">
      <connection-pool connection-url="jdbc:h2:mem:infinispan"
                       username="sa"
                       password="changeme"
                       driver="org.h2.Driver"/>
      <string-keyed-table create-on-start="true"
                          prefix="ISPN_STRING_TABLE">
        <id-column name="ID_COLUMN"
                   type="VARCHAR(255)" />
        <data-column name="DATA_COLUMN"
                     type="BINARY" />
        <timestamp-column name="TIMESTAMP_COLUMN"
                          type="BIGINT" />
        <segment-column name="SEGMENT_COLUMN"
                        type="INT"/>
      </string-keyed-table>
    </string-keyed-jdbc-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "string-keyed-jdbc-store": {
        "dialect": "H2",
        "string-keyed-table": {
          "prefix": "ISPN_STRING_TABLE",
          "create-on-start": true,
          "id-column": {
            "name": "ID_COLUMN",
            "type": "VARCHAR(255)"
          },
          "data-column": {
            "name": "DATA_COLUMN",
            "type": "BINARY"
          },
          "timestamp-column": {
            "name": "TIMESTAMP_COLUMN",
            "type": "BIGINT"
          },
          "segment-column": {
            "name": "SEGMENT_COLUMN",
            "type": "INT"
          }
        },
        "connection-pool": {
          "connection-url": "jdbc:h2:mem:infinispan",
          "driver": "org.h2.Driver",
          "username": "sa",
          "password": "changeme"
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    stringKeyedJdbcStore:
      dialect: "H2"
      stringKeyedTable:
        prefix: "ISPN_STRING_TABLE"
        createOnStart: true
        idColumn:
          name: "ID_COLUMN"
          type: "VARCHAR(255)"
        dataColumn:
          name: "DATA_COLUMN"
          type: "BINARY"
        timestampColumn:
          name: "TIMESTAMP_COLUMN"
          type: "BIGINT"
        segmentColumn:
          name: "SEGMENT_COLUMN"
          type: "INT"
      connectionPool:
        connectionUrl: "jdbc:h2:mem:infinispan"
        driver: "org.h2.Driver"
        username: "sa"
        password: "changeme"

ConfigurationBuilder

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class)
      .dialect(DatabaseType.H2)
      .table()
         .dropOnExit(true)
         .createOnStart(true)
         .tableNamePrefix("ISPN_STRING_TABLE")
         .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)")
         .dataColumnName("DATA_COLUMN").dataColumnType("BINARY")
         .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT")
         .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INT")
      .connectionPool()
         .connectionUrl("jdbc:h2:mem:infinispan")
         .username("sa")
         .password("changeme")
         .driverClass("org.h2.Driver");

6.12. RocksDB cache stores

RocksDB provides key-value filesystem-based storage with high performance and reliability for highly concurrent environments.

RocksDB cache stores, RocksDBStore, use two databases. One database provides a primary cache store for data in memory; the other database holds entries that Data Grid expires from memory.

Table 6.1. Configuration parameters
Parameter | Description

location

Specifies the path to the RocksDB database that provides the primary cache store. If you do not set the location, it is automatically created. Note that the path must be relative to the global persistent location.

expiredLocation

Specifies the path to the RocksDB database that provides the cache store for expired data. If you do not set the location, it is automatically created. Note that the path must be relative to the global persistent location.

expiryQueueSize

Sets the size of the in-memory queue for expiring entries. When the queue reaches the size, Data Grid flushes the expired entries into the RocksDB cache store.

clearThreshold

Sets the maximum number of entries before deleting and re-initializing (re-init) the RocksDB database. For smaller cache stores, iterating through all entries and removing each one individually can be faster.

Tuning parameters

You can also specify the following RocksDB tuning parameters:

  • compressionType
  • blockSize
  • cacheSize

Configuration properties

Optionally set properties in the configuration as follows:

  • Prefix properties with database to adjust and tune RocksDB databases.
  • Prefix properties with data to configure the column families in which RocksDB stores your data.
<property name="database.max_background_compactions">2</property>
<property name="data.write_buffer_size">64MB</property>
<property name="data.compression_per_level">kNoCompression:kNoCompression:kNoCompression:kSnappyCompression:kZSTD:kZSTD</property>

Segmentation

RocksDBStore supports segmentation and creates a separate column family per segment. Segmented RocksDB cache stores improve lookup performance and iteration but slightly lower performance of write operations.

Note

You should not configure more than a few hundred segments. RocksDB is not designed to have an unlimited number of column families. Too many segments also significantly increases cache store start time.

RocksDB cache store configuration

XML

<local-cache>
   <persistence>
      <rocksdb-store xmlns="urn:infinispan:config:store:rocksdb:15.0"
                     path="rocksdb/data">
         <expiration path="rocksdb/expired"/>
      </rocksdb-store>
   </persistence>
</local-cache>

JSON

{
  "local-cache": {
    "persistence": {
      "rocksdb-store": {
        "path": "rocksdb/data",
        "expiration": {
          "path": "rocksdb/expired"
        }
      }
    }
  }
}

YAML

localCache:
  persistence:
    rocksdbStore:
      path: "rocksdb/data"
      expiration:
        path: "rocksdb/expired"

ConfigurationBuilder

Configuration cacheConfig = new ConfigurationBuilder().persistence()
				.addStore(RocksDBStoreConfigurationBuilder.class)
				.build();
EmbeddedCacheManager cacheManager = new DefaultCacheManager(cacheConfig);

Cache<String, User> usersCache = cacheManager.getCache("usersCache");
usersCache.put("raytsang", new User(...));

ConfigurationBuilder with properties

Properties props = new Properties();
props.put("database.max_background_compactions", "2");
props.put("data.write_buffer_size", "512MB");

Configuration cacheConfig = new ConfigurationBuilder().persistence()
				.addStore(RocksDBStoreConfigurationBuilder.class)
				.location("rocksdb/data")
				.expiredLocation("rocksdb/expired")
				.properties(props)
				.build();

6.13. Remote cache stores

Remote cache stores, RemoteStore, use the Hot Rod protocol to store data on Data Grid clusters.

Note

If you configure remote cache stores as shared, you cannot preload data. In other words, if shared="true" in your configuration, then you must set preload="false".

Shared remote cache containers

Each remote cache store creates a dedicated remote cache manager. If multiple remote stores connect to the same server, this wastes resources. Use the remote-cache-containers configuration to create shared remote cache managers and reference them by name within your remote-store definitions.

Tip

If there is only a single remote-cache-container, all remote cache stores that do not specify one explicitly use it by default.

Segmentation

RemoteStore supports segmentation and can publish keys and entries by segment, which makes bulk operations more efficient. However, segmentation is available only with Data Grid Hot Rod protocol version 2.3 or later.

Warning

When you enable segmentation for RemoteStore, it uses the number of segments that you define in your Data Grid server configuration.

If the source cache is segmented and uses a different number of segments than RemoteStore, then incorrect values are returned for bulk operations. In this case, you should disable segmentation for RemoteStore.

Remote cache store configuration with shared remote containers

XML

<infinispan>

    <remote-cache-containers>
        <remote-cache-container uri="hotrod://one,two:12111?max-active=10&amp;exhausted-action=CREATE_NEW"/>
    </remote-cache-containers>

    <cache-container>
        <distributed-cache>
            <persistence>
                <remote-store xmlns="urn:infinispan:config:store:remote:15.0"
                              cache="mycache"
                              raw-values="true"
                />
            </persistence>
        </distributed-cache>
    </cache-container>
</infinispan>

JSON

{
  "infinispan": {
    "remote-cache-containers": [
      {
        "uri": "hotrod://one,two:12111?max-active=10&exhausted-action=CREATE_NEW"
      }
    ],
    "cache-container": {
      "caches": {
        "mycache": {
          "distributed-cache": {
            "remote-store": {
              "cache": "mycache",
              "raw-values": "true"
            }
          }
        }
      }
    }
  }
}

YAML

infinispan:
  remoteCacheContainers:
    - uri: "hotrod://one,two:12111?max-active=10&exhausted-action=CREATE_NEW"
  cacheContainer:
    caches:
      mycache:
        distributedCache:
          persistence:
            remoteStore:
              cache: "mycache"
              rawValues: "true"

ConfigurationBuilder

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence().addStore(RemoteStoreConfigurationBuilder.class)
      .ignoreModifications(false)
      .purgeOnStartup(false)
      .remoteCacheName("mycache")
      .rawValues(true)
      .addServer()
      .host("one").port(12111)
      .addServer()
      .host("two")
      .connectionPool()
      .maxActive(10)
      .exhaustedAction(ExhaustedAction.CREATE_NEW)
      .async().enable();

Remote cache store configuration with private remote container

XML

<distributed-cache>
  <persistence>
    <remote-store xmlns="urn:infinispan:config:store:remote:15.0"
                  cache="mycache"
                  raw-values="true">
      <remote-server host="one"
                     port="12111" />
      <remote-server host="two" />
      <connection-pool max-active="10"
                       exhausted-action="CREATE_NEW" />
    </remote-store>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence": {
      "remote-store": {
        "cache": "mycache",
        "raw-values": "true",
        "remote-server": [
          {
            "host": "one",
            "port": "12111"
          },
          {
            "host": "two"
          }
        ],
        "connection-pool": {
          "max-active": "10",
          "exhausted-action": "CREATE_NEW"
        }
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    remoteStore:
      cache: "mycache"
      rawValues: "true"
      remoteServer:
        - host: "one"
          port: "12111"
        - host: "two"
      connectionPool:
        maxActive: "10"
        exhaustedAction: "CREATE_NEW"

ConfigurationBuilder

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence().addStore(RemoteStoreConfigurationBuilder.class)
      .ignoreModifications(false)
      .purgeOnStartup(false)
      .remoteCacheName("mycache")
      .rawValues(true)
      .addServer()
      .host("one").port(12111)
      .addServer()
      .host("two")
      .connectionPool()
      .maxActive(10)
      .exhaustedAction(ExhaustedAction.CREATE_NEW)
      .async().enable();

6.14. Cluster cache loaders

ClusterCacheLoader retrieves data from other Data Grid cluster members but does not persist data. In other words, ClusterCacheLoader is not a cache store.

Warning

ClusterLoader is deprecated and planned for removal in a future version.

ClusterCacheLoader provides a non-blocking partial alternative to state transfer. ClusterCacheLoader fetches keys from other nodes on demand if those keys are not available on the local node, which is similar to lazily loading cache content.

The following points also apply to ClusterCacheLoader:

  • Preloading (preload=true) does not take effect.
  • Segmentation is not supported.

Cluster cache loader configuration

XML

<distributed-cache>
  <persistence>
    <cluster-loader preload="true" remote-timeout="500"/>
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence" : {
      "cluster-loader" : {
        "preload" : true,
        "remote-timeout" : "500"
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    clusterLoader:
      preload: "true"
      remoteTimeout: "500"

ConfigurationBuilder

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addClusterLoader()
    .remoteCallTimeout(500);

6.15. Creating custom cache store implementations

You can create custom cache stores through the Data Grid persistent SPI.

6.15.1. Data Grid Persistence SPI

The Data Grid Service Provider Interface (SPI) enables read and write operations to external storage through the NonBlockingStore interface and has the following features:

Simplified transaction integration
Data Grid automatically handles locking so your implementations do not need to coordinate concurrent access to persistent stores. Depending on the locking mode you use, concurrent writes to the same key generally do not occur. However, you should expect operations on the persistent storage to originate from multiple threads and create implementations to tolerate this behavior.
Parallel iteration
Data Grid lets you iterate over entries in persistent stores with multiple threads in parallel.
Reduced serialization resulting in less CPU usage
Data Grid exposes stored entries in a serialized format that can be transmitted remotely. For this reason, Data Grid does not need to deserialize entries that it retrieves from persistent storage and then serialize again when writing to the wire.

6.15.2. Creating cache stores

Create custom cache stores with implementations of the NonBlockingStore API.

Procedure

  1. Implement the appropriate Data Grid persistent SPIs. A minimal sketch follows this procedure.
  2. Annotate your store class with the @ConfiguredBy annotation if it has a custom configuration.
  3. Create a custom cache store configuration and builder if desired.

    1. Extend AbstractStoreConfiguration and AbstractStoreConfigurationBuilder.
    2. Optionally add the following annotations to your store Configuration class to ensure that your custom configuration builder parses your cache store configuration from XML:

      • @ConfigurationFor
      • @BuiltBy

        If you do not add these annotations, then CustomStoreConfigurationBuilder parses the common store attributes defined in AbstractStoreConfiguration and any additional elements are ignored.

        Note

        If a configuration does not declare the @ConfigurationFor annotation, a warning message is logged when Data Grid initializes the cache.
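
The following is a minimal sketch of a NonBlockingStore implementation, in the spirit of the MyInMemoryStore example in the next section. It is illustrative only: it keeps entries in a ConcurrentHashMap, whereas a real implementation would interact with external storage, and it implements the six required SPI methods; consult the NonBlockingStore Javadoc for the version you use.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.infinispan.persistence.spi.InitializationContext;
import org.infinispan.persistence.spi.MarshallableEntry;
import org.infinispan.persistence.spi.NonBlockingStore;

public class MyInMemoryStore<K, V> implements NonBlockingStore<K, V> {

   private final ConcurrentMap<Object, MarshallableEntry<K, V>> data = new ConcurrentHashMap<>();

   @Override
   public CompletionStage<Void> start(InitializationContext ctx) {
      // Open connections or files here; this sketch has nothing to initialize.
      return CompletableFuture.completedFuture(null);
   }

   @Override
   public CompletionStage<Void> stop() {
      data.clear();
      return CompletableFuture.completedFuture(null);
   }

   @Override
   public CompletionStage<MarshallableEntry<K, V>> load(int segment, Object key) {
      // Complete with null when the key is absent.
      return CompletableFuture.completedFuture(data.get(key));
   }

   @Override
   @SuppressWarnings("unchecked")
   public CompletionStage<Void> write(int segment, MarshallableEntry<? extends K, ? extends V> entry) {
      data.put(entry.getKey(), (MarshallableEntry<K, V>) entry);
      return CompletableFuture.completedFuture(null);
   }

   @Override
   public CompletionStage<Boolean> delete(int segment, Object key) {
      // Report whether an entry was actually removed.
      return CompletableFuture.completedFuture(data.remove(key) != null);
   }

   @Override
   public CompletionStage<Void> clear() {
      data.clear();
      return CompletableFuture.completedFuture(null);
   }
}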

6.15.3. Examples of custom cache store configuration

The following examples show how to configure Data Grid with custom cache store implementations:

XML

<distributed-cache>
  <persistence>
    <store class="org.infinispan.persistence.example.MyInMemoryStore" />
  </persistence>
</distributed-cache>

JSON

{
  "distributed-cache": {
    "persistence" : {
      "store" : {
        "class" : "org.infinispan.persistence.example.MyInMemoryStore"
      }
    }
  }
}

YAML

distributedCache:
  persistence:
    store:
      class: "org.infinispan.persistence.example.MyInMemoryStore"

ConfigurationBuilder

Configuration config = new ConfigurationBuilder()
            .persistence()
            .addStore(CustomStoreConfigurationBuilder.class)
            .build();

6.15.4. Deploying custom cache stores

To use your cache store implementation with Data Grid Server, you must provide it with a JAR file.

Prerequisites

  • Stop Data Grid Server if it is running.

    Data Grid loads JAR files at startup only.

Procedure

  1. Package your custom cache store implementation in a JAR file.
  2. Add your JAR file to the server/lib directory of your Data Grid Server installation.

6.16. Migrating data between cache stores

Data Grid provides a utility to migrate data from one cache store to another.

6.16.1. Cache store migrator

Data Grid provides the CLI migrate store command that recreates data for the latest Data Grid cache store implementations.

The store migrator takes a cache store from a previous version of Data Grid as source and uses a cache store implementation as target.

When you run the store migrator, it creates the target cache with the cache store type that you define using the EmbeddedCacheManager interface. The store migrator then loads entries from the source store into memory and then puts them into the target cache.

The store migrator also lets you migrate data from one type of cache store to another. For example, you can migrate from a JDBC string-based cache store to a SIFS cache store.

Important

The store migrator cannot migrate data from segmented cache stores to:

  • Non-segmented cache stores.
  • Segmented cache stores that have a different number of segments.
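
If you want to drive a migration from your own code instead of the CLI, the following is a minimal sketch. It assumes the org.infinispan.tools.store.migrator.StoreMigrator class from the infinispan-tools module and a migrator.properties file in the working directory; verify the exact API against the version you use.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

import org.infinispan.tools.store.migrator.StoreMigrator;

public class MigrationRunner {

   public static void main(String[] args) throws Exception {
      // Load the source.* and target.* properties described in the next section.
      Properties properties = new Properties();
      try (InputStream in = Files.newInputStream(Paths.get("migrator.properties"))) {
         properties.load(in);
      }
      // Read entries from the source store and write them to the target store.
      new StoreMigrator(properties).run();
   }
}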

6.16.2. Configuring the cache store migrator

Use the migrator.properties file to configure properties for source and target cache stores.

Procedure

  1. Create a migrator.properties file.
  2. Configure properties for source and target cache store using the migrator.properties file.

    1. Add the source. prefix to all configuration properties for the source cache store.

      Example source cache store

      source.type=SOFT_INDEX_FILE_STORE
      source.cache_name=myCache
      source.location=/path/to/source/sifs
      source.version=<version>

      Important

      For migrating data from segmented cache stores, you must also configure the number of segments using the source.segment_count property. The number of segments must match clustering.hash.numSegments in your Data Grid configuration. If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store.

    2. Add the target. prefix to all configuration properties for the target cache store.

      Example target cache store

      target.type=SINGLE_FILE_STORE
      target.cache_name=myCache
      target.location=/path/to/target/sfs.dat

6.16.2.1. Configuration properties for the cache store migrator

Configure source and target cache stores in a StoreMigrator properties file.

Table 6.2. Cache Store Type Property
Property | Description | Required/Optional

type

Specifies the type of cache store for a source or target cache store.

.type=JDBC_STRING

.type=JDBC_BINARY

.type=JDBC_MIXED

.type=LEVELDB

.type=ROCKSDB

.type=SINGLE_FILE_STORE

.type=SOFT_INDEX_FILE_STORE

Required

Table 6.3. Common Properties
Property | Description | Example Value | Required/Optional

cache_name

The name of the cache that you want to back up.

.cache_name=myCache

Required

segment_count

The number of segments for target cache stores that can use segmentation.

The number of segments must match clustering.hash.numSegments in the Data Grid configuration. If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store.

.segment_count=256

Optional

marshaller.class

Specifies a custom marshaller class.

Required if using custom marshallers.

marshaller.allow-list.classes

Specifies a comma-separated list of fully qualified class names that are allowed to be deserialized.

Optional

marshaller.allow-list.regexps

Specifies a comma-separated list of regular expressions that determine which classes are allowed to be deserialized.

Optional

marshaller.externalizers

Specifies a comma-separated list of custom AdvancedExternalizer implementations to load in this format: [id]:<Externalizer class>

Optional

Table 6.4. JDBC Properties
Property | Description | Required/Optional

dialect

Specifies the dialect of the underlying database.

Required

version

Specifies the marshaller version for source cache stores.
Set one of the following values:

  • 8 for Data Grid 7.2.x

  • 9 for Data Grid 7.3.x

  • 10 for Data Grid 8.0.x

  • 11 for Data Grid 8.1.x

  • 12 for Data Grid 8.2.x

  • 13 for Data Grid 8.3.x

Required for source stores only.

connection_pool.connection_url

Specifies the JDBC connection URL.

Required

connection_pool.driver_class

Specifies the class of the JDBC driver.

Required

connection_pool.username

Specifies a database username.

Required

connection_pool.password

Specifies a password for the database username.

Required

db.disable_upsert

Disables database upsert.

Optional

db.disable_indexing

Specifies if table indexes are created.

Optional

table.string.table_name_prefix

Specifies additional prefixes for the table name.

Optional

table.string.<id|data|timestamp>.name

Specifies the column name.

Required

table.string.<id|data|timestamp>.type

Specifies the column type.

Required

key_to_string_mapper

Specifies the TwoWayKey2StringMapper class.

Optional

Note

To migrate from Binary cache stores in older Data Grid versions, change table.string.* to table.binary.* in the following properties:

  • source.table.binary.table_name_prefix
  • source.table.binary.<id|data|timestamp>.name
  • source.table.binary.<id|data|timestamp>.type
# Example configuration for migrating to a JDBC String-Based cache store
target.type=JDBC_STRING
target.cache_name=myCache
target.dialect=POSTGRES
target.marshaller.class=org.infinispan.commons.marshall.JavaSerializationMarshaller
target.marshaller.allow-list.classes=org.example.Person,org.example.Animal
target.marshaller.allow-list.regexps="org.another.example.*"
target.marshaller.externalizers=25:Externalizer1,org.example.Externalizer2
target.connection_pool.connection_url=jdbc:postgresql:postgres
target.connection_pool.driver_class=org.postgresql.Driver
target.connection_pool.username=postgres
target.connection_pool.password=redhat
target.db.disable_upsert=false
target.db.disable_indexing=false
target.table.string.table_name_prefix=tablePrefix
target.table.string.id.name=id_column
target.table.string.data.name=datum_column
target.table.string.timestamp.name=timestamp_column
target.table.string.id.type=VARCHAR
target.table.string.data.type=bytea
target.table.string.timestamp.type=BIGINT
target.key_to_string_mapper=org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper
Table 6.5. RocksDB Properties
Property | Description | Required/Optional

location

Sets the database directory.

Required

compression

Specifies the compression type to use.

Optional

# Example configuration for migrating from a RocksDB cache store.
source.type=ROCKSDB
source.cache_name=myCache
source.location=/path/to/rocksdb/database
source.compression=SNAPPY
Table 6.6. SingleFileStore Properties
Property | Description | Required/Optional

location

Sets the directory that contains the cache store .dat file.

Required

# Example configuration for migrating to a Single File cache store.
target.type=SINGLE_FILE_STORE
target.cache_name=myCache
target.location=/path/to/sfs.dat
Table 6.7. SoftIndexFileStore Properties
Property | Description | Required/Optional

location

Sets the database directory.

Required

index_location

Sets the database index directory.

# Example configuration for migrating to a Soft-Index File cache store.
target.type=SOFT_INDEX_FILE_STORE
target.cache_name=myCache
target.location=path/to/sifs/database
target.index_location=path/to/sifs/index

6.16.3. Migrating Data Grid cache stores

Run the store migrator to migrate data from one cache store to another.

Prerequisites

  • Get the Data Grid CLI.
  • Create a migrator.properties file that configures the source and target cache stores.

Procedure

  • Run the migrate store -p /path/to/migrator.properties CLI command.