Chapter 26. Performing Rolling Upgrades
Upgrade Red Hat Data Grid without downtime or data loss. You can perform rolling upgrades in Remote Client/Server Mode to start using a more recent version of Red Hat Data Grid.
This section explains how to upgrade Red Hat Data Grid servers. For client upgrade procedures, see the appropriate documentation for your Hot Rod client.
At a high level, you do the following to perform rolling upgrades:
- Set up a target cluster. The target cluster is the Red Hat Data Grid version to which you want to migrate data. The source cluster is the Red Hat Data Grid deployment that is currently in use. After the target cluster is running, you configure all clients to point to it instead of the source cluster.
- Synchronize data from the source cluster to the target cluster.
26.1. Setting Up a Target Cluster
- Start the target cluster with unique network properties or a different JGroups cluster name to keep it separate from the source cluster.
- Configure a RemoteCacheStore on the target cluster for each cache you want to migrate from the source cluster.
  RemoteCacheStore settings:
  - remote-server must point to the source cluster via the outbound-socket-binding property.
  - remoteCacheName must match the cache name on the source cluster.
  - hotrod-wrapping must be true (enabled).
  - shared must be true (enabled).
  - purge must be false (disabled).
  - passivation must be false (disabled).
  - protocol-version must match the Hot Rod protocol version of the source cluster.

  Example RemoteCacheStore Configuration:

  <distributed-cache>
     <remote-store cache="MyCache" socket-timeout="60000"
                   tcp-no-delay="true" protocol-version="2.5"
                   shared="true" hotrod-wrapping="true"
                   purge="false" passivation="false">
        <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
     </remote-store>
  </distributed-cache>
  ...
  <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
     ...
     <outbound-socket-binding name="remote-store-hotrod-server">
        <remote-destination host="198.51.100.0" port="11222"/>
     </outbound-socket-binding>
     ...
  </socket-binding-group>
- Configure the target cluster to handle all client requests instead of the source cluster:
  - Configure all clients to point to the target cluster instead of the source cluster.
  - Restart each client node.
  The target cluster lazily loads data from the source cluster on demand via RemoteCacheStore.
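As an illustration of repointing clients, a Java Hot Rod client typically reads its server list from a hotrod-client.properties file on the classpath; updating the infinispan.client.hotrod.server_list property is one way to switch clients over. The host and port below are placeholders for your target cluster:

```properties
# Point the Hot Rod client at the target cluster instead of the source cluster.
# 198.51.100.10:11222 is a placeholder address for a target cluster node.
infinispan.client.hotrod.server_list=198.51.100.10:11222
```

After changing the property, restart the client node so the new server list takes effect.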
26.2. Synchronizing Data from the Source Cluster
Call the synchronizeData() method in the TargetMigrator interface. Do one of the following on the target cluster for each cache that you want to migrate:
- JMX: Invoke the synchronizeData operation and specify the hotrod parameter on the RollingUpgradeManager MBean.
- CLI:
  $ bin/cli.sh --connect controller=127.0.0.1:9990 -c "/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=MyCache:synchronize-data(migrator-name=hotrod)"
Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data.
Use the following parameters to tune the operation:
- read-batch configures the number of entries to read from the source cluster at a time. The default value is 10000.
- write-threads configures the number of threads used to write data. The default value is the number of available processors.

For example:
synchronize-data(migrator-name=hotrod, read-batch=100000, write-threads=3)
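The JMX route above can be sketched with the standard javax.management API. This is a sketch under stated assumptions: the service URL, port, and the RollingUpgradeManager object name are illustrative guesses, and the remote+http JMX protocol requires the server's client libraries (for example jboss-client.jar) on the classpath. Check your server's JMX endpoint and MBean tree for the actual values:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SynchronizeDataExample {
    public static void main(String[] args) throws Exception {
        // Assumed management endpoint of a target-cluster node; adjust host/port.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remote+http://127.0.0.1:9990");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            // Hypothetical object name; locate the actual RollingUpgradeManager
            // MBean for your cache in the server's JMX tree.
            ObjectName upgradeManager = new ObjectName(
                    "jboss.datagrid-infinispan:type=Cache,name=\"MyCache(dist_sync)\","
                    + "manager=\"clustered\",component=RollingUpgradeManager");
            // Invoke synchronizeData with the "hotrod" migrator parameter.
            Object migrated = server.invoke(upgradeManager, "synchronizeData",
                    new Object[] { "hotrod" },
                    new String[] { "java.lang.String" });
            System.out.println("Migration result: " + migrated);
        }
    }
}
```

The same pattern applies to the disconnectSource operation described later; only the operation name changes.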
- Disable the RemoteCacheStore on the target cluster. Do one of the following:
  - JMX: Invoke the disconnectSource operation and specify the hotrod parameter on the RollingUpgradeManager MBean.
  - CLI:
    $ bin/cli.sh --connect controller=127.0.0.1:9990 -c "/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=MyCache:disconnect-source(migrator-name=hotrod)"
- Decommission the source cluster.

Extending Red Hat Data Grid

Red Hat Data Grid can be extended so that end users can add configurations, operations, and components beyond those that Red Hat Data Grid normally provides.
26.3. Custom Commands
Red Hat Data Grid makes use of a command/visitor pattern to implement the various top-level methods you see on the public-facing API. This is explained in further detail in the Architectural Overview section. While the core commands - and their corresponding visitors - are hard-coded as a part of Red Hat Data Grid’s core module, module authors can extend and enhance Red Hat Data Grid by creating new custom commands.
As a module author (such as infinispan-query, etc.) you can define your own commands.
You do so by:
- Creating a META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions file and ensuring it is packaged in your jar.
- Implementing ModuleCommandFactory, ModuleCommandInitializer, and ModuleCommandExtensions.
- Specifying the fully-qualified class name of the ModuleCommandExtensions implementation in META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions.
- Implementing your custom commands and visitors for these commands.
26.3.1. An Example
Here is an example of a META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions file, configured accordingly:
org.infinispan.commands.module.ModuleCommandExtensions
org.infinispan.query.QueryModuleCommandExtensions
For a full, working example of a sample module that makes use of custom commands and visitors, check out the Red Hat Data Grid Sample Module.
26.3.2. Preassigned Custom Command Id Ranges
This is the list of Command identifiers that are used by Red Hat Data Grid based modules or frameworks. Red Hat Data Grid users should avoid using ids within these ranges. (Ranges are yet to be finalized!) Because the identifier is a single byte, ranges cannot be too large.
| Red Hat Data Grid Query | 100 - 119 |
| Hibernate Search        | 120 - 139 |
| Hot Rod Server          | 140 - 141 |
26.4. Extending the configuration builders and parsers
If your custom module requires configuration, you can enhance Red Hat Data Grid’s configuration builders and parsers. Look at the custom module tests for a detailed example of how to implement this.