Chapter 39. Externalize Sessions
39.1. Externalize Sessions
Red Hat JBoss Data Grid can be used as an external cache for containers, such as JBoss Enterprise Application Platform (EAP). This allows JBoss Data Grid to store HTTP Sessions, among other data, independent of the application layer, which provides the following benefits:
Application Elasticity
By making the application stateless, additional nodes may be added to the EAP cluster without expensive data rebalancing operations. The EAP cluster may also be replaced without downtime by keeping the state in the JBoss Data Grid layer, as upgraded nodes may be brought online and retrieve the sessions.
Failover Across Data Centers
Should a data center become unavailable, the session data persists, as it is stored safely within the JBoss Data Grid cluster. This allows a load balancer to redirect incoming requests to a second cluster to retrieve the session information.
Reduced Memory Footprint
Memory pressure is reduced, resulting in shorter garbage collection pauses and less frequent collections, as the HTTP Sessions have been moved out of the application layer and into the backing caches.
39.2. Externalize HTTP Session from JBoss EAP to JBoss Data Grid
The following procedure applies for both standalone and domain mode of EAP; however, in domain mode each server group requires a unique remote cache configured.
While multiple server groups can utilize the same Red Hat JBoss Data Grid cluster, the respective remote caches will be unique to the EAP server group.
The following procedures have been tested and validated on JBoss EAP 7.0 and JBoss Data Grid 7.0.
Externalize HTTP Sessions
Ensure the remote cache containers are defined in EAP’s infinispan subsystem; in the example below the cache attribute in the remote-store element defines the cache name on the remote JBoss Data Grid server:

<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
  [...]
  <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan" statistics-enabled="true">
    <transport lock-timeout="60000"/>
    <invalidation-cache name="jdg">
      <locking isolation="REPEATABLE_READ"/>
      <transaction mode="BATCH"/>
      <remote-store remote-servers="remote-jdg-server1 remote-jdg-server2" cache="default" socket-timeout="60000"
                    preload="true" passivation="false" purge="false" shared="true"/>
    </invalidation-cache>
  </cache-container>
</subsystem>
Define the location of the remote Red Hat JBoss Data Grid server by adding the networking information to the socket-binding-group:

<socket-binding-group ...>
  <outbound-socket-binding name="remote-jdg-server1">
    <remote-destination host="JDGHostName1" port="11222"/>
  </outbound-socket-binding>
  <outbound-socket-binding name="remote-jdg-server2">
    <remote-destination host="JDGHostName2" port="11222"/>
  </outbound-socket-binding>
</socket-binding-group>
Repeat the above steps for each cache-container and each Red Hat JBoss Data Grid server. Each server defined must have a separate <outbound-socket-binding> element.

Add passivation and cache information into the application’s jboss-web.xml. In the following example web is the name of the cache container, and jdg is the name of the invalidation cache defined in this container. An example file is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd"
           version="10.0">
  <distributable/>
  <replication-config>
    <replication-granularity>SESSION</replication-granularity>
    <cache-name>web.jdg</cache-name>
  </replication-config>
</jboss-web>
The passivation timeouts above are provided assuming that a typical session is abandoned within 15 minutes and that the default JBoss EAP HTTP session timeout of 30 minutes is used. These values may need to be adjusted based on each application’s workload.
39.3. Externalize HTTP Sessions from JBoss Web Server (JWS) to JBoss Data Grid
39.3.1. Externalize HTTP Session from JBoss Web Server (JWS) to JBoss Data Grid
A session manager has been provided as part of the JBoss Data Grid distribution, allowing JWS users to externalize sessions to JBoss Data Grid by integrating with an extension of Tomcat’s Manager. This allows the Tomcat layer to remain stateless while providing session management persistence.
39.3.2. Prerequisites
This manager requires the following versions, or later, to be installed:
- JBoss Web Server 3.0
- JBoss Data Grid 7.1
39.3.3. Installation
Complete the following steps to install this manager for either Tomcat 7 or Tomcat 8:
Download one of the following from the JBoss Data Grid product page:
- jboss-datagrid-7.2.0-tomcat7-session-client.zip
- jboss-datagrid-7.2.0-tomcat8-session-client.zip
- Extract the archive.
- Copy the lib/ directory from the extracted archive into $CATALINA_HOME.
- Define the implementation of the Session Manager in context.xml, as seen below:

<Manager className="org.wildfly.clustering.tomcat.hotrod.HotRodManager"
         server_list="www.server1.com:7600;www.server2.com:7600"/>
<!-- Additional configuration attributes may be added to the Manager element -->
39.3.4. Session Management Details
When using the HotRodManager all sessions are placed into the default cache located on the remote JBoss Data Grid server. Cache names are not configurable.
Sessions stored in JBoss Web Server are all mutable by default. If an object changes during the course of the request then it will be replicated after the request ends. To define immutable objects, use one of the following annotations (an illustrative sketch follows the list):
- The Wildfly specific annotation - org.wildfly.clustering.web.annotation.Immutable.
- Any generic immutable annotation.
- Any known immutable type from the JDK implementation.
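The sketch below is illustrative only: UserPreferences is a hypothetical session attribute class, and the example assumes that the org.wildfly.clustering.web.annotation.Immutable annotation named above is available on the application classpath. Marking the attribute immutable tells the session manager that it never changes after being stored, so it does not need to be replicated again at the end of each request.

import java.io.Serializable;

import org.wildfly.clustering.web.annotation.Immutable;

// Hypothetical session attribute; @Immutable signals that instances never
// change once stored, so the manager can skip re-replicating them.
@Immutable
public class UserPreferences implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String theme;
    private final String locale;

    public UserPreferences(String theme, String locale) {
        this.theme = theme;
        this.locale = locale;
    }

    public String getTheme() {
        return theme;
    }

    public String getLocale() {
        return locale;
    }
}

A request could then store the attribute with session.setAttribute("preferences", new UserPreferences("dark", "en_US")); later requests that only read the attribute would not trigger replication.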
Objects may have custom marshalling by defining an Externalizer. By default the Wildfly Externalizer is recognized; however, any implementation of this Externalizer may be used. Additionally, non-serializable objects may be stored without issue as long as they have an Externalizer defined.
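A minimal sketch, for illustration only: it assumes the org.wildfly.clustering.marshalling.Externalizer interface (with writeObject, readObject, and getTargetClass methods) and a hypothetical, non-serializable ShoppingCart attribute class; consult the WildFly marshalling API documentation for the exact contract.

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

import org.wildfly.clustering.marshalling.Externalizer;

// Hypothetical session attribute that is not Serializable itself.
class ShoppingCart {
    private final String customerId;
    private final int itemCount;

    ShoppingCart(String customerId, int itemCount) {
        this.customerId = customerId;
        this.itemCount = itemCount;
    }

    String getCustomerId() { return customerId; }
    int getItemCount() { return itemCount; }
}

// Custom marshalling for ShoppingCart, assuming the WildFly Externalizer
// contract of writeObject/readObject/getTargetClass.
public class ShoppingCartExternalizer implements Externalizer<ShoppingCart> {

    @Override
    public void writeObject(ObjectOutput output, ShoppingCart cart) throws IOException {
        // Write only the fields needed to rebuild the cart.
        output.writeUTF(cart.getCustomerId());
        output.writeInt(cart.getItemCount());
    }

    @Override
    public ShoppingCart readObject(ObjectInput input) throws IOException, ClassNotFoundException {
        // Rebuild the cart from the serialized fields.
        return new ShoppingCart(input.readUTF(), input.readInt());
    }

    @Override
    public Class<ShoppingCart> getTargetClass() {
        return ShoppingCart.class;
    }
}

Externalizer implementations are commonly registered through the Java ServiceLoader mechanism; treat that registration detail as an assumption and verify it against the product documentation.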
39.3.5. Configure the JBoss Web Server Session Manager
The HotRodManager is configured by defining properties on the Manager element inside of context.xml. These are pulled from two separate lists:
- org.apache.catalina.Manager - As the session manager implements this class, many of the Common Attributes are configurable.
- Configuration Parameters - This session manager also uses the HotRod Configuration Properties.
The following table displays the configurable attributes inherited from org.apache.catalina.Manager:
Attribute | Description |
---|---|
name | The name of this cluster manager. The name is used to identify a session manager on a node. The name might get modified by the Cluster element to make it unique in the container. |
sessionIdLength | The length of session ids created by this Manager, measured in bytes, excluding subsequent conversion to a hexadecimal string and excluding any JVM route information used for load balancing. This attribute is deprecated; set the length on a nested SessionIdGenerator element instead. |
secureRandomClass | Name of the Java class that extends java.security.SecureRandom and is used to generate session IDs. If not specified, java.security.SecureRandom is used. |
secureRandomProvider | Name of the provider to use to create the java.security.SecureRandom instances that generate session IDs. If not specified, the platform default provider is used. |
secureRandomAlgorithm | Name of the algorithm to use to create the java.security.SecureRandom instances that generate session IDs. If not specified, the default algorithm of SHA1PRNG is used. |
recordAllActions | Flag for whether to send all actions for a session across Tomcat cluster nodes. If set to false and an attribute has already been modified during the request, only the most recent action for that attribute is sent. Default is false. |
There is also a property specific to the JWS HotRodManager, shown below:
Attribute | Description |
---|---|
persistenceStrategy | Determines whether all of the attributes that compose a session should be serialized together (COARSE) or individually (FINE). Defaults to COARSE. |
In addition to the attributes inherited from Tomcat, the HotRodManager may use any of the properties typically available to a RemoteCacheManager. These are outlined in HotRod Properties.
When using HotRod properties only the property name itself is required, without the infinispan.client.hotrod. prefix. For instance, to configure TCP KEEPALIVE and TCP NODELAY on the manager, the following XML snippet would be used:
<Manager className="org.wildfly.clustering.tomcat.hotrod.HotRodManager"
         tcp_no_delay="true"
         tcp_keep_alive="true"/>
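For comparison only, these two options correspond to the programmatic configuration a standalone HotRod client would use. The sketch below assumes the Infinispan HotRod client API (ConfigurationBuilder and RemoteCacheManager) and hypothetical host names; it is not part of the manager setup itself.

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Sketch: programmatic equivalent of server_list, tcp_no_delay and
// tcp_keep_alive when building a HotRod client directly.
public class HotRodClientExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("www.server1.com").port(7600);
        builder.addServer().host("www.server2.com").port(7600);
        builder.tcpNoDelay(true);
        builder.tcpKeepAlive(true);

        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
        // ... obtain caches via remoteCacheManager.getCache(...) ...
        remoteCacheManager.stop();
    }
}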