Chapter 6. Configuration
Since the server is based on the WildFly codebase, refer to the WildFly documentation for all subsystems apart from the JGroups, Red Hat Data Grid, and Endpoint subsystems, which are described in this chapter.
6.1. JGroups subsystem configuration
The JGroups subsystem configures the network transport and is only required when clustering multiple Red Hat Data Grid Server nodes together.
The subsystem declaration is enclosed in the following XML element:
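A sketch of the enclosing element, assuming the jgroups schema uses the same 9.4 versioning as the core and endpoint subsystem declarations shown later in this chapter:

```xml
<subsystem xmlns="urn:infinispan:server:jgroups:9.4" default-stack="${jboss.default.jgroups.stack:udp}">
    <!-- stack declarations go here -->
</subsystem>
```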
Within the subsystem, you need to declare the stacks that you wish to use and name them. The default-stack attribute in the subsystem declaration must point to one of the declared stacks. You can switch stacks from the command-line using the jboss.default.jgroups.stack property:
bin/standalone.sh -c clustered.xml -Djboss.default.jgroups.stack=tcp
A stack declaration is composed of a transport, UDP or TCP, followed by a list of protocols. You can tune protocols by adding properties as child elements with this format:
<property name="prop_name">prop_value</property>
Default stacks for Red Hat Data Grid are as follows:
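A simplified sketch of the udp and tcp stack declarations (the full protocol lists are abbreviated, and the socket-binding names are assumptions):

```xml
<stacks default="${jboss.default.jgroups.stack:udp}">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING"/>
        <!-- failure-detection, reliable-delivery and membership protocols follow -->
    </stack>
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="MPING" socket-binding="jgroups-mping"/>
        <!-- ... -->
    </stack>
</stacks>
```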
For some properties, Red Hat Data Grid uses values other than the JGroups defaults to tune performance. You should examine the following files to review the JGroups configuration for Red Hat Data Grid:
- Remote Client/Server Mode:
  - jgroups-defaults.xml
  - infinispan-jgroups.xml
- Library Mode:
  - default-jgroups-tcp.xml
  - default-jgroups-udp.xml
These JGroups files are available in the Red Hat Data Grid source distribution from the Red Hat customer portal.
See JGroups Protocol documentation for more information about available properties and default values.
The default TCP stack uses the MPING protocol for discovery, which uses UDP multicast. If you need to use a different discovery protocol, see the JGroups Discovery Protocols documentation. The following example stack configures the TCPPING discovery protocol with two initial hosts:
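A sketch, with HostA and HostB standing in for your own host names:

```xml
<protocol type="TCPPING">
    <property name="initial_hosts">HostA[7800],HostB[7800]</property>
    <property name="port_range">0</property>
</protocol>
```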
The default configurations come with a variety of pre-configured stacks for different environments. For example, the tcpgossip stack uses Gossip discovery:
<protocol type="TCPGOSSIP">
<property name="initial_hosts">${jgroups.gossip.initial_hosts:}</property>
</protocol>
Use the s3 stack when running in Amazon AWS:
<protocol type="org.jgroups.aws.s3.NATIVE_S3_PING" module="org.jgroups.aws.s3:{infinispanslot}">
<property name="region_name">${jgroups.s3.region:}</property>
<property name="bucket_name">${jgroups.s3.bucket_name:}</property>
<property name="bucket_prefix">${jgroups.s3.bucket_prefix:}</property>
</protocol>
Similarly, when using Google’s Cloud Platform, use the google stack:
<protocol type="GOOGLE_PING">
<property name="location">${jgroups.google.bucket:}</property>
<property name="access_key">${jgroups.google.access_key:}</property>
<property name="secret_access_key">${jgroups.google.secret_access_key:}</property>
</protocol>
Use the dns-ping stack to run Red Hat Data Grid on Kubernetes environments such as OKD or OpenShift:
<protocol type="dns.DNS_PING">
<property name="dns_query">${jgroups.dns_ping.dns_query}</property>
</protocol>
The value of the dns_query property is the DNS query that returns the cluster members. See DNS for Services and Pods for information about Kubernetes DNS naming.
6.2. Red Hat Data Grid subsystem configuration
The Red Hat Data Grid subsystem configures the cache containers and caches.
The subsystem declaration is enclosed in the following XML element:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
...
</subsystem>
6.2.1. Containers
The Red Hat Data Grid subsystem can declare multiple containers. A container is declared as follows:
<cache-container name="clustered" default-cache="default">
...
</cache-container>
Note that, unlike embedded mode, server mode has no implicit default cache; instead, you specify a named cache as the default.
If you need to declare clustered caches (distributed, replicated, invalidation), you also need to specify the <transport/> element, which references an existing JGroups transport. This is not needed if you only intend to use local caches.
<transport executor="infinispan-transport" lock-timeout="60000" stack="udp" cluster="my-cluster-name"/>
6.2.2. Caches
Now you can declare your caches. Please be aware that only the caches declared in the configuration will be available to the endpoints and that attempting to access an undefined cache is an illegal operation. Contrast this with the default Red Hat Data Grid library behaviour where obtaining an undefined cache will implicitly create one using the default settings. The following are example declarations for all four available types of caches:
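A minimal sketch of each type (the cache names and attribute values are illustrative):

```xml
<local-cache name="default"/>
<invalidation-cache name="invalidated" mode="SYNC"/>
<replicated-cache name="replicated" mode="SYNC"/>
<distributed-cache name="distributed" mode="SYNC" owners="2"/>
```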
6.2.3. Expiration
To define a default expiration for entries in a cache, add the <expiration/> element as follows:
<expiration lifespan="2000" max-idle="1000"/>
The possible attributes for the expiration element are:
- lifespan maximum lifespan of a cache entry, after which the entry is expired cluster-wide, in milliseconds. -1 means the entries never expire.
- max-idle maximum idle time a cache entry will be maintained in the cache, in milliseconds. If the idle time is exceeded, the entry will be expired cluster-wide. -1 means the entries never expire.
- interval interval (in milliseconds) between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic eviction process altogether, set interval to -1.
6.2.4. Eviction
To define an eviction strategy for a cache, add the <memory> element to your <*-cache />.
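For example, to cap a cache at 10000 entries, counted per entry (a sketch, assuming the count-based <object> form of the 9.4 memory schema):

```xml
<local-cache name="bounded">
    <memory>
        <object size="10000"/>
    </memory>
</local-cache>
```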
For more information about configuring the eviction strategy, see Eviction and Data Container in the Red Hat Data Grid User Guide.
6.2.5. Locking
To define the locking configuration for a cache, add the <locking/> element as follows:
<locking isolation="REPEATABLE_READ" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
The possible attributes for the locking element are:
- isolation sets the cache locking isolation level. Can be NONE, READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, or SERIALIZABLE. Defaults to REPEATABLE_READ.
- striping if true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.
- acquire-timeout maximum time to attempt a particular lock acquisition.
- concurrency-level concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with Red Hat Data Grid.
- concurrent-updates for non-transactional caches only: if set to true (default value), the cache keeps data consistent in the case of concurrent updates. For clustered caches this comes at the cost of an additional RPC, so if you don’t expect your application to write data concurrently, disabling this flag increases performance.
6.2.6. Transactional Operations with Hot Rod
Hot Rod clients can take advantage of transactional capabilities when performing cache operations. No other protocols that Red Hat Data Grid supports offer transactional capabilities.
6.2.7. Loaders and Stores
Loaders and stores can be defined in server mode in almost the same way as in embedded mode.
However, in server mode it is no longer necessary to define the <persistence>…</persistence> tag. Instead, a store’s attributes are now defined on the store type element. For example, to configure the H2 database with a distributed cache in domain mode we define the "default" cache as follows in our domain.xml configuration:
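A sketch of such a declaration (the store attributes and the table/column definitions are illustrative):

```xml
<distributed-cache name="default">
    <!-- store attributes sit directly on the store element; no <persistence> wrapper -->
    <string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleDS"
                             passivation="false" preload="true" purge="false" shared="false">
        <string-keyed-table prefix="ISPN">
            <id-column name="id" type="VARCHAR"/>
            <data-column name="datum" type="BINARY"/>
            <timestamp-column name="version" type="BIGINT"/>
        </string-keyed-table>
    </string-keyed-jdbc-store>
</distributed-cache>
```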
Another important thing to note in this example, is that we use the "ExampleDS" datasource which is defined in the datasources subsystem in our domain.xml configuration as follows:
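A typical H2 definition for this datasource (the connection URL and credentials shown are the stock WildFly defaults and may differ in your configuration):

```xml
<datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
    <driver>h2</driver>
    <security>
        <user-name>sa</user-name>
        <password>sa</password>
    </security>
</datasource>
```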
For additional examples of store configurations, please view the configuration templates in the default "domain.xml" file provided with the server distribution at ./domain/configuration/domain.xml.
6.2.8. State Transfer
To define the state transfer configuration for a distributed or replicated cache, add the <state-transfer/> element as follows:
<state-transfer enabled="true" timeout="240000" chunk-size="512" await-initial-transfer="true" />
The possible attributes for the state-transfer element are:
- enabled if true, this will cause the cache to ask neighboring caches for state when it starts up, so the cache starts 'warm', although it will impact startup time. Defaults to true.
- timeout the maximum amount of time (ms) to wait for state from neighboring caches, before throwing an exception and aborting startup. Defaults to 240000 (4 minutes).
- chunk-size the number of cache entries to batch in each transfer. Defaults to 512.
- await-initial-transfer if true, this will cause the cache to wait for initial state transfer to complete before responding to requests. Defaults to true.
6.3. Endpoint subsystem configuration
The endpoint subsystem exposes a whole container (or in the case of Memcached, a single cache) over a specific connector protocol. You can define as many connectors as you need, provided they bind on different interfaces/ports.
The subsystem declaration is enclosed in the following XML element:
<subsystem xmlns="urn:infinispan:server:endpoint:9.4">
...
</subsystem>
6.3.1. Hot Rod
The following connector declaration enables a HotRod server using the hotrod socket binding (declared within a <socket-binding-group /> element) and exposing the caches declared in the local container, using defaults for all other settings.
<hotrod-connector socket-binding="hotrod" cache-container="local" />
The connector will create a supporting topology cache with default settings. If you wish to tune these settings add the <topology-state-transfer /> child element to the connector as follows:
<hotrod-connector socket-binding="hotrod" cache-container="local">
<topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000" />
</hotrod-connector>
The Hot Rod connector can be further tuned with additional settings such as concurrency and buffering. See the Common Protocol Connector Settings section for additional details.
Furthermore, the Hot Rod connector can be secured using SSL. First, declare an SSL server identity within a security realm in the management section of the configuration file. The SSL server identity should specify the path to a keystore and its secret; refer to the WildFly documentation for details. Next, add the <security /> element to the Hot Rod connector as follows:
<hotrod-connector socket-binding="hotrod" cache-container="local">
<security ssl="true" security-realm="ApplicationRealm" require-ssl-client-auth="false" />
</hotrod-connector>
6.3.2. Memcached
The following connector declaration enables a Memcached server using the memcached socket binding (declared within a <socket-binding-group /> element) and exposing the memcachedCache cache declared in the local container, using defaults for all other settings. Because of limitations in the Memcached protocol, only one cache can be exposed by a connector. If you wish to expose more than one cache, declare additional memcached-connectors on different socket-bindings.
<memcached-connector socket-binding="memcached" cache-container="local"/>
6.3.3. REST
The following connector declaration enables a REST server using the rest socket binding, exposing the caches declared in the local container, and securing access with the other security domain using BASIC authentication:
<rest-connector socket-binding="rest" cache-container="local" security-domain="other" auth-method="BASIC"/>
6.3.4. Common Protocol Connector Settings
The HotRod and Memcached protocol connectors support a number of tuning attributes in their declaration:
- worker-threads Sets the number of worker threads. Defaults to 160.
- idle-timeout Specifies the maximum time in seconds that connections from clients will be kept open without activity. Defaults to -1 (connections never time out)
- tcp-nodelay Affects TCP NODELAY on the TCP stack. Defaults to enabled.
- send-buffer-size Sets the size of the send buffer.
- receive-buffer-size Sets the size of the receive buffer.
6.3.5. Starting and Stopping Red Hat Data Grid Endpoints
Use the Command-Line Interface (CLI) to start and stop Red Hat Data Grid endpoint connectors.
Commands to start and stop endpoint connectors:
- Apply to individual endpoints. To stop or start all endpoint connectors, you must run the command on each endpoint connector.
- Take effect on single nodes only (not cluster-wide).
Procedure
- Start the CLI and connect to Red Hat Data Grid.
- List the endpoint connectors in the datagrid-infinispan-endpoint subsystem, as follows:

  [standalone@localhost:9990 /] ls subsystem=datagrid-infinispan-endpoint
  hotrod-connector     memcached-connector  rest-connector       router-connector

- Navigate to the endpoint connector you want to start or stop, for example:

  [standalone@localhost:9990 /] cd subsystem=datagrid-infinispan-endpoint
  [standalone@localhost:9990 subsystem=datagrid-infinispan-endpoint] cd rest-connector=rest-connector

- Use the :stop-connector and :start-connector commands as appropriate:

  [standalone@localhost:9990 rest-connector=rest-connector] :stop-connector
  {"outcome" => "success"}
  [standalone@localhost:9990 rest-connector=rest-connector] :start-connector
  {"outcome" => "success"}
6.3.6. Protocol Interoperability
Clients exchange data with Red Hat Data Grid through endpoints such as REST or Hot Rod.
Each endpoint uses a different protocol so that clients can read and write data in a suitable format. Because Red Hat Data Grid can interoperate with multiple clients at the same time, it must convert data between client formats and the storage formats.
For more information, see the Protocol Interoperability topic in the User Guide.
6.3.7. Custom Marshaller Bridges
Red Hat Data Grid provides two marshalling bridges for marshalling client/server requests using the Kryo and Protostuff libraries. To utilise either of these marshallers, you simply place the dependency of the marshaller you require in your client pom. Custom schemas for object marshalling must then be registered with the selected library using the library’s api on the client or by implementing a RegistryService for the given marshaller bridge. Examples of how to achieve this for both libraries are presented below:
6.3.7.1. Protostuff
Add the protostuff marshaller dependency to your pom:
<dependency>
<groupId>org.infinispan</groupId>
<artifactId>infinispan-marshaller-protostuff</artifactId>
<version>${version.infinispan}</version>
</dependency>
Replace ${version.infinispan} with the appropriate version of Red Hat Data Grid.
To register custom Protostuff schemas in your own code, you must register the custom schema with Protostuff before any marshalling begins. This can be achieved by simply calling:
RuntimeSchema.register(ExampleObject.class, new ExampleObjectSchema());
Or, you can implement a service provider for the SchemaRegistryService.java interface, placing all Schema registrations in the register() method. Implementations of this interface are loaded via Java’s ServiceLoader api, therefore the full path of the implementing class(es) should be provided in a META-INF/services/org/infinispan/marshaller/protostuff/SchemaRegistryService file within your deployment jar.
6.3.7.2. Kryo
Add the kryo marshaller dependency to your pom:
<dependency>
<groupId>org.infinispan</groupId>
<artifactId>infinispan-marshaller-kryo</artifactId>
<version>${version.infinispan}</version>
</dependency>
Replace ${version.infinispan} with the appropriate version of Red Hat Data Grid.
To register custom Kryo serializers in your own code, you must register them with Kryo before any marshalling begins. This can be achieved by implementing a service provider for the SerializerRegistryService.java interface, placing all serializer registrations in the register(Kryo) method, where serializers should be registered with the supplied Kryo object using the Kryo api, e.g. kryo.register(ExampleObject.class, new ExampleObjectSerializer()). Implementations of this interface are loaded via Java’s ServiceLoader api, therefore the full path of the implementing class(es) should be provided in a META-INF/services/org/infinispan/marshaller/kryo/SerializerRegistryService file within your deployment jar.
6.3.7.3. Server Compatibility Mode
When using the Protostuff/Kryo bridges in compatibility mode, the class files of all custom objects must be placed on the classpath of the server. To achieve this, follow the steps outlined in the Protocol Interoperability section to place a jar containing all of your custom classes on the server’s classpath.
When utilising a custom marshaller in compatibility mode, the marshaller and its runtime dependencies must also be on the server’s classpath. To aid with this step, we have created a "bundle" jar for each of the bridge implementations which includes all of the runtime class files required by the bridge and underlying library. Therefore, it is only necessary to include this single jar on the server’s classpath.
Bundle jar downloads:
Jar files containing custom classes must be placed in the same module/directory as the custom marshaller bundle so that the marshaller can load them. For example, if you register the marshaller bundle in modules/system/layers/base/org/infinispan/main/modules.xml, then you must also register your custom classes there.
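For instance, the bundle jar and a jar of custom classes could be added as resource roots in that module descriptor (the file names and module namespace version are illustrative):

```xml
<module xmlns="urn:jboss:module:1.3" name="org.infinispan.main">
    <resources>
        <!-- existing resource roots ... -->
        <resource-root path="infinispan-marshaller-kryo-bundle.jar"/>
        <resource-root path="my-custom-classes.jar"/>
    </resources>
</module>
```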
6.3.7.3.1. Registering Custom Schemas/Serializers
Custom serializers/schemas for the Kryo/Protostuff marshallers must be registered via their respective service interfaces in compatibility mode. To achieve this, it is necessary for a JAR that contains the service provider to be registered in the same directory or module as the marshaller bundle and custom classes.
It is not necessary for the service provider implementation to be provided in the same JAR as the user’s custom classes. However, the JAR that contains the provider must be in the same directory/module as the marshaller and custom class JAR files.