Performance Tuning Guide
For use with Red Hat JBoss Data Grid 7.0
Abstract
Chapter 1. Introduction
- Schemaless key-value store – JBoss Data Grid is a NoSQL database that provides the flexibility to store different objects without a fixed data model.
- Grid-based data storage – JBoss Data Grid is designed to easily replicate data across multiple nodes.
- Elastic scaling – Adding and removing nodes is simple and non-disruptive.
- Multiple access protocols – It is easy to access the data grid using REST, Memcached, Hot Rod, or a simple map-like API.
1.1. Supported Configurations
1.2. Components and Versions
1.3. About Performance Tuning in Red Hat JBoss Data Grid
Chapter 2. Java Virtual Machine Settings
The JVM's heap size determines how much memory is allowed for the application to consume, and is controlled by the following parameters:
- -Xms - Defines the minimum heap size allowed.
- -Xmx - Defines the maximum heap size allowed.
- -XX:NewRatio - Defines the ratio between the young and old generations. Should not be used if -Xmn is enabled.
- -Xmn - Defines the minimum and maximum size of the young generation.
-Xms and -Xmx should be identical to prevent dynamic resizing of the heap, which can result in longer garbage collection pauses.
The choice of garbage collection algorithm is largely determined by whether throughput is valued over minimizing the time the JVM is fully paused. As JBoss Data Grid applications are often clustered, it is recommended to choose a low-pause collector to prevent network timeouts. The following parameters assume the CMS (Concurrent Mark-Sweep) collector is chosen:
- -XX:+UseConcMarkSweepGC - Enables usage of the CMS collector.
- -XX:+CMSClassUnloadingEnabled - Allows class unloading when the CMS collector is enabled.
- -XX:+UseParNewGC - Utilizes a parallel collector for the young generation. This parameter minimizes pausing by using multiple collection threads in parallel.
- -XX:+DisableExplicitGC - Prevents explicit garbage collections, such as calls to System.gc().
Large, or Huge, Pages are contiguous pages of memory that are much larger than the default page size defined at the OS level. By utilizing large pages the JVM has access to memory that is referenced more efficiently and that may not be swapped out, resulting in more consistent behavior from the JVM. Large pages are discussed in further detail at Section 3.1, “About Page Memory”.
- -XX:+UseLargePages - Instructs the JVM to allocate memory in Large Pages. These pages must be configured at the OS level for this parameter to function successfully.
This parameter relates to JIT (Just-In-Time) compilation, which requires extended loading times during startup, but provides extensive compilation and optimization benefits after the startup process completes.
- -server - Enables server mode for the JVM.
2.1. Memory Requirements
The default minimum amount of memory required to run JBoss Data Grid varies based on the configuration in use:
- standalone.conf - The server should have a minimum of 2 GB of RAM for a single JBoss Data Grid instance, as the default heap may grow up to 1.3 GB, and Metaspace may occupy up to 256 MB of memory.
- domain.conf - The server should have a minimum of 2.5 GB of RAM for a single JBoss Data Grid managed domain consisting of two JBoss Data Grid server instances, as the heap may grow up to 512 MB for the domain controller, the heap for each server instance may grow up to 256 MB, and the Metaspace may occupy up to 256 MB of memory for the domain controller and each server instance.
There is no official memory recommendation for JBoss Data Grid, as the memory requirements will vary depending on the application and workload in use. As the heap is increased more data may be stored in the grid.
Each JVM process has a memory footprint that adheres to the following formula:
JvmProcessMemory = JvmHeap + Metaspace + (ThreadStackSize * Number of Threads) + Jvm-native-c++-heap
Jvm-native-c++-heap will vary based on the number of native threads and whether any native libraries are used; however, for a default installation it is safe to assume this will use no more than 256 MB of memory.
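As a rough illustration, the formula above can be evaluated for a hypothetical instance (all sizes below are assumptions chosen for the arithmetic, not recommendations):

```java
// Estimate total JVM process memory from the formula above.
// All sizes are illustrative assumptions, not JBoss Data Grid recommendations.
public class JvmFootprint {
    static long mb(long n) { return n * 1024 * 1024; }

    public static void main(String[] args) {
        long jvmHeap = mb(8 * 1024);     // -Xms8g -Xmx8g
        long metaspace = mb(256);        // assumed Metaspace ceiling
        long threadStackSize = mb(1);    // -Xss1m, a common 64-bit default
        long numberOfThreads = 500;
        long nativeHeap = mb(256);       // upper bound suggested above

        long total = jvmHeap + metaspace
                   + (threadStackSize * numberOfThreads) + nativeHeap;
        System.out.println(total / mb(1) + " MB"); // prints "9204 MB"
    }
}
```

Note that an 8 GB heap already implies over 9 GB of process memory, which is why the RAM minimums in Section 2.1 leave headroom above the bare heap size.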
2.2. JVM Example Configurations
Example 2.1. 8GB JVM
Example 2.2. 32GB JVM
Example 2.3. 64GB JVM
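The bodies of these examples did not survive conversion. A plausible shape for the 32 GB case, combining the parameters described in this chapter (the -Xmn value and other specifics are assumptions, not Red Hat recommendations), would be:

```shell
# Illustrative only: appended to the server's JAVA_OPTS (e.g. in standalone.conf)
JAVA_OPTS="$JAVA_OPTS -server -Xms32g -Xmx32g -Xmn8g \
  -XX:+UseLargePages \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:+CMSClassUnloadingEnabled -XX:+DisableExplicitGC"
```

The 8 GB and 64 GB variants would follow the same pattern with -Xms/-Xmx/-Xmn scaled accordingly.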
Chapter 3. Configure Page Memory
3.1. About Page Memory
Large page memory is allocated using the shmget() system call. Additionally, the proper security permissions are required for the memlock() system call. Any application that does not have the ability to use large page memory behaves as if the large page memory does not exist, which can be a problem.
3.2. Configure Page Memory
Procedure 3.1. Configure Page Memory for Red Hat Enterprise Linux
Set the Shared Memory Segment Size
As root, set the maximum size of a shared memory segment in bytes; below we define this to be 32 GB:
# echo "kernel.shmmax = 34359738368" >> /etc/sysctl.conf
Set the Huge Pages
The number of huge pages is set to the total amount of memory the JVM will consume (heap, Metaspace, thread stacks, native code) divided by the Hugepagesize. In Red Hat Enterprise Linux systems Hugepagesize is set to 2048 kB (2 MB).
- The number of huge pages required can be determined by the following formula:
Heap + Metaspace + Native JVM Memory + (Number of Threads * Thread Stack Size)
- Assuming a JVM with a 32 GB heap, 2 GB of Metaspace, a 512 MB native footprint, and 500 threads, each with a default stack size of 1 MB, we have the following equation:
32*(1024*1024*1024) + 2*(1024*1024*1024) + 512*(1024*1024) + (500 * 1024*1024)
- The resulting value, 37568380928 bytes, can now be converted to huge pages. Since a single huge page holds 2 MB, we perform the following division:
37568380928 / (2*1024*1024)
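As a sanity check, the arithmetic in this step can be reproduced in a few lines (using the same illustrative sizes as above):

```java
// Reproduce the huge page calculation: total JVM memory in bytes,
// then the number of 2 MB huge pages needed to back it.
public class HugePageCount {
    public static void main(String[] args) {
        long gib = 1024L * 1024 * 1024;
        long mib = 1024L * 1024;
        long total = 32 * gib        // heap
                   + 2 * gib         // Metaspace
                   + 512 * mib       // native JVM memory
                   + 500 * mib;      // 500 threads * 1 MB stack each
        long hugePageSize = 2 * mib; // Hugepagesize = 2048 kB on RHEL
        System.out.println(total + " bytes -> "
                + (total / hugePageSize) + " huge pages");
        // prints "37568380928 bytes -> 17914 huge pages"
    }
}
```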
As root, set the number of huge pages determined from the previous steps to be allocated by the operating system:
# echo "vm.nr_hugepages = 17914" >> /etc/sysctl.conf
Assign Shared Memory Segment Permissions
As root, set the ID of the user group that is allowed to create shared memory segments using the hugetlb_shm_group file. This value should match the group ID of the user running the JVM:
# echo "vm.hugetlb_shm_group = 500" >> /etc/sysctl.conf
Update the Resource Limits
To allow a user to lock the required amount of memory, update the resource limits in the /etc/security/limits.conf file by adding the following:
jboss soft memlock unlimited
jboss hard memlock unlimited
This change allows the user jboss to lock the system's available memory.
Configure Authentication using PAM
Linux's PAM handles authentication for applications and services. Ensure that the configured system resource limits apply when using su and sudo as follows:
Configure PAM for su
Add the following line to the /etc/pam.d/su file:
session required pam_limits.so
Configure PAM for sudo
Add the following line to the /etc/pam.d/sudo file:
session required pam_limits.so
- Reboot the system for the changes to take effect. Since huge pages require a contiguous block of memory, they must be allocated at system boot; attempting to claim them dynamically while the system is running may result in system hangs if the memory cannot be reclaimed.
Procedure 3.2. Configure Page Memory for the JVM
Set the Heap Size
Use the -Xms and -Xmx parameters to set the minimum and maximum heap sizes for your JVM, as discussed in Chapter 2, Java Virtual Machine Settings.
Enable Large Pages
Enable large pages for the JVM by adding the following parameter, as discussed in Chapter 2, Java Virtual Machine Settings:
-XX:+UseLargePages
Chapter 4. Networking Configuration
4.1. TCP Settings
Transmission Control Protocol (TCP) is a core part of the Internet Protocol (IP) communication protocol suite. Computers use the TCP/IP protocols to communicate with each other over the Internet.
4.1.1. Adjusting TCP Send/Receive Window Settings
Procedure 4.1. Set the TCP Send and Receive Windows
Adjust the Send and Receive Window Sizes
Adjust the size of the send and receive windows by adding the following lines to the /etc/sysctl.conf file as root:
- Add the following line to set the send window size to the recommended value (640 KB):
net.core.wmem_max=655360
- Add the following line to set the receive window size to the recommended value (25 MB):
net.core.rmem_max=26214400
Apply Changes Immediately
Optionally, to load the new values into a running kernel (without a reboot), enter the following command as root:
# sysctl -p
If the system is rebooted after the first step, this second step is unnecessary.
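The kernel values above are upper bounds; individual applications may still request per-socket buffer sizes up to those maxima through the standard socket API. A minimal sketch using plain java.net (the size actually granted is capped by net.core.wmem_max / rmem_max, so it is not checked here):

```java
import java.net.Socket;

public class SocketBuffers {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Request buffers matching the recommended kernel maxima;
            // the kernel may grant less if the sysctl limits are lower.
            socket.setSendBufferSize(655360);       // 640 KB
            socket.setReceiveBufferSize(26214400);  // 25 MB
            System.out.println("buffer sizes requested");
        }
    }
}
```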
Chapter 5. Return Values
5.1. About Return Values
5.2. Disabling Return Values
By default, the put() and remove() API operations return the original or previous value in the cache. However, if the original value is not required, this operation is wasteful.
To avoid this overhead, return values may be disabled (for both put() and remove()). Implement this solution as follows:
Procedure 5.1. Disable Return Values
- Set the IGNORE_RETURN_VALUES flag. This flag signals that the operation's return value is ignored. For example:
cache.getAdvancedCache().withFlags(Flag.IGNORE_RETURN_VALUES)
- Set the SKIP_CACHE_LOAD flag. This flag prevents entries from being loaded from any configured cache stores. For example:
cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_LOAD)
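Both flags can be combined in a single call; the following fragment is a sketch (it assumes an existing cache reference and the JBoss Data Grid API on the classpath):

```java
// Skip both the return value and any cache store lookup for this put:
cache.getAdvancedCache()
     .withFlags(Flag.IGNORE_RETURN_VALUES, Flag.SKIP_CACHE_LOAD)
     .put("key", "value");
```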
Chapter 6. Marshalling
- transform data for relay to other JBoss Data Grid nodes within the cluster.
- transform data to be stored in underlying cache stores.
6.1. About the JBoss Marshalling Framework
The JBoss Marshalling framework provides high-performance java.io.ObjectOutput and java.io.ObjectInput implementations compared to the standard java.io.ObjectOutputStream and java.io.ObjectInputStream.
6.2. Customizing Marshalling
Classes may implement java.io.Externalizable so that a custom method of marshalling/unmarshalling is performed. With this approach the target class may be created in a variety of ways (direct instantiation, factory methods, reflection, etc.), and the developer has complete control over the provided stream.
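As a self-contained illustration of this approach using only the JDK (the Book class and its fields are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

// A hypothetical Book class controlling its own serialized form.
public class Book implements Externalizable {
    private String title;
    private String author;

    // Externalizable requires a public no-argument constructor.
    public Book() { }

    public Book(String title, String author) {
        this.title = title;
        this.author = author;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(title);
        out.writeUTF(author);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        title = in.readUTF();
        author = in.readUTF();
    }

    public String getTitle() { return title; }

    public static void main(String[] args) throws Exception {
        // Round-trip through the standard object streams:
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new Book("Dune", "Frank Herbert"));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            System.out.println(((Book) ois.readObject()).getTitle()); // prints "Dune"
        }
    }
}
```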
To configure a class for custom marshalling, an implementation of org.infinispan.marshall.AdvancedExternalizer must be provided. Typically this is performed in a static inner class, as seen in the below externalizer for a Book class:
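The original code listing for this example appears to have been lost during conversion. Based on the methods referenced below (writeObject(), readObject(), getTypeClasses(), and getId()), an externalizer of the following shape is implied; treat it as a sketch against the Infinispan API rather than the original listing (the ID 2345 is an arbitrary illustrative value):

```java
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.Collections;
import java.util.Set;

import org.infinispan.marshall.AdvancedExternalizer;

public class Book {
    final String name;
    final String author;

    public Book(String name, String author) {
        this.name = name;
        this.author = author;
    }

    public static class BookExternalizer implements AdvancedExternalizer<Book> {
        @Override
        public void writeObject(ObjectOutput output, Book book) throws IOException {
            output.writeUTF(book.name);
            output.writeUTF(book.author);
        }

        @Override
        public Book readObject(ObjectInput input) throws IOException {
            return new Book(input.readUTF(), input.readUTF());
        }

        @Override
        public Set<Class<? extends Book>> getTypeClasses() {
            return Collections.<Class<? extends Book>>singleton(Book.class);
        }

        @Override
        public Integer getId() {
            return 2345; // avoid the reserved ranges listed in Section 6.3
        }
    }
}
```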
Once the writeObject() and readObject() methods have been implemented, the Externalizer may be linked with the classes it externalizes; this is accomplished with the getTypeClasses() method seen in the above example.
Each Externalizer also requires a unique identifier, returned by the getId() method above. This value is used to identify the Externalizer at runtime. A list of ID ranges used by JBoss Data Grid, which should be avoided in custom Externalizer implementations, may be found at Section 6.3, “JBoss Data Grid Externalizer IDs”.
Custom Marshallers may be registered with JBoss Data Grid programmatically or declaratively, as seen in the following examples:
Example 6.1. Declaratively Register a Custom Marshaller
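The declarative listing for this example is also missing from this conversion; in the embedded configuration schema, registration takes roughly the following form (element and attribute names should be checked against the schema shipped with your JBoss Data Grid version):

```xml
<serialization>
    <advanced-externalizer class="Book$BookExternalizer"/>
</serialization>
```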
Example 6.2. Programmatically Register a Custom Marshaller
GlobalConfigurationBuilder builder = ...
builder.serialization()
.addAdvancedExternalizer(new Book.BookExternalizer());
6.3. JBoss Data Grid Externalizer IDs
| Module Name | ID Range |
|---|---|
| Infinispan Tree Module | 1000-1099 |
| Infinispan Server Modules | 1100-1199 |
| Hibernate Infinispan Second Level Cache | 1200-1299 |
| Infinispan Lucene Directory | 1300-1399 |
| Hibernate OGM | 1400-1499 |
| Hibernate Search | 1500-1599 |
| Infinispan Query Module | 1600-1699 |
| Infinispan Remote Query Module | 1700-1799 |
| Infinispan Scripting Module | 1800-1849 |
| Infinispan Server Event Logger Module | 1850-1899 |
| Infinispan Remote Store | 1900-1999 |
Chapter 7. JMX
7.1. About Java Management Extensions (JMX)
Resources managed through JMX are exposed as MBeans.
7.2. Using JMX with Red Hat JBoss Data Grid
7.3. Enabling JMX with Red Hat JBoss Data Grid
Remote JMX connections may be enabled by defining the com.sun.management.jmxremote.port parameter. In addition, it is recommended to secure the remote connection when this is used in a production environment.
Example 7.1. Enable JMX for Remote Connections using the OpenJDK
The following example assumes a keystore has already been created, and will configure a standalone instance to accept incoming connections on port 3333 while using the created keystore.
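The example listing itself is missing from this conversion; with the stock OpenJDK system properties, the configuration would look roughly as follows (the keystore path and password are placeholders):

```shell
# Appended to the server's JAVA_OPTS (e.g. in standalone.conf)
JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote.port=3333 \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Djavax.net.ssl.keyStore=/path/to/server.keystore \
  -Djavax.net.ssl.keyStorePassword=secret"
```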
Chapter 8. Hot Rod Server
8.1. About Hot Rod
8.2. About Hot Rod Servers in Red Hat JBoss Data Grid
8.3. Worker Threads in the Hot Rod Server
8.3.1. About Worker Threads
8.3.2. Change Number of Worker Threads
The default number of worker threads in the Hot Rod server is 160, and may be changed. The number of worker threads may be specified as an attribute on each interface, as seen in the following example:
<hotrod-connector socket-binding="hotrod" cache-container="local" worker-threads="200">
<!-- Additional configuration here -->
</hotrod-connector>
Appendix A. Revision History
| Revision | Date |
|---|---|
| 7.0-1 | Wed 20 July 2016 |
| 7.0-0 | Tue 31 May 2016 |