Chapter 2. Known Issues
The following issues are known to exist in JBoss Data Grid 6 and will be fixed in a subsequent release.
- BZ#745865 - ServerWorker thread naming problem
- If a server is used for both Hot Rod and Memcached, it is not possible to distinguish between the worker threads for each protocol because they are all named "MemcachedServerWorker". This does not affect the functionality of the server. This behavior persists in JBoss Data Grid 6.
- BZ#760895 - Reopened: Error detecting crashed member during shutdown of EDG 6.0.0.Beta
- Occasionally, when shutting down nodes in a cluster, the following message is reported: "ERROR [org.infinispan.server.hotrod.HotRodServer] ISPN006002: Error detecting crashed member: java.lang.IllegalStateException: Cache '___hotRodTopologyCache' is in 'STOPPING' state and this is an invocation not belonging to an on-going transaction, so it does not accept new invocations. Either restart it or recreate the cache container." This occurs because a node has detected another node's shutdown and is attempting to update the topology cache while it is itself shutting down. The message is harmless and will be removed in a future release. This behavior persists in JBoss Data Grid 6.
- BZ#807674 - JDBC Cache Stores using a JTA Data Source do not participate in cache transactions
- In JBoss Data Grid's library mode, JDBC cache stores can be configured to use a JTA-aware datasource. However, operations performed during a JTA transaction on a cache backed by such a store are persisted to the store outside the transaction's scope. This issue does not apply to JBoss Data Grid's Remote Client-Server mode because all cache operations there are non-transactional. This behavior persists in JBoss Data Grid 6.
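For reference, the affected setup is a library-mode cache whose store points at a JTA datasource. The excerpt below is a minimal sketch, assuming the Infinispan 5.x string-keyed JDBC store schema; the cache name, JNDI URL, and table settings are illustrative:

    <namedCache name="jdbcBackedCache">
        <transaction transactionMode="TRANSACTIONAL"/>
        <loaders>
            <stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:5.2">
                <!-- JTA-aware datasource: writes from this store are made
                     outside the scope of the cache's JTA transaction -->
                <dataSource jndiUrl="java:jboss/datasources/ExampleDS"/>
                <stringKeyedTable prefix="ISPN_ENTRIES">
                    <idColumn name="ID" type="VARCHAR(255)"/>
                    <dataColumn name="DATA" type="BLOB"/>
                    <timestampColumn name="TS" type="BIGINT"/>
                </stringKeyedTable>
            </stringKeyedJdbcStore>
        </loaders>
    </namedCache>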
- BZ#807741 - Reopened: Invalid magic number
- The issue is a race condition on the Hot Rod server, which can lead to topologies being sent erroneously when a new node is added to the cluster. When the issue appears, clients start seeing "Invalid magic number" error messages as a result of unexpected data within the stream. When this problem is encountered, the recommended approach is to restart the client. If the client is not restarted, it may on some occasions recover after the unexpected data is consumed, but this is not guaranteed. If the client recovers without a restart, its topology view does not include one of the newly added nodes, resulting in uneven request distribution. This behavior persists in JBoss Data Grid 6.
- BZ#808623 - Some entries not available during view change
- In rare circumstances, when a node leaves the cluster, instead of moving directly to a new cluster view that contains all nodes except the node that has departed, the cluster splits into two partitions which then merge after a short amount of time. During this time, some nodes do not have access to all the data that previously existed in the cache. After the merge, all nodes regain access to all the data, but changes made during the split may be lost or visible only to part of the cluster.

Normally, when the view changes because a node joins or leaves, the cache data is rebalanced across the new cluster members. However, if the number of nodes that leave the cluster in quick succession equals or exceeds the value of numOwners, keys held only by the departed nodes are lost. This also occurs during a network split: regardless of why the partitions form, at least one partition will not have all the data (assuming the cluster size is greater than numOwners).

While there are multiple partitions, each one can change the data independently, so a remote client may see inconsistencies in the data. When merging, JBoss Data Grid does not attempt to resolve these inconsistencies, so different nodes may hold different values even after the merge. This behavior persists in JBoss Data Grid 6.
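The exposure can be reduced by storing more copies of each entry, at the cost of extra memory and network traffic. The excerpt below is a minimal sketch, assuming the Infinispan 5.x library-mode schema; the cache name is illustrative:

    <namedCache name="distributedCache">
        <clustering mode="distribution">
            <!-- Each key is stored on numOwners nodes; a key's data is lost
                 only if at least numOwners owners leave in quick succession -->
            <hash numOwners="3"/>
        </clustering>
    </namedCache>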
- BZ#810155 - Default configuration may not be optimal for a 2-node cluster
- The default JBoss Data Grid JGroups configuration (using UNICAST2 and RSVP) does not lead to optimal performance in a two-node cluster scenario. Two-node clusters show improved performance with UNICAST and without RSVP. This behavior persists in JBoss Data Grid 6.
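The change amounts to swapping the unicast protocol and dropping RSVP from the JGroups stack. The excerpt below is a minimal sketch, assuming a standard JGroups UDP stack, with the surrounding protocols elided:

    <config xmlns="urn:org:jgroups">
        <!-- ... transport, discovery, and NAKACK protocols ... -->
        <!-- UNICAST in place of the default UNICAST2 -->
        <UNICAST/>
        <!-- ... remaining protocols; RSVP is omitted entirely ... -->
    </config>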
- BZ#818863 - Reopened: Tests for UNORDERED and LRU eviction strategies fail on IBM JDK
- The LinkedHashMap implementation in IBM's JDK sometimes behaves erratically when extended (as is done by the eviction strategy code). This incorrect behavior is exposed by the JBoss Data Grid test suite. If eviction is used, the recommendation is to use a JDK that is not affected by this issue, such as Oracle JDK or OpenJDK. This behavior persists in JBoss Data Grid 6.
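The affected code paths are only reached when eviction is enabled, for example with a configuration such as the following; this is a minimal sketch assuming the Infinispan 5.x library-mode schema, with the cache name illustrative:

    <namedCache name="evictingCache">
        <!-- LRU and UNORDERED are the strategies affected on the IBM JDK -->
        <eviction strategy="LRU" maxEntries="1000"/>
    </namedCache>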
- BZ#822815 - NPE during JGroups Channel Service startup
- Occasionally, when starting a JBoss Data Grid server, the JGroups subsystem does not start because of a NullPointerException during service installation, leaving the server in an unusable state. This situation does not affect data integrity within the cluster; killing the server process and restarting it resolves the problem. This behavior persists in JBoss Data Grid 6.
- BZ#829813 - Command line bind address being ignored for HTTP and Infinispan servers
- The JBoss Data Grid server by default binds its listening ports to a loopback address (127.0.0.1). The -b switch can be used to modify the address on which the public interface is bound. However, since the JBoss Data Grid endpoints are bound to the management interface for security reasons, the -b switch does not affect them. The user should modify the standalone.xml configuration file to place the endpoints on the public interface:

    <socket-binding name="hotrod" interface="public" port="11222"/>

After the above modification, the -b switch will determine the network address on which the Hot Rod port is bound. This behavior persists in JBoss Data Grid 6.