Chapter 2. Known Issues


The following issues are known to exist in JBoss Data Grid 6.0.1 and will be fixed in a subsequent release.
BZ#745865 - ServerWorker thread naming problem
If a server is used for both Hot Rod and Memcached, it is not possible to distinguish between the worker threads for each protocol because they are all named "MemcachedServerWorker". This does not affect the functionality of the server.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#760895 - Reopened: Error detecting crashed member during shutdown of EDG 6.0.0.Beta
Occasionally, when shutting down nodes in a cluster, the following message is reported: "ERROR [org.infinispan.server.hotrod.HotRodServer] ISPN006002: Error detecting crashed member: java.lang.IllegalStateException: Cache '___hotRodTopologyCache' is in 'STOPPING' state and this is an invocation not belonging to an on-going transaction, so it does not accept new invocations. Either restart it or recreate the cache container."
This occurs because a node has detected another node's shutdown and is attempting to update the topology cache while it is itself shutting down. The message is harmless and will be removed in a future release.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#807674 - JDBC Cache Stores using a JTA Data Source do not participate in cache transactions
In JBoss Data Grid's library mode, JDBC cache stores can be configured to use a JTA-aware datasource. However, operations performed on a cache backed by such a store during a JTA transaction are persisted to the store outside of the transaction's scope. This issue does not apply to JBoss Data Grid's Remote Client-Server mode because all cache operations are non-transactional in that mode.
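The following is a minimal sketch of the affected scenario, assuming a library-mode, transactional cache backed by a JDBC cache store that uses a JTA datasource; the configuration file name and cache name are illustrative assumptions, not part of the original report.
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

// Minimal sketch, assuming a library-mode cache named "jdbcBackedCache" that is
// transactional and backed by a JDBC cache store using a JTA datasource.
public class JtaStoreExample {
    public static void main(String[] args) throws Exception {
        DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan-config.xml");
        Cache<String, String> cache = cacheManager.getCache("jdbcBackedCache");

        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
        tm.begin();
        cache.put("key", "value");
        // Because of this issue, the write reaches the JDBC store outside the
        // scope of the JTA transaction, even if the transaction is rolled back.
        tm.rollback();

        cacheManager.stop();
    }
}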
This behavior persists in JBoss Data Grid 6.0.1.
BZ#807741 - Reopened: Invalid magic number
The issue is a race condition on the Hot Rod server, which can cause topologies to be sent erroneously when a new node is added to the cluster. When the issue occurs, clients report "Invalid magic number" error messages due to unexpected data in the stream.
In this situation, the recommended approach is to restart the client. If the client is not restarted, it may recover after the unexpected data has been consumed, but this is not guaranteed. If the client recovers without a restart, its topology view may be missing one of the newly added nodes, resulting in uneven request distribution.
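A minimal sketch of the recommended workaround follows, assuming the Java Hot Rod client (RemoteCacheManager); the class name and the use of the default client configuration are illustrative assumptions.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

// Minimal sketch of the recommended workaround: when "Invalid magic number"
// errors appear, stop the Hot Rod client and create a fresh one so that it
// re-reads the cluster topology. The no-argument constructor uses the client's
// default configuration (for example, hotrod-client.properties); this is an
// illustrative assumption.
public class HotRodClientRestart {
    private RemoteCacheManager remoteCacheManager = new RemoteCacheManager();

    public void restartClient() {
        // Discard the client that received the corrupted stream ...
        remoteCacheManager.stop();
        // ... and replace it with a new instance holding a fresh topology view.
        remoteCacheManager = new RemoteCacheManager();
    }

    public RemoteCache<String, String> defaultCache() {
        return remoteCacheManager.getCache();
    }
}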
This behavior persists in JBoss Data Grid 6.0.1.
BZ#808623 - Some entries not available during view change
In rare circumstances, when a node leaves the cluster, instead of moving directly to a new cluster view that contains all nodes except the one that has departed, the cluster splits into two partitions, which then merge after a short amount of time. During this time, some nodes do not have access to all the data that previously existed in the cache. After the merge, all nodes regain access to all the data, but changes made during the split may be lost or may be visible only to part of the cluster.
Normally, when the view changes because a node joins or leaves, the cache data is rebalanced across the new cluster members. However, if the number of nodes that leave the cluster in quick succession is equal to or greater than the value of numOwners, keys owned by the departed nodes are lost. The same occurs during a network split: regardless of why the partitions formed, at least one partition will not have all the data (assuming the cluster size is greater than numOwners).
While there are multiple partitions, each one can modify the data independently, so a remote client may see inconsistencies in the data. When merging, JBoss Data Grid does not attempt to resolve these inconsistencies, so different nodes may hold different values even after the merge.
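The following is a minimal sketch of how numOwners is set programmatically, assuming library mode and the Infinispan ConfigurationBuilder API; the value of 3 is an example, not a recommendation.
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Minimal sketch: the larger numOwners is, the more nodes must leave in quick
// succession before data can be lost. The value 3 is an illustrative assumption.
public class NumOwnersExample {
    public static Configuration distributedCacheConfig() {
        return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    .hash().numOwners(3)
                .build();
    }
}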
This behavior persists in JBoss Data Grid 6.0.1.
BZ#818092 - NPE in Externalizer on shutdown
A workaround has been added to avoid a NullPointerException in the Externalizer during shutdown until AS7 implements the correct order for stopping Infinispan services.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#818863 - Reopened: Tests for UNORDERED and LRU eviction strategies fail on IBM JDK
The LinkedHashMap implementation in IBM's JDK sometimes behaves erratically when extended (as the eviction strategy code does). This incorrect behavior is exposed by the JBoss Data Grid test suite. If eviction is used, the recommendation is to use a JDK that is not affected by this issue, such as Oracle JDK or OpenJDK.
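For reference, the following is a minimal sketch of an LRU eviction configuration (the feature affected by this issue), assuming library mode and the Infinispan ConfigurationBuilder API; the maxEntries value is an illustrative assumption.
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

// Minimal sketch of an LRU eviction configuration, the code path affected by
// this issue on the IBM JDK. The maxEntries value is an illustrative assumption.
public class LruEvictionExample {
    public static Configuration lruCacheConfig() {
        return new ConfigurationBuilder()
                .eviction()
                    .strategy(EvictionStrategy.LRU)
                    .maxEntries(1000)
                .build();
    }
}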
This behavior persists in JBoss Data Grid 6.0.1.
BZ#822815 - NPE during JGroups Channel Service startup
Occasionally, when starting a JBoss Data Grid server, the JGroups subsystem fails to start because of a NullPointerException during service installation, leaving the server in an unusable state. This situation does not affect data integrity within the cluster; killing the server process and restarting it resolves the problem.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#829813 - Command line bind address being ignored for HTTP and Infinispan servers
The JBoss Data Grid server by default binds its listening ports to a loopback address (127.0.0.1). The -b switch can be used to modify the address to which the public interface is bound. However, because the JBoss Data Grid endpoints are bound to the management interface for security reasons, the -b switch does not affect them. To place the endpoints on the public interface, modify the standalone.xml configuration file as follows:
<socket-binding name="hotrod" interface="public" port="11222"/>
After the above modification, the -b switch determines the network address to which the Hot Rod port is bound.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#841889 - Transaction leak caused by reordering between prepare and commit
This may cause locks to not be released; as a result, other transactions may not be able to write to the affected data (though they are still able to read it). As a workaround, enable transaction recovery and then use the JMX recovery hooks to clean up the pending locks.
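The following is a minimal sketch of the first step of this workaround, enabling transaction recovery programmatically, assuming library mode and the Infinispan ConfigurationBuilder API; the pending locks can then be cleaned up through the JMX recovery operations.
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.TransactionMode;

// Minimal sketch: enable transaction recovery so that in-doubt transactions can
// later be inspected and completed through the JMX recovery hooks.
public class RecoveryConfigExample {
    public static Configuration transactionalCacheConfig() {
        return new ConfigurationBuilder()
                .transaction()
                    .transactionMode(TransactionMode.TRANSACTIONAL)
                    .recovery().enable()
                .build();
    }
}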
This behavior persists in JBoss Data Grid 6.0.1.
BZ#841891 - Potential tx lock leaks when nodes are added to the cluster
This situation only occurs during topology changes and results in certain keys remaining locked after a transaction has finished. No other transaction is able to write to a key while it is locked (though the key can still be read). As a workaround, enable transaction recovery and then use the JMX recovery hooks to clean up the pending locks.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#847062 - Pre-Invocation flag PUT_FOR_EXTERNAL_READ throws exception
Users should not pass in Flag.PUT_FOR_EXTERNAL_READ because it is designed for internal use only. Instead, call the Cache.putForExternalRead() operation.
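The following is a minimal sketch contrasting the unsupported flag with the supported operation, assuming an embedded (library mode) cache; the class and method names wrapping the call are illustrative.
import org.infinispan.Cache;

// Minimal sketch: do not pass Flag.PUT_FOR_EXTERNAL_READ yourself; use the
// dedicated Cache.putForExternalRead() operation instead.
public class PutForExternalReadExample {
    public static void cacheExternalData(Cache<String, String> cache,
                                         String key, String valueFromExternalSource) {
        // Unsupported (throws an exception in this release):
        // cache.getAdvancedCache().withFlags(Flag.PUT_FOR_EXTERNAL_READ).put(key, valueFromExternalSource);

        // Supported alternative:
        cache.putForExternalRead(key, valueFromExternalSource);
    }
}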
This behavior persists in JBoss Data Grid 6.0.1.
BZ#847809 - Cluster with non-shared JDBC cache store has too many entries after node failure
The root cause of this problem has not yet been identified. As a workaround, use a shared cache store instead of local cache stores.
This behavior persists in JBoss Data Grid 6.0.1.
BZ#854665 - Coordinator tries to install new view after graceful shutdown
In JBoss Data Grid 6.0.0, when gracefully stopping a coordinator node, the node itself would log an attempt to install a new clustering view which would fail. This is harmless, since the new coordinator would in fact perform the proper view installation.
This behavior persists in JBoss Data Grid 6.0.1.