Chapter 3. Resolved Issues


The following issues have been resolved in JBoss Data Grid 6.
BZ#745923 - NPE in CacheLoaderInterceptor
Previously, the startInterceptor method was called before it was initialized; when four nodes were started, NullPointerException errors were displayed. This is fixed so that the startInterceptor method is now called after it is initialized. As a result, when four nodes are started, they operate as expected with no errors.
BZ#758178 - Jdbc cache stores do not work with ManagedConnectionFactory in a transactional context
Previously, when the JDBC cache store was configured without specifying connectionFactoryClass, ManagedConnectionFactory was selected by default and could not connect to the database. This behavior is now fixed, and a connection to the database is established as expected when no connectionFactoryClass is specified for the JDBC cache store.
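For users who prefer not to rely on the default selection, the connection factory can be pinned explicitly. The fragment below is a minimal sketch in the Infinispan 5.x-style declarative loader format; the element layout, the PooledConnectionFactory class, and the property names are assumptions based on that configuration style, not taken from this document.

```xml
<!-- Sketch: explicitly selecting a non-managed connection factory
     for a string-based JDBC cache store (names are illustrative). -->
<loaders>
   <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore">
      <properties>
         <!-- Pin the factory instead of relying on the default -->
         <property name="connectionFactoryClass"
                   value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
         <property name="connectionUrl" value="jdbc:h2:mem:example"/>
      </properties>
   </loader>
</loaders>
```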
BZ#765759 - StateTransferInProgressException during cluster startup
Previously, when state transfer was started with different relays on different nodes, the lock was denied and a StateTransferInProgressException occurred to prevent a deadlock. Despite the timeout not having expired, a "??:??:??,??? ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (undefined) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view ?" error appeared on the server. This is fixed, and the StateTransferInProgressException no longer appears when a cluster starts up.
BZ#786202 - State transfer taking too long on node join
With a JBoss Data Grid cluster of four or more nodes, if a node crashed and was then brought up again, the subsequent state transfer took a very long time to conclude. This occurred despite a mild data load (5% of heap and client load): the data load concluded within a minute while the state transfer unexpectedly required more time than this.
This problem occurred only once and has not been reproduced in tests since. It is a low-risk performance problem.
BZ#791206 - Include appropriate quickstarts / examples for each edition
Quickstarts and examples for JBoss Data Grid 6 are now available within the jboss-datagrid-quickstarts-1.0.0.zip file.
BZ#801296 - Storing a byte array via Memcached client fails on Windows
Previously, a byte array stored via the Memcached client was occasionally misinterpreted by the server as an invalid command. This was caused by the Memcached server not always consuming the CR/LF delimiter that marks the end of a client request; the server then attempted to decode the delimiter as the header of the following request and incorrectly reported it as a bad message. This is now fixed and the server operates as expected.
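The framing issue can be illustrated with a small sketch. The code below is a hypothetical decoder fragment, not JBoss Data Grid source: it parses a memcached text-protocol "set" frame and shows that the trailing CR/LF after the data block must be consumed so that the next decode attempt starts at the next command header rather than at the leftover delimiter.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of memcached text-protocol framing. Each storage
// request is "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"; the bug
// described above amounts to not skipping the final data CR/LF, so the
// next decode saw "\r\n..." and reported an invalid command.
public class MemcachedFrameSketch {

    // Consumes one "set" frame starting at offset and returns the offset
    // where the next request header begins.
    static int consumeSet(byte[] buf, int offset) {
        int headerEnd = indexOfCrlf(buf, offset);
        String header = new String(buf, offset, headerEnd - offset,
                StandardCharsets.US_ASCII);
        String[] parts = header.split(" ");
        int bytes = Integer.parseInt(parts[4]); // declared payload length
        int dataStart = headerEnd + 2;          // skip the header CR/LF
        int dataEnd = dataStart + bytes;
        return dataEnd + 2;                     // skip the data CR/LF as well
    }

    static int indexOfCrlf(byte[] buf, int from) {
        for (int i = from; i + 1 < buf.length; i++) {
            if (buf[i] == '\r' && buf[i + 1] == '\n') return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] buf = "set k 0 0 5\r\nhello\r\nset j 0 0 2\r\nhi\r\n"
                .getBytes(StandardCharsets.US_ASCII);
        int next = consumeSet(buf, 0);
        // The next frame must start exactly at the second header.
        System.out.println(new String(buf, next, 11, StandardCharsets.US_ASCII));
        // → set j 0 0 2
    }
}
```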
BZ#806855 - SuspectedException blocks cluster formation
When a new node joined an existing JBoss Data Grid cluster, in some instances the new node split into a separate network partition before the state transfer concluded and logged a SuspectException. As a result, the existing nodes did not receive a new JGroups view for ten minutes, after which the state transfer failed. The cluster did not form with all three members as expected.
This problem occurred rarely and had two causes. First, the SuspectException on the new node did not allow a new cluster with just the single new node to form. Second, the two existing nodes in the cluster did not install a new JGroups cluster view for ten minutes, during which time state transfer remained blocked.
BZ#808422 - Missing XSD files in docs/schema directory
The latest versions of some of the configuration schema files were missing from the docs/schema directory. This did not affect the functionality of the server. The relevant configuration schema files are now available in the docs/schema directory, namely jboss-as-infinispan_1_3.xsd, jboss-as-jgroups_1_1.xsd, jboss-as-config_1_3.xsd, and jboss-as-threads_1_1.xsd.
BZ#809060 - Getting CNFE: org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory when using jdbc cache store
As a result of a race condition between the server module and the Infinispan subsystem, a server configured with a JDBC cache store occasionally failed to start; the server either started as expected or failed to start.
This occurred when the Memcached server attempted to obtain the memcachedCache before the Infinispan subsystem had a chance to start it (even in EAGER mode). Because the server module forced its own classloader as the thread context classloader (TCCL), the cache could not find the classes needed by the cache loaders.
This behavior is now fixed, and a server configured with the JDBC cache store no longer experiences unexpected startup failures.
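The classloading failure mode can be sketched in isolation. The following is a hypothetical illustration, not JBoss Data Grid code: when a narrow classloader is forced as the thread context classloader, classes that are visible on the application classpath can no longer be resolved through it, producing the ClassNotFoundException this issue describes.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch: forcing an isolated classloader as the TCCL hides
// application-classpath classes, the same shape of failure as the bug above.
public class TcclSketch {

    public static String lookup(String className) {
        ClassLoader old = Thread.currentThread().getContextClassLoader();
        try {
            // Force an isolated loader (parent = bootstrap only) as TCCL,
            // mimicking the server module forcing its own classloader.
            Thread.currentThread().setContextClassLoader(
                    new URLClassLoader(new URL[0], null));
            Class.forName(className, false,
                    Thread.currentThread().getContextClassLoader());
            return "loaded";
        } catch (ClassNotFoundException e) {
            return "CNFE";
        } finally {
            // Always restore the previous TCCL.
            Thread.currentThread().setContextClassLoader(old);
        }
    }

    public static void main(String[] args) {
        // TcclSketch itself is on the application classpath but is
        // invisible to the isolated TCCL.
        System.out.println(lookup("TcclSketch"));      // → CNFE
        // Bootstrap classes remain visible through the null parent.
        System.out.println(lookup("java.lang.String")); // → loaded
    }
}
```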
BZ#809631 - Uneven request balancing after node restore
Previously, after a node crashed and rejoined the cluster, it did not receive client load at the same level as the other nodes. The Hot Rod server is now fixed so that the view identifier is not updated until the topology cache contains the addresses of all nodes.