Chapter 5. Resolved Issues
- BZ#1122162 - Infinispan Core module for EAP not importing Services from the Query module
- Previously in Red Hat JBoss Data Grid, the module definition for the productized Infinispan Core in $EAP_MODULES_LIBRARY/modules/org/infinispan/jdg-6.3 depended on the Query module but failed to import its services. Because the services were not loaded, the Indexing feature could still be enabled, but several operations failed at run time. This issue is now fixed in JBoss Data Grid 6.3.1.
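For illustration, a minimal sketch of a module descriptor that imports services from the Query module, assuming the standard JBoss Modules module.xml format; the module names and slots shown here are illustrative, not copied from the product:

    <module xmlns="urn:jboss:module:1.1" name="org.infinispan" slot="jdg-6.3">
        <dependencies>
            <!-- services="import" makes the Query module's META-INF/services
                 entries visible to this module's service loader -->
            <module name="org.infinispan.query" slot="jdg-6.3" services="import"/>
        </dependencies>
    </module>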
- BZ#1134184 - Hot Rod client receives ArrayIndexOutOfBoundsException and InvalidResponseException when topology changes
- Previously in JBoss Data Grid, the new segment-based topology information sent to Hot Rod clients was computed incorrectly: segment entries were added for nodes that were not present. This caused runtime errors when the Hot Rod client processed distributed cache view changes. This is fixed in JBoss Data Grid 6.3.1 so that servers that are not part of the topology are filtered out of the segment information sent to Hot Rod clients. As a result, Hot Rod clients correctly process topology views.
- BZ#1136109 - Improve SyncConsistentHashFactory key distribution
- Previously in Red Hat JBoss Data Grid, data distribution was unexpectedly uneven when using SyncConsistentHashFactory and TopologyAwareSyncConsistentHashFactory, due to a problem that prevented these two classes from distributing segments randomly between nodes as expected. This is now fixed in JBoss Data Grid 6.3.1: both factories now distribute segments so that each node owns a proportionate share of the data relative to the other nodes. As a result, data is distributed evenly between nodes.
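As context, a minimal sketch of selecting SyncConsistentHashFactory programmatically, assuming the Infinispan 6.x ConfigurationBuilder API; the package location of the factory class can differ between Infinispan versions:

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.distribution.ch.SyncConsistentHashFactory;

    public class ConsistentHashExample {
        public static Configuration distributedConfig() {
            return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    .hash()
                        // Keeps key ownership stable across node restarts while
                        // still spreading segments across the cluster
                        .consistentHashFactory(new SyncConsistentHashFactory())
                        .numSegments(60)
                .build();
        }
    }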
- BZ#1136064 - Externalizers not being used in the Lucene Directory
- Red Hat JBoss Data Grid includes a WildFly module responsible for the Query feature. This module declared Externalizers that were not being loaded, so plain Java serialization was used to transfer index files instead. This affected indexed queries that used the Infinispan Lucene directory for storage. This is now fixed in JBoss Data Grid 6.3.1, and querying works as expected.
- BZ#1135924 - Race condition during unmarshalling in the Infinispan Lucene Directory
- Previously in Red Hat JBoss Data Grid, if Hibernate Search was configured to use the Infinispan directory provider, deploying an application using Hibernate Search resulted in a java.lang.ClassCastException. This issue is fixed in JBoss Data Grid 6.3.1, and Hibernate Search works as expected when using the Infinispan directory provider.
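For context, a minimal sketch of pointing Hibernate Search at the Infinispan directory provider, assuming the Hibernate Search 4.x property names shipped with JBoss Data Grid; entity and session factory setup are omitted:

    import java.util.Properties;

    public class DirectoryProviderExample {
        public static Properties searchProperties() {
            Properties props = new Properties();
            // Store Lucene indexes in Infinispan instead of the local filesystem
            props.setProperty("hibernate.search.default.directory_provider", "infinispan");
            return props;
        }
    }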
- BZ#1135580 - LuceneCacheLoader doing unnecessary IO
- Previously in JBoss Data Grid, after a read-only Lucene index was preloaded via the cache loader, the backing index was still accessed for small segments. This is now fixed in JBoss Data Grid 6.3.1 so that preloading a read-only Lucene index works as expected.
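As a general illustration of the preload behavior this fix restores, the sketch below enables preloading on a store using the Infinispan 6.x API; a single-file store is used purely for illustration, since the Lucene cache loader has its own configuration builder:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class PreloadExample {
        public static Configuration preloadingConfig() {
            return new ConfigurationBuilder()
                .persistence()
                    .addSingleFileStore()
                        // Load all stored entries into memory at startup so
                        // subsequent reads do not need to hit the backing store
                        .preload(true)
                .build();
        }
    }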
- BZ#1135553 - QueryInterceptor & ClusterRegistry racing conditions fixes
- Previously in Red Hat JBoss Data Grid, race conditions caused intermittent timeouts in indexed queries and spurious missing @Indexed annotation errors. This is now fixed in JBoss Data Grid 6.3.1, and indexed queries no longer suffer these unexpected timeouts and annotation errors.
- BZ#1136438 - Caching of parsed HQL query objects
- Previously in Red Hat JBoss Data Grid, the same queries were periodically parsed multiple times, which resulted in a minor performance issue. This is fixed in JBoss Data Grid 6.3.1: a local cache was introduced to prevent the same query from being unnecessarily parsed more than once.
- BZ#1131117 - Log rebalancing messages to specific category
- Previously in Red Hat JBoss Data Grid, users had to enable the DEBUG logging level to view re-balancing events. This is now fixed in JBoss Data Grid 6.3.1: major re-balancing events, such as re-balancing started, enabled, or suspended, are now logged under a specific category (org.infinispan.CLUSTER) at the INFO level.
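For example, the category can be configured in the server's logging subsystem to surface these events; a minimal sketch, assuming the standard standalone.xml logging subsystem syntax:

    <logger category="org.infinispan.CLUSTER">
        <level name="INFO"/>
    </logger>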
- BZ#1113585 - LevelDBStore.stop() crashes JVM in native code
- Previously in Red Hat JBoss Data Grid, when a cache using the LevelDB cache store was stopped (for example, as a consequence of stopping the cache manager), the LevelDB native implementation caused a segmentation fault in the JVM process, crashing the process. This issue is now fixed in JBoss Data Grid 6.3.1, and the native LevelDB cache store implementation works as expected.
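As context, a minimal sketch of configuring a LevelDB cache store with the Infinispan 6.x API; the locations are illustrative. Stopping the cache manager stops this store, which is the code path that previously crashed:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.persistence.leveldb.configuration.LevelDBStoreConfigurationBuilder;

    public class LevelDBExample {
        public static Configuration levelDbConfig() {
            return new ConfigurationBuilder()
                .persistence()
                    .addStore(LevelDBStoreConfigurationBuilder.class)
                        .location("/tmp/leveldb/data")          // illustrative paths
                        .expiredLocation("/tmp/leveldb/expired")
                .build();
        }
    }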
- BZ#1130493 - Inserting into cache with indexing fails for XA transactions
- Previously in Red Hat JBoss Data Grid, when a transactional cache was configured as an XA resource (useSynchronization="false") and indexing was enabled on that cache, inserting an entry could fail due to a deadlock between this cache and internal caches. The deadlock occurred under race conditions and only once per type (class) of value. This is fixed in JBoss Data Grid 6.3.1, and a transactional cache works as expected when configured as an XA resource with indexing enabled.
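For reference, a minimal sketch of the configuration described above using the Infinispan 6.x programmatic API; indexing backend properties are omitted:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.transaction.TransactionMode;

    public class XaIndexedCacheExample {
        public static Configuration xaIndexedConfig() {
            return new ConfigurationBuilder()
                .transaction()
                    .transactionMode(TransactionMode.TRANSACTIONAL)
                    .useSynchronization(false) // enlist as a full XA resource
                .indexing()
                    .enable()
                .build();
        }
    }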
- BZ#1128791 - Remove timeout from Map/Reduce jobs
- Previously in Red Hat JBoss Data Grid, users had to extend the timeout value of Map/Reduce jobs because the default timeout was the RPC timeout value. In JBoss Data Grid 6.3.1, the default timeout for Map/Reduce jobs has been removed, so a job now waits indefinitely instead of timing out. If required, users can still set a finite timeout. As a result, users do not need to adjust the timeout as often as before.
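If a finite limit is still desired, it can be set on the task; a minimal sketch using the Infinispan 6.x Map/Reduce API, where the cache and the mapper/reducer wiring are assumed to exist elsewhere:

    import java.util.concurrent.TimeUnit;
    import org.infinispan.Cache;
    import org.infinispan.distexec.mapreduce.MapReduceTask;

    public class MapReduceTimeoutExample {
        public static <KIn, VIn> MapReduceTask<KIn, VIn, String, Integer> withTimeout(
                Cache<KIn, VIn> cache) {
            MapReduceTask<KIn, VIn, String, Integer> task =
                    new MapReduceTask<KIn, VIn, String, Integer>(cache);
            // Opt back in to a finite limit instead of the new indefinite default
            task.timeout(10, TimeUnit.MINUTES);
            return task;
        }
    }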
- BZ#1122269 - Replace fails with cache loader
- Previously in Red Hat JBoss Data Grid, the cache.replace(key, oldValue, newValue) operation compared the expected old value against the value currently in memory and became a no-operation if they differed. However, CacheLoaderInterceptor did not load entries for a ReplaceCommand, so if the entry existed only in the loader and not in memory, the replace operation failed. This issue is now fixed in JBoss Data Grid 6.3.1, and the replace operation works correctly even if the old value is only in the cache store and not in memory.
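To make the semantics concrete, a small sketch of the conditional replace call; the key and values are placeholders:

    import org.infinispan.Cache;

    public class ConditionalReplaceExample {
        // Succeeds only when the cache's current value for "key" equals "old";
        // after the fix, a value present only in the cache store is also seen.
        public static boolean replaceIfUnchanged(Cache<String, String> cache) {
            return cache.replace("key", "old", "new");
        }
    }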
- BZ#1131645 - RemoveCommand does not activate the key in Passivation
- Previously in Red Hat JBoss Data Grid, when a persistent cache store was used with Passivation, the remove command did not activate the entry in the cache store. This resulted in an inconsistency: the entry was removed from the internal DataContainer but not from the cache store, so during a later operation the old value could be loaded from the cache store again. This is fixed in JBoss Data Grid 6.3.1, and the remove operation now works correctly.
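For context, passivation moves evicted entries to the store and activates (removes) them from the store when they are touched again; a minimal configuration sketch using the Infinispan 6.x API, with an illustrative store location:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class PassivationExample {
        public static Configuration passivatingConfig() {
            return new ConfigurationBuilder()
                .persistence()
                    .passivation(true)               // store holds only evicted entries
                    .addSingleFileStore()
                        .location("/tmp/jdg-store")  // illustrative path
                .build();
        }
    }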
- BZ#1080359 - ConfigurationTest.testTableProperties fails constantly on all environments with JDK6
- Previously in Red Hat JBoss Data Grid, users were unable to use the .withProperties() method to configure the JDBC cache store when running JDK 1.6, because the JDK could not find the correct property editor for the DatabaseType class. The problem does not occur with JDK 1.7, where com.sun.beans.editors.EnumEditor is used by default; since this class is not present in JDK 1.6, a specific property editor was added for deployments on JDK 1.6. This issue is now fixed in JBoss Data Grid 6.3.1.
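For illustration, a hedged sketch of configuring a JDBC cache store through .withProperties() with the Infinispan 6.x API; the property key and value shown are illustrative of the DatabaseType-typed properties that triggered the issue:

    import java.util.Properties;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.persistence.jdbc.configuration.JdbcStringBasedStoreConfigurationBuilder;

    public class JdbcStorePropertiesExample {
        public static Configuration jdbcStoreConfig() {
            Properties props = new Properties();
            props.setProperty("databaseType", "MYSQL"); // illustrative key/value
            return new ConfigurationBuilder()
                .persistence()
                    .addStore(JdbcStringBasedStoreConfigurationBuilder.class)
                    .withProperties(props)
                .build();
        }
    }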
- BZ#1119780 - CDI annotations
- Previously in Red Hat JBoss Data Grid, certain user code failed to compile because some Infinispan CDI classes had been refactored into different packages located in other modules. In JBoss Data Grid 6.3.1, the Infinispan CDI classes have been moved back to their original package and module. As a result, user code no longer fails to compile due to the CDI classes being unavailable where expected.
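As a usage sketch, assuming the Infinispan CDI integration module is on the classpath; the class and field names here are illustrative:

    import javax.inject.Inject;
    import org.infinispan.Cache;

    public class GreetingService {
        // Injected by the Infinispan CDI extension from its original package/module
        @Inject
        private Cache<String, String> greetings;

        public void remember(String who, String greeting) {
            greetings.put(who, greeting);
        }
    }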
- BZ#1132912 - The JGroups configuration files included with JDG reference the unsupported tom.TOA protocol
- Previously in Red Hat JBoss Data Grid, the UDP, TCP, and EC2 JGroups configuration files included the <tom.TOA/> element. However, the TOA protocol is not supported in JBoss Data Grid. This issue is resolved in JBoss Data Grid 6.3.1, and the <tom.TOA/> element has been removed from these configuration files.
- BZ#1134085 - Loading LDAP roles fails when some principal doesn't have an LDAP record
- Previously in Red Hat JBoss Data Grid, when resolving the roles associated with a Hot Rod authenticated user against an LDAP directory, the list of principals included an InetAddressPrincipal containing the network address of the remote client. Because this principal could not be located in the LDAP directory, user authentication failed with a NamingException. This is fixed in JBoss Data Grid 6.3.1: the role resolution logic has been modified so that the network address principal (InetAddressPrincipal) is not included in the list of principals verified against the LDAP directory. As a result, role resolution for users authenticated over Hot Rod works as expected.
- BZ#1124743 - Include EAP 6.2.x schemas for the server config and update example configs accordingly
- Previously, Red Hat JBoss Data Grid did not include the Red Hat JBoss Enterprise Application Platform 6.2.x schemas. The jboss-as-config_1_5.xsd schema in particular is useful because it documents support for LDAP in the security realm authorization element. This is fixed in JBoss Data Grid 6.3.1, with the schema file included in the docs/schema directory in the server distribution.
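For reference, a hedged sketch of the LDAP authorization element that this schema documents, inside a security realm in the server configuration; the connection name, base DN, and filter attribute are placeholders:

    <security-realm name="ApplicationRealm">
        <authorization>
            <ldap connection="my-ldap-connection">
                <group-search group-name="SIMPLE" iterative="true">
                    <group-to-principal base-dn="ou=groups,dc=example,dc=com">
                        <membership-filter principal-attribute="member"/>
                    </group-to-principal>
                </group-search>
            </ldap>
        </authorization>
    </security-realm>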