4.2. Resolved Issues
- BZ-1244133 - NearCache: it is possible to read stale data with a put/remove followed by a get
- When using a near cache in LAZY mode, there was a possibility that a read operation following a write operation on the same key within the same thread would return outdated data. For example, the following calls from the same thread could result in v1 being returned: put(k, v1) put(k, v2) get(k). This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
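A minimal sketch of the failing sequence against the Hot Rod Java client (the near-cache builder API shown here is an assumption based on the 6.5 client; host, port, and values are examples):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
    import org.infinispan.client.hotrod.configuration.NearCacheMode;

    public class NearCacheStaleRead {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);   // example address
            builder.nearCache().mode(NearCacheMode.LAZY).maxEntries(100);
            RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
            RemoteCache<String, String> cache = rcm.getCache();

            cache.put("k", "v1");
            cache.put("k", "v2");
            // Before the fix this could return the stale "v1" from the near
            // cache; as of 6.5.1 it returns "v2".
            System.out.println(cache.get("k"));
            rcm.stop();
        }
    }
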
- BZ-1243671 - Null is returned for a not expired entry in Hot Rod client
- When using the HotRod client to update the lifespan of an expirable entry so that it becomes immortal (i.e. performing put with lifespan set to -1), the entry was unexpectedly removed from the cache. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
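A minimal sketch of the update that previously dropped the entry (cache wiring is assumed; key, value, and durations are examples):

    import java.util.concurrent.TimeUnit;
    import org.infinispan.client.hotrod.RemoteCache;

    public class LifespanToImmortal {
        static void makeImmortal(RemoteCache<String, String> cache) {
            cache.put("k", "v", 60, TimeUnit.SECONDS);  // expirable entry
            cache.put("k", "v", -1, TimeUnit.SECONDS);  // lifespan -1 = immortal
            // Before the fix the entry could be removed here and get("k")
            // returned null; as of 6.5.1 the entry remains in the cache.
            System.out.println(cache.get("k"));
        }
    }
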
- BZ-1235140 - Async stores can lose updates
- AsyncCacheWriter could drop updates if it was unable to write modificationQueueSize entries within the shutdownTimeout, as older changes overwrote newer ones in the back-end store. This issue was primarily caused by a large queue size or a slow back-end store combined with a relatively small AsyncStore shutdownTimeout (25 seconds by default). AsyncStore now writes everything to the backing store before shutting down, and the shutdownTimeout has been removed. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
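The behavior concerned stores configured for write-behind; with the embedded API such a store looks roughly like the following (a minimal sketch; the single-file store, its location, and the queue size are illustrative choices, not taken from the issue):

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class WriteBehindStoreConfig {
        static Configuration build() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.persistence()
                   .addSingleFileStore()
                   .location("/tmp/jdg-store")        // illustrative path
                   .async()
                      .enable()
                      .modificationQueueSize(1024);   // example queue size
            // As of 6.5.1 the async store drains fully on shutdown, so no
            // shutdownTimeout is configured; the attribute has been removed.
            return builder.build();
        }
    }
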
- BZ-1235134 - AsyncCacheLoader.load() may return stale data
- A race condition existed that allowed AsyncCacheLoader.load()/State.get() to return stale data. State.get() scans the states of AsyncCacheWriter from head to tail, while the AsyncStoreCoordinator may move conflicting entries from the current state to the head state; if State.get()/AsyncCacheLoader.load() is preempted after checking the head state, it may not see modifications moved by the coordinator thread and may load stale data instead. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1; the AsyncStoreCoordinator was redesigned so that conflicting keys are collected in a separate list which is processed after the AsyncStoreProcessors complete.
- BZ-1234927 - HotRod size method doesn't check BULK_READ permission
- When size() is invoked from the HotRod client, a MapReduce task is run internally to determine the total number of entries present in the cache. Previously this operation did not require any permissions; a check has now been added to ensure the role invoking size() has the BULK_READ permission. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
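With authorization enabled on the server, the invoking role now needs BULK_READ for this call (a minimal sketch; the cache wiring is assumed):

    import org.infinispan.client.hotrod.RemoteCache;

    public class SizeNeedsBulkRead {
        // As of 6.5.1 this fails with a security exception unless the
        // authenticated role has been granted BULK_READ.
        static int countEntries(RemoteCache<String, String> cache) {
            return cache.size();
        }
    }
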
- BZ-1229875 - The server erroneously prints 'Ignoring XML attribute isolation, please remove from configuration file'
- The only isolation level supported by the server's cache configuration in the <locking> element was READ_COMMITTED. Due to this, the server incorrectly printed the 'Ignoring XML attribute isolation, please remove from configuration file' log message, even when the isolation attribute was not present in the <locking> element. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1: the server now supports specifying other isolation levels, so the log message is no longer displayed.
- BZ-1229791 - Cluster listener state can be invalid due to exception
- When a clustered listener was registered and it threw an exception that was not caught, the listener was left in an inconsistent state. This could result in further exceptions when processing subsequent events, even if those events should generate no error or exception. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
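On affected versions, one workaround was to keep exceptions from escaping the handler, as in this minimal sketch (class and method names are illustrative):

    import org.infinispan.notifications.Listener;
    import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
    import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

    @Listener(clustered = true)
    public class GuardedClusterListener {
        @CacheEntryCreated
        public void onCreated(CacheEntryCreatedEvent<String, String> event) {
            try {
                handle(event);   // application-specific processing
            } catch (RuntimeException e) {
                // Log and swallow: before 6.5.1 an exception escaping here
                // could leave the listener in an inconsistent state.
            }
        }

        private void handle(CacheEntryCreatedEvent<String, String> event) {
            // ...
        }
    }

The listener is registered with cache.addListener(new GuardedClusterListener()).
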
- BZ-1228676 - Allocated buffer in key and value converter should be released
- The Hot Rod decoder was not reporting all errors it produced, resulting in some errors not being logged and in byte buffers being leaked as those errors accumulated. As the leaked buffers grew they could eventually cause OutOfMemoryError. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1: the Hot Rod decoder now extends a different Netty decoder that handles all potential failures.
- BZ-1228026 - Invalid message id under high load
- The C++ HotRod client occasionally logged the following exception when there were many concurrent Put and Get operations: "Invalid message id. Expected $OLD_MSGID and received $NEW_MSGID". This was caused by the message id counter not being thread safe. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
- BZ-1255783 - Server should enable writeSkew for some configurations by default
- The server configuration did not allow users to specify the write skew check option. This could lead to inconsistencies in failure scenarios when using conditional operations with optimistic locking caches. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1, where the write skew check is automatically enabled when using a combination of optimistic locking, synchronous cache mode, and REPEATABLE_READ isolation level.
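In the embedded API, the combination that now enables the check corresponds roughly to the following (a minimal sketch; versioning is included because embedded write-skew checking relies on it, and DIST_SYNC is one example of a synchronous mode):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.cache.VersioningScheme;
    import org.infinispan.transaction.LockingMode;
    import org.infinispan.transaction.TransactionMode;
    import org.infinispan.util.concurrent.IsolationLevel;

    public class WriteSkewConfig {
        static Configuration build() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.clustering().cacheMode(CacheMode.DIST_SYNC);
            builder.transaction()
                   .transactionMode(TransactionMode.TRANSACTIONAL)
                   .lockingMode(LockingMode.OPTIMISTIC);
            builder.locking()
                   .isolationLevel(IsolationLevel.REPEATABLE_READ)
                   .writeSkewCheck(true);
            builder.versioning().enable().scheme(VersioningScheme.SIMPLE);
            return builder.build();
        }
    }
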
- BZ-1255213 - Retry failed saying "Failed to connect" under high load
- The HotRod C++ client's failover was failing sporadically under high load. This was caused by the Transport object throwing a TransportException during an operation, resulting in a ConnectionPool::invalidateObject call which removed the Transport from the busy queue, destroyed it, and then added a new Transport to the idle queue. When the new Transport was created it tried to connect to the server socket, which failed, preventing retry attempts and client failover. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
- BZ-1254321 - ConcurrentTimeoutException if a HotRod client uses getAll(...) and the owners < numOfNodes
- When compatibility mode was enabled in distributed mode and the number of key owners was less than the number of nodes in the cluster, the getAll operation on the HotRod client performed a local getAll, which sometimes resulted in not all entries being returned. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
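For reference, the affected bulk read on the Java Hot Rod client (a minimal sketch; keys and cache wiring are placeholders):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import org.infinispan.client.hotrod.RemoteCache;

    public class BulkGet {
        // Before the fix, with compatibility mode enabled and owners fewer
        // than the number of nodes, the returned map could be missing entries.
        static Map<String, String> fetch(RemoteCache<String, String> cache) {
            Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));
            return cache.getAll(keys);
        }
    }
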
- BZ-1252986 - putIfAbsentAsync not enforcing withFlags(Flag.FORCE_RETURN_VALUE)
- Calling putIfAbsentAsync on a remote HotRod client with withFlags(Flag.FORCE_RETURN_VALUE) did not work as expected: the previous value was not returned unless the flag was also configured in the HotRod client properties file. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.
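A minimal sketch of the per-invocation flag, which is honoured on its own as of 6.5.1 (key and value are examples; cache wiring is assumed):

    import org.infinispan.client.hotrod.Flag;
    import org.infinispan.client.hotrod.RemoteCache;

    public class PutIfAbsentWithFlag {
        static String putIfAbsent(RemoteCache<String, String> cache)
                throws Exception {
            return cache.withFlags(Flag.FORCE_RETURN_VALUE)
                        .putIfAbsentAsync("k", "v")
                        .get();   // previous value, or null if none existed
        }
    }
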
- BZ-1223395 - Operation [Registering Deployed Cache Store service...] happens too late on slower machines
- While using a deployed Custom Cache Store for a particular cache, there was a potential race condition in the JBoss Data Grid server: cache start-up could occur before the Custom Cache Store library registration had finished, so the cache could not find the requested resources during start-up and failed to start. This issue is resolved as of Red Hat JBoss Data Grid 6.5.1.