4.2. Resolved Issues


BZ-1168043 - Better documentation of state-transfer configuration and behaviour

In previous versions of Red Hat JBoss Data Grid, for a cache store where the shared parameter is set to false, the fetchInMemory and fetchPersistence parameters must also be set to true; otherwise, different nodes may contain different copies of the same data.

For a cache store where the shared parameter is set to true, fetchPersistence must be set to false, because the persistence is shared and enabling it results in unnecessary state transfers.

However, the fetchInMemory parameter can be set to either true or false. Setting it to true loads the in-memory state over the network and results in a faster startup. Setting it to false loads the data from persistence without transferring data remotely from other nodes.
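The same behaviour can also be expressed programmatically. The following is a minimal sketch using Infinispan's ConfigurationBuilder API; it assumes that the declarative fetchInMemory and fetchPersistence attributes correspond to the fetchInMemoryState and fetchPersistentState methods of the builder, and uses a single-file store purely for illustration.

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class StateTransferConfigExample {

    public static Configuration nonSharedStore() {
        // Non-shared store: fetch both the in-memory state and the persistent
        // state so that every node ends up with a consistent copy of the data.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.DIST_SYNC)
               .stateTransfer().fetchInMemoryState(true);
        builder.persistence().addSingleFileStore()
               .shared(false)
               .fetchPersistentState(true);
        return builder.build();
    }

    public static Configuration sharedStore() {
        // Shared store: the persistence layer is common to all nodes, so
        // fetching persistent state would only cause redundant transfers.
        // Fetching the in-memory state trades network traffic for faster startup.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.DIST_SYNC)
               .stateTransfer().fetchInMemoryState(true);
        builder.persistence().addSingleFileStore()
               .shared(true)
               .fetchPersistentState(false);
        return builder.build();
    }
}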

The Administration and Configuration Guide has been updated to include additional information regarding state-transfer configuration and behaviour.
BZ-1184373 - During state transfer, previously removed entry was revived

In previous versions of Red Hat JBoss Data Grid, an entry removed or updated during an ongoing state transfer, after the previous coordinator had left the cluster, could be overwritten on some nodes. This resulted in a stale entry with the old value (from before the removal or update).

The handling of JGroups views has been improved, and cluster topologies sent from old coordinators are now ignored. Additionally, the response returned from a transaction that was rolled back has been fixed.
BZ-1185779 - Transaction cannot be recommitted in new topology

In previous versions of Red Hat JBoss Data Grid, each transaction could be executed on each node only once. When a transaction affected multiple entries, one owned by the node and another not, only the owned entry was modified during execution. However, if a topology change (a change of entry ownership) occurred during the transaction, the node could become the owner of an entry while receiving the old (pre-transaction) value via state transfer. The transaction should have executed again on the newly owned entries, but it did not, because it was already marked as completed.

This issue was addressed by reworking how transactions are marked as completed.
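For background, the scenario described above involves a single JTA transaction writing entries that may be owned by different nodes. The following is a minimal sketch of such a multi-key transaction; the cache name tx-cache is an illustrative assumption and must be configured as a transactional, distributed cache.

import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class MultiKeyTransactionExample {

    public static void run(DefaultCacheManager cacheManager) throws Exception {
        // "tx-cache" is assumed to be a transactional, distributed cache;
        // the two keys below may hash to different owner nodes.
        Cache<String, String> cache = cacheManager.getCache("tx-cache");
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

        tm.begin();
        try {
            cache.put("key-owned-locally", "value1");
            cache.put("key-owned-remotely", "value2");
            tm.commit();
        } catch (Exception e) {
            tm.rollback();
            throw e;
        }
    }
}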
BZ-1200360 - Hot Rod client does not fully recover after a complete server shutdown

In previous versions of Red Hat JBoss Data Grid, after a complete server shutdown the Hot Rod client was only able to reconnect to the last known server; in addition, the cluster view was no longer updated.

The Hot Rod client now checks whether the cluster is available and, if not, tries to restore the initial host configuration to see if the cluster has been restarted. If the Hot Rod client is unable to connect to the cluster, it fails and logs a message indicating that the cluster is not available.
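In practice this means the client benefits from being configured with more than one initial server to fall back to. The sketch below uses the Hot Rod client's ConfigurationBuilder; the hostnames node1 and node2 and the default Hot Rod port are assumptions for illustration only.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodReconnectExample {

    public static void main(String[] args) {
        // The initial host list is what the client falls back to when the
        // last known cluster view is no longer reachable.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("node1").port(11222)
               .addServer().host("node2").port(11222);

        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = manager.getCache();
        cache.put("key", "value");
        manager.stop();
    }
}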
BZ-1192324 - Inconsistent logging for starting and finished rebalance

In previous versions of Red Hat JBoss Data Grid, the finished cluster-wide rebalance event was logged at DEBUG level, which was not consistent with other events related to rebalancing.

This issue is now fixed and the event is logged at INFO level.
BZ-1198452 - LIRS eviction strategy fixes

In previous versions of Red Hat JBoss Data Grid, LIRS eviction could cause some elements to be evicted prematurely, resulting in data not being passivated to the cache store.

The eviction policies have been updated, and the data container now ensures atomicity when passivating and activating entries to address this issue.
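For context, the following is a minimal sketch of a configuration that combines LIRS eviction with passivation to a cache store, the setup affected by this issue. The single-file store and the maxEntries value are illustrative assumptions only.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

public class LirsPassivationExample {

    public static Configuration lirsWithPassivation() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Keep at most 1000 entries in memory, evicting with the LIRS strategy.
        builder.eviction().strategy(EvictionStrategy.LIRS).maxEntries(1000);
        // Passivation writes evicted entries to the store and removes them
        // from memory; they are activated (loaded back) on the next access.
        builder.persistence().passivation(true).addSingleFileStore();
        return builder.build();
    }
}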
BZ-1200514 - Persistent Store containing a Map written by JDG <=6.3 cannot be read with JDG 6.4.0

After upgrading to Red Hat JBoss Data Grid 6.4.0 from a prior version, users were unable to load data where a Map was used as a key or value.

This issue has been addressed by updating Map serialization to support both the 6.3 and the 6.4.0 method, allowing maps from either version to be read successfully.
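As an illustration of the affected usage, the sketch below stores a java.util.Map as a cache value; the cache name and keys are arbitrary assumptions. With the fix, entries written this way by either JDG 6.3 or JDG 6.4.0 can be read back.

import java.util.HashMap;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class MapValueExample {

    public static void writeAndRead(DefaultCacheManager cacheManager) {
        Cache<String, Map<String, String>> cache = cacheManager.getCache("map-cache");

        // Store a Map as the cache value; it is serialized into the
        // configured persistent store.
        Map<String, String> value = new HashMap<>();
        value.put("field", "data");
        cache.put("entry-1", value);

        // Reading the entry back deserializes the Map from the store.
        Map<String, String> loaded = cache.get("entry-1");
        System.out.println(loaded);
    }
}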
BZ-1190001 - Avoid invalid topology

In previous versions of Red Hat JBoss Data Grid, when clients made requests to the server while a topology view change was ongoing, the server could send back partial topology updates. This behavior resulted in clients talking to the server with an incorrect view, which could leave segments with no owners.

Now, if a segment is found to have no owners, the first node of the topology is sent as the segment owner. Once the cluster has stabilized, the fully formed topology is received by the clients.