Chapter 5. Known and Resolved Issues
5.1. Known Issues
- BZ-1178965 - Fail fast when using two phase commit with ASYNC backup strategy
- In Red Hat JBoss Data Grid, using two-phase commit with the ASYNC backup strategy unexpectedly results in one-phase commit being used instead. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1163665 - Node can temporarily read removed data when another node joins the cluster, leaves or crashes
- In Red Hat JBoss Data Grid, the distribution of entries in the cluster changes when a node joins, leaves, or crashes, for example during a split brain. During this brief period, a read on the node that previously owned an entry can return stale data. When the rebalance process completes, further reads return up-to-date data. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1175272 - CDI fails when both remote and embedded uber-jar are present
- In Red Hat JBoss Data Grid, when both the infinispan-remote and infinispan-embedded dependencies are on the classpath, the Infinispan CDI extension does not work as expected because the extension is bundled in both jar files. As a result, CDI fails with an ambiguous dependencies exception. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1012036 - RELAY2 logs error when site unreachable
- In Red Hat JBoss Data Grid, when a site is unreachable, the JGroups RELAY2 protocol logs an error for each dropped message. Infinispan has configurable fail policies (ignore/warn/abort), but the log fills with these errors even when the ignore policy is configured, as in the sketch below. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
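The following library-mode sketch shows where the fail policy is set, assuming the Infinispan 6.x cross-site replication XML schema; the cache name, site name, and timeout are hypothetical. Even with the ignore policy configured here, RELAY2 still logs an error per dropped message:

```xml
<namedCache name="backupCache">
  <sites>
    <backups>
      <!-- backupFailurePolicy="IGNORE" suppresses backup failures at the Infinispan
           level, but RELAY2 itself still logs each dropped message (this issue) -->
      <backup site="LON" strategy="SYNC" backupFailurePolicy="IGNORE" timeout="12000"/>
    </backups>
  </sites>
</namedCache>
```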
- BZ-1024373 - Default optimistic locking configuration leads to inconsistency
- In Red Hat JBoss Data Grid, transactional caches are configured with optimistic locking by default. Under contention, concurrent replace() calls can both return true and the transactions may unexpectedly commit: two concurrent commands, replace(key, A, B) and replace(key, A, C), may both overwrite the entry, and the command that is finalized later wins, overwriting the other's value. This is a known issue in JBoss Data Grid 6.4. As a workaround, enable the write skew check together with the REPEATABLE_READ isolation level, as in the sketch below; concurrent replace operations then work as expected.
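A minimal library-mode sketch of this workaround, assuming the Infinispan 6.x XML schema; the cache name is hypothetical. Note that the write skew check also relies on entry versioning being enabled:

```xml
<namedCache name="transactionalCache">
  <transaction transactionMode="TRANSACTIONAL" lockingMode="OPTIMISTIC"/>
  <!-- REPEATABLE_READ plus writeSkewCheck rejects conflicting concurrent writes -->
  <locking isolationLevel="REPEATABLE_READ" writeSkewCheck="true"/>
  <!-- write skew detection requires entry versioning -->
  <versioning enabled="true" versioningScheme="SIMPLE"/>
</namedCache>
```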
- BZ-1107613 - SASL GSSAPI auth doesn't use principal configured login_module
- In Red Hat JBoss Data Grid, the server principal is always constructed as jgroups/server_name and is not loaded from the Kerberos login module. Using a different principal results in an authentication failure. This is a known issue in JBoss Data Grid 6.4, and the workaround is to use jgroups/server_name as the server principal.
- BZ-1114080 - HR client SASL MD5 against LDAP fails
- In Red Hat JBoss Data Grid, the server does not support pass-through MD5 authentication against LDAP. As a result, the Hot Rod client is unable to authenticate to the JBoss Data Grid server via MD5 if the authentication is backed by an LDAP server. This is a known issue in JBoss Data Grid 6.4, and a workaround is to use PLAIN authentication over end-to-end SSL encryption.
- BZ-881791 - Special characters in file path to JDG server are causing problems
- In Red Hat JBoss Data Grid, when special characters are used in the directory path, the JBoss Data Grid server either fails to start or a configuration file used for logging cannot be loaded properly. Special characters that cause problems include spaces, # (hash sign), ! (exclamation mark), % (percent sign), and $ (dollar sign). This is a known issue in JBoss Data Grid 6.4. A workaround for this issue is to avoid using special characters in the directory path.
- BZ-1092403 - JPA cachestore fails to guess dialect for Oracle12c and PostgresPlus 9
- In Red Hat JBoss Data Grid, JPA Cache Store does not work with Oracle 12c and Postgres Plus 9 because Hibernate, an internal dependency of JPA Cache Store, is not able to determine which dialect to use for communication with the database. This is a known issue in JBoss Data Grid 6.4. As a workaround, specify the Hibernate dialect directly by adding the following element to the persistence.xml file:

    <property name="hibernate.dialect" value="${hibernate.dialect}" />

Set ${hibernate.dialect} to org.hibernate.dialect.Oracle10gDialect or org.hibernate.dialect.PostgresPlusDialect for Oracle 12c or Postgres Plus 9 respectively, as in the sketch below.
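For context, a fuller persistence.xml sketch with the dialect set explicitly for Oracle 12c; the persistence-unit name and entity class are hypothetical placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="jpa-store-example" transaction-type="RESOURCE_LOCAL">
    <!-- the JPA entity persisted by the cache store (hypothetical) -->
    <class>com.example.User</class>
    <properties>
      <!-- the explicit dialect works around the failed auto-detection -->
      <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
    </properties>
  </persistence-unit>
</persistence>
```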
- BZ-1101512 - CLI UPGRADE command fails when testing data stored via CLI with REST encoding
- In Red Hat JBoss Data Grid, the CLI upgrade command fails to migrate data from the old cluster to the new cluster if the data being migrated was stored in the old cluster via the CLI with REST encoding (for example, by issuing a command such as put --codec=rest key1 val1). This issue does not occur if data is stored via REST clients directly. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1158559 - C++ HotRod Client, RemoteCache.clear() will throw out exception when data is more than 1M
- In Red Hat JBoss Data Grid, when a cache contains a large number of entries, the clear() operation can take an unexpectedly long time and possibly result in communication timeouts. In this case, the exception is reported to the Hot Rod client. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-# - Title
- In Red Hat JBoss Data Grid, for a cache store where the shared parameter is set to false, the fetchInMemory and fetchPersistence parameters must also be set to true, or different nodes may contain different copies of the same data. For a cache store where the shared parameter is set to true, the fetchPersistence parameter must be set to false, because the persistence is shared and enabling it results in unnecessary state transfers. However, the fetchInMemory parameter can be set to either true or false: setting it to true loads the in-memory state via the network, which results in a faster start-up, while setting it to false loads the data from persistence without transferring it remotely from other nodes. This is a known issue in JBoss Data Grid 6.4, and the only workaround is to follow the stated guidelines above (see the configuration sketch below) and to ensure that each node in the cluster uses the same configuration, to prevent unexpected results based on the start-up order.
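A library-mode sketch of the non-shared case, assuming the Infinispan 6.x XML schema, where the parameters described above appear as fetchInMemoryState on the stateTransfer element and fetchPersistentState on the store element; the cache name and store location are hypothetical:

```xml
<namedCache name="distributedCache">
  <clustering mode="distribution">
    <!-- transfer the in-memory state over the network when a node joins -->
    <stateTransfer fetchInMemoryState="true"/>
  </clustering>
  <persistence passivation="false">
    <!-- non-shared store: fetch persistent state so all nodes hold consistent copies -->
    <singleFile location="/var/lib/jdg/store" shared="false" fetchPersistentState="true"/>
  </persistence>
</namedCache>
```

For a shared store, the same sketch would instead set shared="true" and fetchPersistentState="false".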
- BZ-881080 - Silence SuspectExceptions
- In Red Hat JBoss Data Grid, SuspectExceptions are routinely raised when nodes shut down, because they become unresponsive while shutting down. As a result, a SuspectException error is added to the logs. These SuspectExceptions do not affect data integrity. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-807674 - JDBC Cache Stores using a JTA Data Source do not participate in cache transactions
- In Red Hat JBoss Data Grid's library mode, JDBC cache stores can be configured to use a JTA-aware datasource. However, operations performed on a cache backed by such a store during a JTA transaction are persisted to the store outside of the transaction's scope. This issue does not apply to JBoss Data Grid's Remote Client-Server mode because all cache operations there are non-transactional. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1088073 - Move rebalancing settings in JON from cache level to global cluster level
- In Red Hat JBoss Data Grid, it is possible to change rebalancing settings in the JBoss Operations Network UI by navigating to JBoss Data Grid Server/Cache Container/Cache/Configuration (current)/Distributed Cache Attributes/Rebalancing. This operation is currently misrepresented as a cache-level operation, although the changed rebalancing settings automatically apply to all caches of the parent cache manager and to all nodes in the cluster. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1158839 - Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart
- In Red Hat JBoss Data Grid, when a node restarts, it does not automatically purge entries from its local cache store. As a result, the administrator starting the node must manually change the node configuration so that the cache store is purged when the node starts (see the sketch below). If the configuration is not changed, the cache may be inconsistent, and removed entries can appear to be present. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
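A library-mode sketch of the manual configuration change described above, assuming the Infinispan 6.x XML schema; the cache name and store location are hypothetical:

```xml
<namedCache name="clusteredCache">
  <clustering mode="distribution"/>
  <persistence passivation="false">
    <!-- purgeOnStartup="true" clears the local file store at start-up so that
         entries removed while the node was down cannot reappear -->
    <singleFile location="/var/lib/jdg/store" shared="false" purgeOnStartup="true"/>
  </persistence>
</namedCache>
```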