
6.3. Using JDBC with the High Performance Journal


Overview

The journaled JDBC store is deprecated in this release. The journaled JDBC store was designed to optimize performance where there is a slow connection to the remote database. With modern high-speed networks, however, the advantage of this optimization is negligible.
Warning
The journaled JDBC store is deprecated from JBoss A-MQ 6.2 onwards and may be removed in a future release.
Warning
The journaled JDBC store is incompatible with the JDBC master/slave failover pattern—see Fault Tolerant Messaging.
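If you require JDBC master/slave failover, use the plain (non-journaled) JDBC persistence adapter instead. A minimal sketch, assuming a data source bean with the ID mysql-ds is defined elsewhere in the broker's XML file:

```xml
<!-- Plain JDBC store, no journal: safe to combine with
     the JDBC master/slave failover pattern. -->
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
</persistenceAdapter>
```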

Prerequisites

Before you can use the journaled JDBC persistence store, you must ensure that the activeio-core-3.1.4.jar bundle is installed in the container.
The bundle is available in the archived ActiveMQ installation included in the InstallDir/extras folder or can be downloaded from Maven at http://mvnrepository.com/artifact/org.apache.activemq/activeio-core/3.1.4.
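In a Karaf-based container such as JBoss A-MQ, the bundle can typically be installed from the console using the Maven URL handler. A sketch (the exact command prefix depends on the container version):

```
osgi:install -s mvn:org.apache.activemq/activeio-core/3.1.4
```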

Example

Example 6.4, “Configuring Red Hat JBoss A-MQ to use the Journaled JDBC Persistence Adapter” shows a configuration fragment that configures the journaled JDBC adapter to use a MySQL database.

Example 6.4. Configuring Red Hat JBoss A-MQ to use the Journaled JDBC Persistence Adapter

<beans ... >
  <broker ...>
    ...
1   <persistenceFactory>
2     <journaledJDBC journalLogFiles="5"
                     dataDirectory="${data}/kahadb"
                     dataSource="#mysql-ds"
                     useDatabaseLock="true"
                     useDedicatedTaskRunner="false"/>
    </persistenceFactory>
    ...
  </broker>
  ...
3 <bean id="mysql-ds"
        class="org.apache.commons.dbcp.BasicDataSource"
        destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
    <property name="poolPreparedStatements" value="true"/>
  </bean>
</beans>
1
The persistenceFactory element wraps the configuration for the JDBC persistence adapter.
2
The journaledJDBC element specifies that the broker will use the JDBC persistence adapter with the high performance journal. The element's attributes configure the following properties:
  • The journal will span five log files.
  • The configuration for the JDBC driver is specified in a bean element with the ID, mysql-ds.
  • The data for the journal will be stored in ${data}/kahadb.
3
The bean element specifies the configuration for the MySQL JDBC driver.

Configuration

Table 6.2, “Attributes for Configuring the Journaled JDBC Persistence Adapter” describes the attributes used to configure the journaled JDBC persistence adapter.
Table 6.2. Attributes for Configuring the Journaled JDBC Persistence Adapter
  • adapter: Specifies the strategy to use when accessing an unsupported database. For more information, see the section called “Using generic JDBC providers”.
  • createTablesOnStartup (default: true): Specifies whether or not new database tables are created when the broker starts. If the database tables already exist, the existing tables are reused.
  • dataDirectory (default: activemq-data): Specifies the directory into which the default Derby database writes its files.
  • dataSource (default: #derby): Specifies the id of the Spring bean storing the JDBC driver's configuration. For more information, see the section called “Configuring your JDBC driver”.
  • journalArchiveDirectory: Specifies the directory used to store archived journal log files.
  • journalLogFiles (default: 2): Specifies the number of log files to use for storing the journal.
  • journalLogFileSize (default: 20MB): Specifies the size of each journal log file.
  • journalThreadPriority (default: 10): Specifies the priority of the thread used for journaling.
  • useJournal (default: true): Specifies whether or not to use the journal.
  • useLock (default: true): Specifies whether or not the adapter uses file locking.
  • lockKeepAlivePeriod (default: 30000): Specifies the interval, in milliseconds, at which the current time is saved in the locker table to ensure that the lock does not time out. A value of 0 specifies unlimited time.
  • checkpointInterval (default: 1000 * 60 * 5, that is, five minutes): Specifies the interval, in milliseconds, between writes of the metadata cache to disk.
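For example, several of these attributes can be combined on the journaledJDBC element to enlarge the journal and tune the checkpoint and lock keep-alive intervals. The values shown below are illustrative, not recommendations:

```xml
<persistenceFactory>
  <!-- Illustrative values only: eight journal log files, a checkpoint
       every 60 seconds, and a lock keep-alive every 10 seconds. -->
  <journaledJDBC dataSource="#mysql-ds"
                 journalLogFiles="8"
                 checkpointInterval="60000"
                 useLock="true"
                 lockKeepAlivePeriod="10000"/>
</persistenceFactory>
```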