Red Hat AMQ 6
As of February 2025, Red Hat no longer supports Red Hat AMQ 6. If you are using AMQ 6, please upgrade; see Migrating to AMQ 7.
6.3. Using JDBC with the High Performance Journal
Overview
The journaled JDBC store is deprecated in this release. The journaled JDBC store was designed to optimize performance where there is a slow connection to the remote database. With modern high-speed networks, however, the advantage of this optimization is negligible.
Warning
The journaled JDBC store is deprecated from AMQ 6.2 onwards and may be removed in a future release.
Warning
The journaled JDBC store is incompatible with the JDBC master/slave failover pattern—see Fault Tolerant Messaging.
Prerequisites
Before you can use the journaled JDBC persistence store, you need to ensure that the activeio-core-3.1.4.jar bundle is installed in the container. The bundle is available in the archived ActiveMQ installation included in the InstallDir/extras folder, or it can be downloaded from Maven at http://mvnrepository.com/artifact/org.apache.activemq/activeio-core/3.1.4.
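If your container is a Karaf-based JBoss A-MQ instance, one way to install the bundle is from the console, using the artifact's Maven coordinates (a sketch; the exact command set depends on your container version):

    osgi:install mvn:org.apache.activemq/activeio-core/3.1.4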
Example
Example 6.4, “Configuring Red Hat JBoss A-MQ to use the Journaled JDBC Persistence Adapter” shows a configuration fragment that configures the journaled JDBC adapter to use a MySQL database.
Example 6.4. Configuring Red Hat JBoss A-MQ to use the Journaled JDBC Persistence Adapter
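A configuration along the following lines produces this setup (a sketch assuming the standard ActiveMQ XML schema; the MySQL connection URL, username, and password are placeholder values that you must adapt). The numbered comments correspond to the callouts discussed below:

    <broker brokerName="broker" persistent="true" xmlns="http://activemq.apache.org/schema/core">
        ...
        <persistenceFactory> <!-- 1 -->
            <journaledJDBC journalLogFiles="5"
                           dataDirectory="${data}/kahadb"
                           dataSource="#mysql-ds"/> <!-- 2 -->
        </persistenceFactory>
        ...
    </broker>

    <!-- 3: MySQL JDBC driver configuration (placeholder connection settings) -->
    <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
        <property name="username" value="activemq"/>
        <property name="password" value="activemq"/>
        <property name="poolPreparedStatements" value="true"/>
    </bean>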
The configuration in Example 6.4, “Configuring Red Hat JBoss A-MQ to use the Journaled JDBC Persistence Adapter” has three noteworthy elements:
- 1: The persistenceFactory element wraps the configuration for the JDBC persistence adapter.
- 2: The journaledJDBC element specifies that the broker will use the JDBC persistence adapter with the high performance journal. The element's attributes configure the following properties:
  - The journal will span five log files.
  - The configuration for the JDBC driver is specified in a bean element with the ID mysql-ds.
  - The data for the journal will be stored in ${data}/kahadb.
- 3: The bean element specifies the configuration for the MySQL JDBC driver.
Configuration
Table 6.2, “Attributes for Configuring the Journaled JDBC Persistence Adapter” describes the attributes used to configure the journaled JDBC persistence adapter.
| Attribute | Default Value | Description |
|---|---|---|
| adapter | | Specifies the strategy to use when accessing a non-supported database. For more information, see the section called “Using generic JDBC providers”. |
| createTablesOnStartup | true | Specifies whether or not new database tables are created when the broker starts. If the database tables already exist, the existing tables are reused. |
| dataDirectory | activemq-data | Specifies the directory into which the default Derby database writes its files. |
| dataSource | #derby | Specifies the ID of the Spring bean storing the JDBC driver's configuration. For more information, see the section called “Configuring your JDBC driver”. |
| journalArchiveDirectory | | Specifies the directory used to store archived journal log files. |
| journalLogFiles | 2 | Specifies the number of log files to use for storing the journal. |
| journalLogFileSize | 20MB | Specifies the size of each journal log file. |
| journalThreadPriority | 10 | Specifies the priority of the thread used for journaling. |
| useJournal | true | Specifies whether or not to use the journal. |
| useLock | true | Specifies whether the adapter uses file locking. |
| lockKeepAlivePeriod | 30000 | Specifies the interval, in milliseconds, at which the current time is saved in the locker table to ensure that the lock does not time out. 0 specifies unlimited time. |
| checkpointInterval | 1000 * 60 * 5 | Specifies the interval, in milliseconds, between writes of the metadata cache to disk. |
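For example, the following fragment (a sketch only; the attribute values are illustrative, not tuning recommendations, and the mysql-ds bean is assumed to be defined elsewhere) configures the journal to span five 32 MB log files and to write the metadata cache to disk every minute:

    <persistenceFactory>
        <journaledJDBC dataSource="#mysql-ds"
                       dataDirectory="${data}/kahadb"
                       journalLogFiles="5"
                       journalLogFileSize="32mb"
                       checkpointInterval="60000"/>
    </persistenceFactory>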