18.2. Using a JDBC Lock System

Overview

The JDBC locking mechanism is intended for failover deployments where Red Hat JBoss Fuse instances exist on separate machines.
In this scenario, the master instance holds a lock on a locking table hosted in a database. If the master loses the lock, a waiting slave instance acquires the lock on the locking table and fully starts its container.
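
The following minimal sketch illustrates the general pattern behind such a lock: the holder keeps an exclusive row lock open on the locking table for as long as it runs. It is an illustration of the concept only, not the code that Apache Karaf itself executes; the connection URL, credentials, and table name are assumptions taken from the Derby example later in this section.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Conceptual sketch only -- not Karaf's implementation. It shows the
// "hold an exclusive row lock on an open transaction" pattern that a
// JDBC locking table relies on. The URL, credentials, and table name
// are the Derby values used in the examples in this section.
public class JdbcLockSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:derby://dbserver:1527/sample", "user", "password");
        conn.setAutoCommit(false); // keep the transaction, and therefore the lock, open
        try (Statement stmt = conn.createStatement()) {
            // A second instance issuing the same statement blocks until the
            // first connection releases the lock (commits, closes, or dies).
            stmt.execute("SELECT * FROM KARAF_LOCK FOR UPDATE");
            System.out.println("Lock held - this instance would start its container.");
            // A real master keeps the connection open and re-checks the lock
            // periodically; if the connection is lost, a waiting slave can acquire it.
        }
    }
}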

Adding the JDBC driver to the classpath

In a JDBC locking system, the JDBC driver needs to be on the classpath for each instance in the master/slave setup. Add the JDBC driver to the classpath as follows:
  1. Copy the JDBC driver JAR file to the ESBInstallDir/lib directory for each Red Hat JBoss Fuse instance.
  2. Modify the bin/karaf start script so that it includes the JDBC driver JAR in its CLASSPATH variable.
    For example, given a JDBC driver JAR file named JDBCJarFile.jar, you could modify the start script as follows (on a *NIX operating system):
        ...
        # Add the jars in the lib dir
        for file in "$KARAF_HOME"/lib/karaf*.jar
        do
            if [ -z "$CLASSPATH" ]; then
                CLASSPATH="$file"
            else
                CLASSPATH="$CLASSPATH:$file"
            fi
        done
        CLASSPATH="$CLASSPATH:$KARAF_HOME/lib/JDBCJarFile.jar"
    Note
    If you are adding a MySQL driver JAR or a PostgreSQL driver JAR, you must rename the driver JAR so that its file name starts with the karaf- prefix (for example, rename mysql-connector-java.jar to karaf-mysql-connector-java.jar). Otherwise, Apache Karaf hangs during startup and the log reports that it was unable to find the driver.
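
To confirm that the driver JAR you copied into ESBInstallDir/lib is actually loadable, you can compile and run a quick standalone check such as the one below against the same JAR. This is a hypothetical helper, not part of Red Hat JBoss Fuse; the driver class shown is the Derby client driver used in the examples in this section, so substitute the driver class for your database.

// Hypothetical standalone check -- not part of Red Hat JBoss Fuse or Apache Karaf.
// Run it with the driver JAR on the classpath, for example:
//   javac DriverCheck.java
//   java -cp "$KARAF_HOME/lib/JDBCJarFile.jar:." DriverCheck
// The class name below is the Derby client driver used elsewhere in this
// section; replace it with the driver class for your database.
public class DriverCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> driver = Class.forName("org.apache.derby.jdbc.ClientDriver");
        System.out.println("Driver loaded: " + driver.getName());
    }
}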

Configuring a JDBC lock system

To configure a JDBC lock system, update the etc/system.properties file for each instance in the master/slave deployment, as shown in Example 18.2.

Example 18.2. JDBC Lock File Configuration

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.DefaultJDBCLock
karaf.lock.level=50
karaf.lock.delay=10000
karaf.lock.jdbc.url=jdbc:derby://dbserver:1527/sample
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30
In this example, a database named sample is created if it does not already exist. The first Red Hat JBoss Fuse instance to acquire the locking table becomes the master instance. If the connection to the database is lost, the master instance tries to shut down gracefully, allowing a slave instance to become master when the database service is restored. The former master instance must then be restarted manually.
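
Before starting the instances, it can be worth confirming that the URL, user, and password you entered in etc/system.properties actually reach the database. The following standalone check is a sketch that reuses the Derby values from Example 18.2; adjust them to match your own configuration. With a JDBC 4 driver JAR on the classpath, the driver registers itself, so an explicit Class.forName call is usually unnecessary.

import java.sql.Connection;
import java.sql.DriverManager;

// Standalone sanity check -- not part of Apache Karaf. It verifies that the
// values configured for karaf.lock.jdbc.url, karaf.lock.jdbc.user, and
// karaf.lock.jdbc.password can open a connection. The values below are the
// Derby settings from Example 18.2; substitute your own.
public class CheckLockDatabase {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby://dbserver:1527/sample", "user", "password")) {
            System.out.println("Connected to: "
                    + conn.getMetaData().getDatabaseProductName());
        }
    }
}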

Configuring JDBC locking on Oracle

If you are using Oracle as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.OracleJDBCLock.
In all other respects, configure the system.properties file as normal for your setup, as shown in Example 18.3:

Example 18.3. JDBC Lock File Configuration for Oracle

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.OracleJDBCLock
karaf.lock.jdbc.url=jdbc:oracle:thin:@hostname:1521:XE
karaf.lock.jdbc.driver=oracle.jdbc.OracleDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30
Note
The karaf.lock.jdbc.url requires an active Oracle system ID (SID). This means you must manually create a database instance before using this particular lock.

Configuring JDBC locking on Derby

If you are using Derby as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file should point to org.apache.karaf.main.DerbyJDBCLock. For example, you could configure the system.properties file as shown:

Example 18.4. JDBC Lock File Configuration for Derby

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.DerbyJDBCLock
karaf.lock.jdbc.url=jdbc:derby://127.0.0.1:1527/dbname
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30

Configuring JDBC locking on MySQL

If you are using MySQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.MySQLJDBCLock. For example, you could configure the system.properties file as shown:

Example 18.5. JDBC Lock File Configuration for MySQL

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.MySQLJDBCLock
karaf.lock.jdbc.url=jdbc:mysql://127.0.0.1:3306/dbname
karaf.lock.jdbc.driver=com.mysql.jdbc.Driver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30

Configuring JDBC locking on PostgreSQL

If you are using PostgreSQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.PostgreSQLJDBCLock. For example, you could configure the system.properties file as shown:

Example 18.6. JDBC Lock File Configuration for PostgreSQL

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.PostgreSQLJDBCLock
karaf.lock.jdbc.url=jdbc:postgresql://127.0.0.1:5432/dbname
karaf.lock.jdbc.driver=org.postgresql.Driver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=0

JDBC lock classes

The following JDBC lock classes are currently provided by Apache Karaf:
org.apache.karaf.main.DefaultJDBCLock
org.apache.karaf.main.DerbyJDBCLock
org.apache.karaf.main.MySQLJDBCLock
org.apache.karaf.main.OracleJDBCLock
org.apache.karaf.main.PostgreSQLJDBCLock