Chapter 5. Using the Replicated LevelDB Persistence Adapter


Abstract

The Replicated LevelDB persistence adapter uses Apache ZooKeeper to select a master from a cluster of broker nodes that are configured to replicate a LevelDB store. It then synchronizes all slave LevelDB stores with the master LevelDB store by replicating the master broker's updates to all slave brokers in the cluster.
Because the Replicated LevelDB store uses the same data files as the regular LevelDB store, you can switch a broker's LevelDB configuration between replicated and regular at any time.
Warning
The Replicated LevelDB persistence adapter is a technical preview only, and is not supported in JBoss A-MQ 6.1.

Overview

Important
The Replicated LevelDB persistence adapter is provided as a technical preview only. It is not ready for use in production environments.
The Replicated LevelDB message store uses the same file-based store as the regular LevelDB store, implemented using Google's LevelDB library. As such, it provides the same advantages and runs on the same platforms as the LevelDB persistence adapter (for details, see Using the LevelDB Persistence Adapter).
The Replicated LevelDB store uses Apache ZooKeeper to coordinate and select which broker node in the cluster becomes the master. Only the master starts and accepts client connections. All other broker nodes enter slave mode and connect to the master, synchronizing their persistence state with it. The master node then replicates all persistent operations to the connected slaves.
When the master fails, Apache ZooKeeper selects a slave that has the latest updates to be the new master. Once the new master is activated, the old master can be brought back online, at which point it enters slave mode.
All messaging operations that require a sync-to-disk wait for the update to be replicated to a quorum of the nodes before they complete. For example, a store configured with replicas="3" has a quorum size of (3/2)+1=2 (using integer division). In this case, the master stores the update locally, then waits for at least one slave to store the update before it reports success.
To select a new master, a quorum of nodes must be online so that ZooKeeper can find a slave with the latest updates. Therefore, it is recommended that you run at least three replica nodes, so that you can take one node down without suffering a service outage.
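Because the Replicated LevelDB store uses the same data files as the regular LevelDB store, switching a broker between the two is purely a configuration change. The fragment below is a minimal sketch, assuming the broker's data lives in the activemq-data directory used in Example 5.1; the levelDB element is described in Using the LevelDB Persistence Adapter.

<persistenceAdapter>
  <!-- Sketch: the same data directory served by the regular, non-replicated store -->
  <levelDB directory="activemq-data"/>
</persistenceAdapter>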

Deployment tips

  • Clients should use the Failover Transport to connect to the broker nodes in the replication cluster; for example, using a URL like the following (a client-side connection factory sketch using this URL appears after these tips):
    failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
    
  • To enable highly available ZooKeeper service, run at least three ZooKeeper server nodes.
    Note
    Avoid overcommitting ZooKeeper servers. If the processing of keep-alive messages is delayed, an overworked ZooKeeper server might wrongly infer that a live broker replication node has gone offline.
    For details on setting up and running a distributed cluster of Apache ZooKeeper servers, see the ZooKeeper Getting Started document.
  • To enable highly available ActiveMQ service, run at least three replicated broker nodes.
Note
Though Example 5.1, “Configuring the Replicated LevelDB Message Store” configures three replicated broker nodes and three ZooKeeper servers, you are not required to run the same number of ZooKeeper nodes as replicated broker nodes. Both the ZooKeeper service and the messaging service follow the same availability rule: running three nodes allows one node to fail without incurring a service outage, running five nodes allows two nodes to fail simultaneously without incurring a service outage, and so on. Applications that must meet stringent high availability requirements might configure more ZooKeeper nodes than replicated broker nodes, because the messaging service depends on the ZooKeeper service and is limited by its availability.
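For example, the following Spring bean definition is a minimal sketch of a client-side connection factory that uses the failover URL shown in the first tip. The bean id and the broker hostnames (broker1 through broker3) are placeholders for illustration.

<!-- Sketch: a JMS connection factory that fails over across the replicated broker nodes -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL"
            value="failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)"/>
</bean>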

Basic configuration

To configure the Replicated LevelDB message store, place a replicatedLevelDB element in the persistenceAdapter element of your broker's configuration, and use the replicatedLevelDB element's attributes to configure the message store.
Important
All broker nodes in the same replication cluster must use the same value in the brokerName attribute.
Example 5.1, “Configuring the Replicated LevelDB Message Store” shows a basic configuration of the Replicated LevelDB message store. The Replicated LevelDB files are stored under the activemq-data directory.

Example 5.1. Configuring the Replicated LevelDB Message Store

<broker brokerName="broker" persistent="true" ... >
  ...
  <persistenceAdapter>
    <replicatedLevelDB
         directory="activemq-data"
         replicas="3"
         bind="tcp://0.0.0.0:0"
         zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
         zkPassword="password"
         zkPath="/activemq/leveldb-stores"
         />
  </persistenceAdapter>
  ...
</broker>

Configuration attributes

The attributes listed in Table 5.1, “Configuration Properties of the Replicated LevelDB Message Store—attributes which must be identical on all broker nodes in a replication cluster” must be configured with the same value on all broker nodes in a replication cluster.
Table 5.1. Configuration Properties of the Replicated LevelDB Message Store—attributes which must be identical on all broker nodes in a replication cluster
replicas (default: 3)
Specifies the number of replicated stores the replication cluster will contain. At least (replicas/2)+1 nodes must be online to avoid messaging service outages.
securityToken (no default)
Specifies the security token to use, which must match on all replication nodes in the cluster for the nodes to accept each other's replication requests.
zkAddress (default: 127.0.0.1:2181)
A comma-separated list of addresses that specify the ZooKeeper servers managing the LevelDB stores in the cluster.
zkPassword (no default)
Specifies the password to use for connecting to the ZooKeeper servers.
zkPath (default: /default)
Specifies the path to the ZooKeeper directory in which information about master/slave selection is exchanged.
zkSessionTimeout (default: 2s)
Specifies the time limit within which the broker detects a network failure. Valid units are:
  • s = seconds
  • m = minutes
  • h = hours
  • d = days
  • w = weeks
  • M = months
  • y = years
Specifying a number without a suffix (s, m, h, ... y) selects milliseconds.
You can also combine units for finer granularity; for example, 10m30s.
sync (default: quorum_mem)
Controls where updates must be stored before they are considered complete.
The options are: local_mem, local_disk, remote_mem, remote_disk, quorum_mem, and quorum_disk.
If you specify multiple options in a comma-separated list, the stronger guarantee is used.
For example, specifying local_mem,local_disk is the same as specifying local_disk; specifying quorum_mem is the same as specifying local_mem,remote_mem; and quorum_disk is the same as specifying local_disk,remote_disk.
Important
The broker uses zkSessionTimeout to detect when it has been disconnected from the ZooKeeper server due to a network failure. When set to 2s, the replicated broker nodes detect a disconnect within two seconds of a network failure. Once the disconnect is detected, the master broker gives up the master role, and the slave brokers begin the election process. The lower the timeout value, the faster a new master is selected. However, setting the timeout value too low can result in false positives, causing masters to switch when no disconnect has occurred.
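For example, a cluster running on a congested network might accept slower failover in exchange for fewer false positives by raising the timeout. The fragment below is a minimal sketch; the 10s value is illustrative only.

<!-- Sketch: zkSessionTimeout raised from the default 2s to 10s to tolerate
     slow keep-alive processing; detecting a real outage then takes up to 10 seconds -->
<replicatedLevelDB
     directory="activemq-data"
     replicas="3"
     zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
     zkSessionTimeout="10s"
     />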
Table 5.2. Configuration Properties of the Replicated LevelDB Message Store—attributes which can be unique for each broker node in a replication cluster
bind (default: tcp://0.0.0.0:61619)
Specifies the address and port to which the broker binds to service the replication protocol when it becomes master.
To configure dynamic ports, use tcp://0.0.0.0:0.
hostname (no default)
Specifies the hostname to use for advertising the replication service when the broker node becomes master. When left unset, the messaging service automatically determines the hostname.
It is possible for the messaging service to determine the hostname incorrectly. For example, it might select localhost, which would prevent remote slave brokers from connecting to the master broker.
Note
Except for the Pluggable Storage Lockers, the Replicated LevelDB store supports all of the standard LevelDB store configuration attributes. For details, see Table 4.1, “Configuration Properties of the LevelDB Message Store—standard LevelDB attributes” in Using the LevelDB Persistence Adapter.
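Putting the two tables together, the following fragment is a minimal sketch of one node's adapter configuration. The attributes replicas, securityToken, zkAddress, zkPassword, and zkPath must be identical on every node in the cluster, while bind and hostname are set per node; the hostname broker1.example.org and the token value are placeholders for illustration.

<persistenceAdapter>
  <!-- Sketch: one node of a three-node replication cluster.
       Cluster-wide attributes must match on every node; bind and hostname are node-specific. -->
  <replicatedLevelDB
       directory="activemq-data"
       replicas="3"
       securityToken="change-me"
       bind="tcp://0.0.0.0:61619"
       hostname="broker1.example.org"
       zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
       zkPassword="password"
       zkPath="/activemq/leveldb-stores"
       />
</persistenceAdapter>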