Tuning Guide
Optimize Red Hat JBoss A-MQ for your environment
Copyright © 2011-2014 Red Hat, Inc. and/or its affiliates.
Abstract
Chapter 1. Introduction to Performance Tuning
Limiting factors
- The speed at which messages are written to and read from disk (persistent brokers only).
- The speed at which messages can be marshalled and sent over the network.
- Context switching, due to multi-threading.
Non-persistent and persistent brokers
Broker networks
Chapter 2. General Tuning Techniques
Abstract
2.1. System Environment
Overview
Disk speed
Network performance
Hardware specification
Memory available to the JVM
You can increase the memory available to the JVM by passing the -Xmx option. For example, to increase JVM memory to 2048 MB, add -Xmx2048M (or equivalently, -Xmx2G) as a JVM option.
2.2. Co-locating the Broker
Overview
To eliminate network overhead, you can co-locate the broker with the producer (or consumer) by connecting through the vm:// transport.
Figure 2.1. Broker Co-located with Producer
The vm:// transport
You can connect to a vm:// endpoint from a producer or a consumer in just the same way as you connect to a tcp:// endpoint (or any other protocol supported by Red Hat JBoss A-MQ). But the effect of connecting to a vm:// endpoint is quite different from connecting to a tcp:// endpoint: whereas a tcp:// endpoint initiates a connection to a remote broker instance, the vm:// endpoint actually creates a local, embedded broker instance. The embedded broker runs inside the same JVM as the client and messages are sent to the broker through an internal channel, bypassing the network.
A VM endpoint URI has the following general form:

vm://brokerName

You can specify the embedded broker's configuration using the brokerConfig option. For example, to create a myBroker instance that takes its configuration from the activemq.xml configuration file, define the following VM endpoint:

vm://myBroker?brokerConfig=xbean:activemq.xml
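For illustration, a client connects over the VM transport in the same way as over TCP. The following is a minimal sketch in the style of the other Java fragments in this guide, assuming the usual javax.jms and org.apache.activemq imports; only the endpoint URI comes from the example above:

// Java
ConnectionFactory factory =
    new ActiveMQConnectionFactory("vm://myBroker?brokerConfig=xbean:activemq.xml");
// The embedded broker is created when the first VM connection is opened
Connection connection = factory.createConnection();
connection.start();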
A simple optimization
If your application does not need asynchronous sending, you can optimize the VM transport by disabling async mode on the endpoint:

vm://brokerName?async=false

If, in addition, the broker option, optimizedDispatch, and the consumer option, dispatchAsync, are also configured to disable asynchronous behavior, the calling thread can actually dispatch directly to consumers.
2.3. Optimizing the Protocols
Overview
TCP transport
- Socket buffer size—the default TCP socket buffer size is 64 KB. While this is adequate for the speed of networks in use at the time TCP was originally designed, this buffer size is sub-optimal for modern high-speed networks. The following rule of thumb can be used to estimate the optimal TCP socket buffer size:

  Buffer Size = Bandwidth x Round-Trip-Time

  Where the Round-Trip-Time is the time between initially sending a TCP packet and receiving an acknowledgement of that packet (ping time). Typically, it is a good idea to try doubling the socket buffer size to 128 KB. For example:

  tcp://hostA:61617?socketBufferSize=131072

  For more details, see the Wikipedia article on Network Improvement.
- I/O buffer size—the I/O buffer is used to buffer the data flowing between the TCP layer and the protocol that is layered above it (such as OpenWire). The default I/O buffer size is 8 KB and you could try doubling this size to achieve better performance. For example:

  tcp://hostA:61617?ioBufferSize=16384
OpenWire protocol
Parameter | Default | Description |
---|---|---|
cacheEnabled | true | Specifies whether to cache commonly repeated values, in order to optimize marshaling. |
cacheSize | 1024 | The number of values to cache. Increase this value to improve performance of marshaling. |
tcpNoDelayEnabled | false | When true, disables Nagle's algorithm. Nagle's algorithm was devised to avoid sending tiny TCP packets containing only one or two bytes of data; for example, when TCP is used with the Telnet protocol. If you disable Nagle's algorithm, packets can be sent more promptly, but there is a risk that the number of very small packets will increase. |
tightEncodingEnabled | true | When true, implements a more compact encoding of basic data types. This results in smaller messages and better network performance, but comes at the cost of extra calculation and greater demands on CPU time. A trade-off is therefore required: you need to determine whether the network or the CPU is the main factor that limits performance. |
To set these parameters on a transport URI, add the wireFormat. prefix. For example, to double the size of the OpenWire cache, you can specify the cache size on a URI as follows:

tcp://hostA:61617?wireFormat.cacheSize=2048
Enabling compression
You can enable message compression by setting the useCompression option on the ActiveMQConnectionFactory class. For example, to initialize a JMS connection with compression enabled in a Java client, insert the following code:
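A minimal sketch, assuming an existing connectionFactory reference and following the cast style used in the other Java fragments of this guide:

// Java
// Compress message bodies before they are sent to the broker
((ActiveMQConnectionFactory)connectionFactory).setUseCompression(true);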
Alternatively, you can enable compression by setting the jms.useCompression option on a producer URI—for example:

tcp://hostA:61617?jms.useCompression=true
2.4. Message Encoding
Message body type
JMS defines five message body types:
- StreamMessage
- MapMessage
- TextMessage
- ObjectMessage
- BytesMessage
Of these message types, BytesMessage (a stream of uninterpreted bytes) is the fastest, while ObjectMessage (serialization of a Java object) is the slowest.
Encoding recommendation
It is recommended that you use BytesMessage whenever possible. To encode the message body, we suggest that you use Google's Protobuf, which has excellent performance characteristics.
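For illustration, a producer serializes its payload (with Protobuf or any other encoder) to a byte array and wraps it in a BytesMessage. The sketch below assumes an existing JMS session and producer; encodeToBytes() and order are placeholders, not real API:

// Java
// payload holds the bytes produced by your encoder of choice (for example, Protobuf);
// encodeToBytes() and order are hypothetical names used only for this sketch
byte[] payload = encodeToBytes(order);
BytesMessage message = session.createBytesMessage();
message.writeBytes(payload);
producer.send(message);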
2.5. Threading Optimizations
Optimized dispatch
To reduce the number of threads used for dispatching, set the optimizedDispatch option to true on all queue destinations. When this option is enabled, the broker no longer uses a dedicated thread to dispatch messages to each destination.
To enable the optimizedDispatch option on all queue destinations, insert the following policy entry into the broker configuration:
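A minimal sketch of such a policy entry, assuming the standard destinationPolicy configuration inside the broker element:

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Dispatch using the calling thread for every queue -->
      <policyEntry queue=">" optimizedDispatch="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>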
The value of the queue attribute, >, is a wildcard that matches all queue names.
2.6. Vertical Scaling
Definition
Tricks to optimize vertical scaling
- NIO transport on the broker—to reduce the number of threads required, use the NIO transport (instead of the TCP transport) when defining transport connectors in the broker (see the sketch after this list). Do not use the NIO transport in clients; it is only meant to be used in the broker.
- Allocate more memory to broker—to increase the amount of memory available to the broker, pass the -Xmx option to the JVM.
- Reduce initial thread stack size—to allocate a smaller initial stack size for threads, pass the -Xss option to the JVM.
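The following sketch shows what an NIO transport connector might look like in the broker configuration; the port and bind address are illustrative:

<transportConnectors>
  <!-- NIO connector: serves many client connections from a small thread pool -->
  <transportConnector name="nio" uri="nio://0.0.0.0:61616"/>
</transportConnectors>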
2.7. Horizontal Scaling
Overview
Figure 2.2. Scaling with Multiple Brokers
Broker networks
Static scales better than dynamic
Asynchronous network connection establishment
To avoid delaying broker start-up when many network connectors are defined, you can configure the broker to establish its network connections asynchronously (in parallel) by setting the networkConnectorStartAsync attribute on the broker element to true, as follows:

<beans ...>
  <broker ... networkConnectorStartAsync="true">...</broker>
</beans>
Client-side traffic partitioning
- You can use all the tuning techniques for vertical scaling.
- You can achieve better horizontal scalability than a network of brokers (because there is less broker crosstalk).
2.8. Integration with Spring and Camel
Overview
Spring provides the JmsTemplate convenience class, which allows you to hide some of the lower-level JMS details when sending messages and so on. One thing to bear in mind about JmsTemplate, however, is that it creates a new connection, session, and producer for every message it sends, which is very inefficient. It is implemented like this in order to work inside an EJB container, which typically provides a special JMS connection factory that supports connection pooling.
To avoid this overhead, use org.apache.activemq.pool.PooledConnectionFactory, from the activemq-pool artifact, which pools JMS resources to work efficiently with Spring's JmsTemplate or with EJBs.
Creating a pooled connection factory
The PooledConnectionFactory is implemented as a wrapper class that is meant to be chained with another connection factory instance. For example, you could use a PooledConnectionFactory instance to wrap a plain Red Hat JBoss A-MQ connection factory, or to wrap an ActiveMQSslConnectionFactory, and so on.
Example
For example, to create a pooled connection factory, jmsFactory, that works efficiently with the Spring JmsTemplate instance, myJmsTemplate, define the following bean instances in your Spring configuration file:
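A sketch of the bean definitions, assuming the standard ActiveMQ and Spring bean classes; only the bean IDs jmsFactory and myJmsTemplate and the tcp://localhost:61616 endpoint come from the surrounding text:

<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
      destroy-method="stop">
  <property name="connectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>
  </property>
</bean>

<bean id="myJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
  <property name="connectionFactory" ref="jmsFactory"/>
</bean>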
In this example, the pooled connection factory wraps an ActiveMQConnectionFactory instance that opens connections to the tcp://localhost:61616 broker endpoint.
2.9. Optimizing Memory Usage in the Broker
Optimize message paging
By setting attributes on the policyEntry element, you can tune message paging to match the amount of memory available in the broker. For example, if there is a very large queue and lots of destination memory, increasing the maxBrowsePageSize attribute would allow more of those messages to be visible when browsing a queue.
Destination policies to control paging
maxPageSize
- The maximum number of messages paged into memory for sending to a destination.
maxBrowsePageSize
- The maximum number of messages paged into memory for browsing a destination.
  Note: the number of messages paged in for browsing cannot exceed the destination's memoryLimit setting.
maxExpirePageSize
- The maximum number of messages paged into memory to check for expired messages.
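As an illustration, these attributes are set on the policyEntry element; the values below are arbitrary examples, not recommended defaults:

<policyEntry queue=">" maxPageSize="200" maxBrowsePageSize="400" maxExpirePageSize="400"/>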
Chapter 3. Consumer Performance
3.1. Acknowledgment Modes
Overview
Supported acknowledgment modes
Session.AUTO_ACKNOWLEDGE
- (Default) In this mode, the JMS session automatically acknowledges messages as soon as they are received. In particular, the JMS session acknowledges messages before dispatching them to the application layer. For example, if the consumer application calls MessageConsumer.receive(), the message has already been acknowledged before the call returns.
Session.CLIENT_ACKNOWLEDGE
- In this mode, the client application code explicitly calls the Message.acknowledge() method to acknowledge the message. In Apache Camel, this acknowledges not just the message on which it is invoked, but also any other messages in the consumer that have already been completely processed.
Session.DUPS_OK_ACKNOWLEDGE
- In this mode, the JMS session automatically acknowledges messages, but does so in a lazy manner. If JMS fails while this mode is used, some messages that were completely processed could remain unacknowledged. When JMS is restarted, these messages will be re-sent (duplicate messages). This is one of the fastest acknowledgment modes, but the consumer must be able to cope with possible duplicate messages (for example, by detecting and discarding duplicates).
Session.SESSION_TRANSACTED
- When using transactions, the session implicitly works in SESSION_TRANSACTED mode. The response to the transaction commit is then equivalent to message acknowledgment. When JMS transactions are used to group multiple messages, transaction mode is very efficient. But avoid using a transaction to send a single message, because this incurs the extra overhead of committing or rolling back the transaction.
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE
- This non-standard mode is similar to CLIENT_ACKNOWLEDGE, except that it acknowledges only the message on which it is invoked. It does not flush acknowledgments for any other completed messages.
optimizeAcknowledge option
The optimizeAcknowledge option is exposed on the ActiveMQConnectionFactory class and must be used in conjunction with the Session.AUTO_ACKNOWLEDGE mode. When set to true, the consumer acknowledges receipt of messages in batches, where the batch size is set to 65% of the prefetch limit. Alternatively, if message consumption is slow, the batch acknowledgment is sent after 300 ms. The default is false.
You can also enable this option on the transport URI used to connect the consumer. For example:

tcp://hostA:61617?jms.optimizeAcknowledge=true

Note that the optimizeAcknowledge option is only supported by the JMS client API.
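As an alternative to the URI option, the flag can be set programmatically on the connection factory; a minimal sketch, assuming an existing connectionFactory reference:

// Java
// Acknowledge messages in batches instead of one at a time
((ActiveMQConnectionFactory)connectionFactory).setOptimizeAcknowledge(true);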
Choosing the acknowledgment mode
If raw performance is the priority, the fastest choice is the DUPS_OK_ACKNOWLEDGE mode, which requires you to implement duplicate detection code in your consumer.
3.2. Reducing Context Switching
Overview
Optimize message dispatching on the broker side
On the broker side, you can disable asynchronous dispatch by setting the consumer.dispatchAsync option to false on the destination URI used by the consumer. For example, to disable asynchronous dispatch to the TEST.QUEUE queue, use the following URI on the consumer side:

TEST.QUEUE?consumer.dispatchAsync=false
Alternatively, you can disable asynchronous dispatch for all of a consumer's destinations by setting the dispatchAsync property to false on the ActiveMQ connection factory—for example:

// Java
((ActiveMQConnectionFactory)connectionFactory).setDispatchAsync(false);
Optimize message reception on the consumer side
On the consumer side, the default threading model uses two layers of threads: the Session threads and the MessageConsumer threads. In the special case where only one session is associated with a connection, the two layers are redundant and it is possible to optimize the threading model by eliminating the thread associated with the session layer. This section explains how to enable this consumer threading optimization.
Default consumer threading model
In the default threading model, the first thread layer consists of a thread for each javax.jms.Session instance. The second thread layer consists of a pool of threads, where each thread is associated with a javax.jms.MessageConsumer instance. Each thread in this layer picks the relevant messages out of the session queue, inserting each message into a queue inside the javax.jms.MessageConsumer instance.
Figure 3.1. Default Consumer Threading Model
Optimized consumer threading model
In the optimized threading model, the session thread is eliminated, and the MessageConsumer threads can then pull messages directly from the transport layer.
Figure 3.2. Optimized Consumer Threading Model
Prerequisites
- There must only be one JMS session on the connection. If there is more than one session, a separate thread is always used for each session, irrespective of the value of the alwaysSessionAsync flag.
- One of the following acknowledgment modes must be selected:
  - Session.DUPS_OK_ACKNOWLEDGE
  - Session.AUTO_ACKNOWLEDGE
alwaysSessionAsync option
To enable the consumer threading optimization, set the alwaysSessionAsync option to false on the ActiveMQConnectionFactory (the default is true).
Note that the alwaysSessionAsync option is only supported by the JMS client API.
Example
The following example shows how to initialize a JMS connection with the optimized consumer threading model, by disabling the alwaysSessionAsync flag:
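A minimal sketch of this initialization, assuming the usual javax.jms and org.apache.activemq imports; the broker URL is illustrative:

// Java
ActiveMQConnectionFactory factory =
    new ActiveMQConnectionFactory("tcp://localhost:61616");
// Remove the session thread layer (valid only when the connection has a single session)
factory.setAlwaysSessionAsync(false);

Connection connection = factory.createConnection();
connection.start();
// The single session must use AUTO_ACKNOWLEDGE or DUPS_OK_ACKNOWLEDGE
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);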
3.3. Prefetch Limit
Overview
Prefetch limits
- Queue consumer
- Default prefetch limit is 1000. If you are using a collection of consumers to distribute the workload (many consumers processing messages from the same queue), you typically want this limit to be small. If one consumer is allowed to accumulate a large number of unacknowledged messages, it could starve the other consumers of messages. Also, if the consumer fails, there would be a large number of messages unavailable for processing until the failed consumer is restored.
- Queue browser
- Default prefetch limit is 500.
- Topic consumer
- Default prefetch limit is 32766. The default limit of 32766 is the largest value of a short and is the maximum possible value of the prefetch limit.
- Durable topic subscriber
- Default prefetch limit is 100. You can typically improve the efficiency of a consumer by increasing this prefetch limit.
Optimizing prefetch limits
- Queue consumers—if you have just a single consumer attached to a queue, you can leave the prefetch limit at a fairly large value. But if you are using a group of consumers to distribute the workload, it is usually better to restrict the prefetch limit to a very small number—for example, 0 or 1.
- Durable topic subscribers—the efficiency of topic subscribers is generally improved by increasing the prefetch limit. Try increasing the limit to 1000.
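For illustration, the prefetch limits can be adjusted through the standard ActiveMQ prefetch policy options on the connection URI; the values below are examples, not recommendations:

tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1
tcp://localhost:61616?jms.prefetchPolicy.durableTopicPrefetch=1000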
Chapter 4. Producer Performance
4.1. Async Sends
Overview
Configuring on a transport URI
To enable async sends on the producer, set the jms.useAsyncSend option to true on the transport URI that you use to connect to the broker. For example:

tcp://localhost:61616?jms.useAsyncSend=true
Configuring on a connection factory
Alternatively, you can enable async sends by setting the useAsyncSend property to true directly on the ActiveMQConnectionFactory instance. For example:

// Java
((ActiveMQConnectionFactory)connectionFactory).setUseAsyncSend(true);
Configuring on a connection
You can also enable async sends by setting the useAsyncSend property to true directly on the ActiveMQConnection instance. For example:

// Java
((ActiveMQConnection)connection).setUseAsyncSend(true);
4.2. Flow Control
Overview
Flow control enabled
Figure 4.1. Broker with Flow Control Enabled
Flow control disabled
Figure 4.2. Broker with Flow Control Disabled
Discarding messages
As an alternative to blocking producers, the broker can discard stale messages on fast-moving topics. For example, to discard pending messages for all topics that match the PRICES.> pattern (that is, topic names prefixed by PRICES.), configure the broker as follows:
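A sketch of such a configuration, assuming the pending message limit strategy described in Chapter 5 is used to cap the retained backlog (the limit value is illustrative):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic="PRICES.>">
        <!-- Keep only a bounded backlog per consumer; older messages are discarded -->
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="10"/>
        </pendingMessageLimitStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>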
How to turn off flow control
You can turn off flow control for particular destinations by setting the producerFlowControl attribute to false on a policyEntry element.
For example, to turn off flow control for all destinations whose names start with the FOO. prefix, insert a policy entry like the following into the broker's configuration:
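A minimal sketch of that policy entry, assuming queues are intended (a topic attribute could be used instead) and the standard destinationPolicy wrapper:

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue="FOO.>" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>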
Defining the memory limits
- Per-broker—to set global memory limits on a broker, define a systemUsage element as a child of the broker element (a sketch is shown after this list). The systemUsage sample specifies three distinct memory limits, as follows:
  - memoryUsage—specifies the maximum amount of memory allocated to the broker.
  - storeUsage—for persistent messages, specifies the maximum disk storage for the messages.
    Note: in certain scenarios, the actual disk storage used by JBoss A-MQ can exceed the specified limit. For this reason, it is recommended that you set storeUsage to about 70% of the intended maximum disk storage.
  - tempUsage—for temporary messages, specifies the maximum amount of memory.
- Per-destination—to set a memory limit on a destination, set the memoryLimit attribute on the policyEntry element. The value of memoryLimit can be a string, such as 10 MB or 512 KB. For example, to limit the amount of memory on the FOO.BAR queue to 10 MB, define a policy entry like the following:

  <policyEntry queue="FOO.BAR" memoryLimit="10 MB"/>
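A sketch of the per-broker systemUsage element; the limit values below follow a typical default activemq.xml and may differ between JBoss A-MQ versions:

<broker ...>
  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="64 mb"/>
      </memoryUsage>
      <storeUsage>
        <storeUsage limit="100 gb"/>
      </storeUsage>
      <tempUsage>
        <tempUsage limit="50 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>
</broker>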
Making a producer aware of flow control
By default, when a memory limit is reached, flow control simply causes the producer's send() operation to block, until enough memory is freed up in the broker for the producer to resume sending messages. If you want the producer to be made aware of the fact that the send() operation is blocked due to flow control, you can enable either of the following attributes on the systemUsage element:
sendFailIfNoSpace
- If true, the broker immediately returns an error when flow control is preventing producer send() operations; otherwise, the broker reverts to the default blocking behavior.
sendFailIfNoSpaceAfterTimeout
- Specifies a timeout in units of milliseconds. When flow control is preventing producer send() operations, the broker returns an error after the specified timeout has elapsed.
For example, the following configuration causes the broker to return an error to blocked producer send() operations:
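A sketch using the timeout variant; the attribute sits on the inner systemUsage element, and the 3000 ms timeout and memory limit are illustrative:

<systemUsage>
  <systemUsage sendFailIfNoSpaceAfterTimeout="3000">
    <memoryUsage>
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
  </systemUsage>
</systemUsage>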
Chapter 5. Managing Slow Consumers
Overview
- Limiting the number of messages retained for a consumer: when using non-durable topics, you can specify the number of messages that a destination will hold for a consumer. Once the limit is reached, older messages are discarded when new messages arrive.
- Aborting slow consumers: JBoss A-MQ determines slowness by monitoring how often a consumer's dispatch buffer is full. You can specify that consistently slow consumers be aborted by closing their connections to the broker.
Limiting message retention
You can set a pending message limit strategy (pendingMessageLimitStrategy) on a topic to control the number of messages that are held for slow consumers. When set, the topic will retain the specified number of messages in addition to the consumer's prefetch limit. The default value is -1, which means that the topic will retain all of the unconsumed messages for a consumer.
There are two ways to specify the limit:
- Specifying a constant number of messages over the prefetch limit: the constantPendingMessageLimitStrategy implementation allows you to specify a constant number of messages to retain, as shown in Example 5.1, "Constant Pending Message Limiter" (see the sketches after this list).
  Example 5.1. Constant Pending Message Limiter
- Specifying a multiplier that is applied to the prefetch limit: the prefetchRatePendingMessageLimitStrategy implementation allows you to specify a multiplier that is applied to the prefetch limit. Example 5.2, "Prefetch Limit Based Pending Message Limiter" shows a configuration that retains twice the prefetch limit. So if the prefetch limit is 3, the destination will retain 6 pending messages for each consumer.
  Example 5.2. Prefetch Limit Based Pending Message Limiter
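Sketches of the two policy entries named above, assuming the standard destinationPolicy wrapper; the topic pattern and the limit in Example 5.1 are illustrative, while the multiplier of 2 in Example 5.2 follows the description in the text:

<!-- Example 5.1 (sketch): retain a fixed number of messages over the prefetch limit -->
<policyEntry topic=">">
  <pendingMessageLimitStrategy>
    <constantPendingMessageLimitStrategy limit="50"/>
  </pendingMessageLimitStrategy>
</policyEntry>

<!-- Example 5.2 (sketch): retain a multiple of the prefetch limit -->
<policyEntry topic=">">
  <pendingMessageLimitStrategy>
    <prefetchRatePendingMessageLimitStrategy multiplier="2"/>
  </pendingMessageLimitStrategy>
</policyEntry>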
Aborting slow consumers
You can configure a destination to abort consumers that are consistently slow. A consumer can be aborted when either:
- a consumer is considered slow for a specified amount of time
- a consumer is considered slow a specified number of times
Example 5.3. Aborting Slow Consumers
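A sketch of what Example 5.3 might contain, assuming the strategy is attached to a destination policy entry:

<policyEntry topic=">">
  <slowConsumerStrategy>
    <!-- Default settings: abort consumers that are continuously slow for 30 seconds -->
    <abortSlowConsumerStrategy/>
  </slowConsumerStrategy>
</policyEntry>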
Adding the abortSlowConsumerStrategy element activates the abort slow consumer strategy with default settings. Consumers that are considered slow for more than 30 seconds are aborted. You can modify when slow consumers are aborted using the attributes described in Table 5.1, "Settings for Abort Slow Consumer Strategy".
Table 5.1. Settings for Abort Slow Consumer Strategy
Attribute | Default | Description |
---|---|---|
maxSlowCount | -1 | Specifies the number of times a consumer can be considered slow before it is aborted. -1 specifies that a consumer can be considered slow an infinite number of times. |
maxSlowDuration | 30000 | Specifies the maximum amount of time, in milliseconds, that a consumer can be continuously slow before it is aborted. |
checkPeriod | 30000 | Specifies, in milliseconds, the time between checks for slow consumers. |
abortConnection | false | Specifies whether the broker forces the consumer connection to close. The default value specifies that the broker will send a message to the consumer requesting it to close its connection. true specifies that the broker will automatically close the consumer's connection. |
Example 5.4. Aborting Repeatedly Slow Consumers
<abortSlowConsumerStrategy maxSlowCount="30" />
Chapter 6. Persistent Messaging
Abstract
6.1. Serializing to Disk
KahaDB message store
Synchronous dispatch through a persistent broker
Figure 6.1. Synchronous Dispatch through a Persistent Broker
- The broker pushes the message into the message store. Assuming that the enableJournalDiskSyncs option is true, the message store also writes the message to disk before the broker proceeds.
- The broker now sends the message to all of the interested consumers (but does not wait for consumer acknowledgments). For topics, the broker dispatches the message immediately, while for queues, the broker adds the message to a destination cursor.
- The broker then sends a receipt back to the producer. The receipt can thus be sent back before the consumers have finished acknowledging messages (in the case of topic messages, consumer acknowledgments are usually not required anyway).
Concurrent store and dispatch
Figure 6.2. Concurrent Store and Dispatch
- The broker pushes the message onto the message store and, concurrently, sends the message to all of the interested consumers. After sending the message to the consumers, the broker then sends a receipt back to the producer, without waiting for consumer acknowledgments or for the message store to synchronize to disk.
- As soon as the broker receives acknowledgments from all the consumers, the broker removes the message from the message store. Because consumers typically acknowledge messages faster than a message store can write them to disk, this often means that the write to disk is optimized away entirely. That is, the message is removed from the message store before it is ever physically written to disk.
Configuring concurrent store and dispatch
Concurrent store and dispatch is controlled by the concurrentStoreAndDispatchQueues flag and the concurrentStoreAndDispatchTopics flag. By default, it is enabled for queues, but disabled for topics. To enable concurrent store and dispatch for both queues and topics, configure the kahaDB element in the broker configuration as follows:
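A sketch of such a kahaDB element; the directory and journal size are illustrative:

<persistenceAdapter>
  <kahaDB directory="activemq-data"
          journalMaxFileLength="32mb"
          concurrentStoreAndDispatchQueues="true"
          concurrentStoreAndDispatchTopics="true"/>
</persistenceAdapter>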
Reducing memory footprint of pending messages
You can reduce the amount of memory occupied by pending messages by enabling the reduceMemoryFootprint option, as follows:
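A minimal sketch, assuming the option is set as a destination policy attribute:

<policyEntry queue=">" reduceMemoryFootprint="true"/>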
When the reduceMemoryFootprint option is enabled, a message's marshalled content is cleared immediately after the message is written to persistent storage. This results in approximately a 50% reduction in the amount of memory occupied by the pending messages.
6.2. KahaDB Optimization
Overview
KahaDB architecture
Figure 6.3. KahaDB Architecture
Sample configuration
To configure the broker to use the KahaDB message store, define a persistenceAdapter element containing a kahaDB child element:
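A sketch of this configuration; the directory and journal file size are illustrative and correspond to the properties discussed next:

<broker brokerName="broker" persistent="true">
  ...
  <persistenceAdapter>
    <kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</broker>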
The directory property specifies the directory where the KahaDB files are stored, and the journalMaxFileLength property specifies the maximum size of a data log file.
Performance optimization
You can optimize the performance of the KahaDB message store by setting the following properties (as attributes on the kahaDB element):
- indexCacheSize—(default 10000) specifies the size of the cache in units of pages (where one page is 4 KB by default). Generally, the cache should be as large as possible, to avoid swapping pages in and out of memory. Check the size of your metadata store file, db.data, to get some idea of how big the cache needs to be.
- indexWriteBatchSize—(default 1000) defines the threshold for the number of dirty indexes that are allowed to accumulate, before KahaDB writes the cache to the store. If you want to maximize the speed of the broker, you could set this property to a large value, so that the store is updated only during checkpoints. But this carries the risk of losing a large amount of metadata, in the event of a system failure (causing the broker to restart very slowly).
- journalMaxFileLength—(default 32mb) when the throughput of a broker is very large, you can fill up a journal file quite quickly. Because there is a cost associated with closing a full journal file and opening a new journal file, you can get a slight performance improvement by increasing the journal file size, so that this cost is incurred less frequently.
- enableJournalDiskSyncs—(default true) normally, the broker performs a disk sync (ensuring that a message has been physically written to disk) before sending the acknowledgment back to a producer. You can obtain a substantial improvement in broker performance by disabling disk syncs (setting this property to false), but this reduces the reliability of the broker somewhat.
  Warning: if you need to satisfy the JMS durability requirement and be certain that you do not lose any messages, do not disable journal disk syncs.
6.3. vmCursor on Destination
Overview
Configuring destinations to use the vmCursor
For example, to configure the broker to use the vmCursor for all topics and queues, add the following lines to your broker configuration:
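A sketch of those lines, assuming the standard destinationPolicy wrapper and the VM cursor elements for topic subscribers and queues:

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">">
        <pendingSubscriberPolicy>
          <vmCursor/>
        </pendingSubscriberPolicy>
      </policyEntry>
      <policyEntry queue=">">
        <pendingQueuePolicy>
          <vmQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>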
In this example, the destinations are specified using the wildcard, >, that matches all destination names. You could also specify a more selective destination pattern, so that the VM cursor would be enabled only for those destinations where you are sure that consumers can keep up with the message flow.
Reference
6.4. JMS Transactions
Improving efficiency using JMS transactions
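As noted in Section 3.1, grouping multiple messages into a single JMS transaction amortizes the cost of the commit. The following sketch illustrates the idea, assuming an existing started connection; the destination name and batch size are illustrative:

// Java
// Transacted session: individual acknowledgments are replaced by one commit per batch
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));

for (int i = 0; i < 100; i++) {
    producer.send(session.createTextMessage("message " + i));
}
session.commit();   // the batch is committed (or rolled back) as a unit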
Legal Notice
Trademark Disclaimer
Legal Notice
Third Party Acknowledgements
- JLine (http://jline.sourceforge.net) jline:jline:jar:1.0License: BSD (LICENSE.txt) - Copyright (c) 2002-2006, Marc Prud'hommeaux
mwp1@cornell.edu
All rights reserved.Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of JLine nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - Stax2 API (http://woodstox.codehaus.org/StAX2) org.codehaus.woodstox:stax2-api:jar:3.1.1License: The BSD License (http://www.opensource.org/licenses/bsd-license.php)Copyright (c) <YEAR>, <OWNER> All rights reserved.Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - jibx-run - JiBX runtime (http://www.jibx.org/main-reactor/jibx-run) org.jibx:jibx-run:bundle:1.2.3License: BSD (http://jibx.sourceforge.net/jibx-license.html) Copyright (c) 2003-2010, Dennis M. Sosnoski.All rights reserved.Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of JiBX nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - JavaAssist (http://www.jboss.org/javassist) org.jboss.javassist:com.springsource.javassist:jar:3.9.0.GA:compileLicense: MPL (http://www.mozilla.org/MPL/MPL-1.1.html)
- HAPI-OSGI-Base Module (http://hl7api.sourceforge.net/hapi-osgi-base/) ca.uhn.hapi:hapi-osgi-base:bundle:1.2License: Mozilla Public License 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt)