Chapter 19. Tuning guidelines
Review the following guidelines to tune AMQ Broker.
19.1. Tuning persistence
Review the following information for tips on improving persistence performance.
Persist messages to a file-based journal.
Use a file-based journal for message persistence. AMQ Broker can also persist messages to a Java Database Connectivity (JDBC) database, but this has a performance cost when compared to using a file-based journal.
Put the message journal on its own physical volume.
One of the advantages of an append-only journal is that disk head movement is minimized. This advantage is lost if the disk is shared. When multiple processes, such as a transaction coordinator, databases, and other journals, read and write from the same disk, performance is impacted because the disk head must skip around between different files. If you are using paging or large messages, make sure that they are also put on separate volumes.
Tune the journal-min-files parameter value.
Set the journal-min-files parameter to the number of files that fits your average sustainable rate. If new files are created frequently in the journal data directory, meaning that much data is being persisted, you need to increase the minimum number of files that the journal maintains. This allows the journal to reuse, rather than create, new data files.
Optimize the journal file size.
Align the value of the journal-file-size parameter to the capacity of a cylinder on the disk. The default value of 10 MB should be enough on most systems.
Use the asynchronous IO (AIO) journal type.
For Linux operating systems, keep your journal type as AIO. AIO scales better than Java new I/O (NIO).
Tune the journal-buffer-timeout parameter value.
Increasing the value of the journal-buffer-timeout parameter results in increased throughput at the expense of latency.
Tune the journal-max-io parameter value.
If you are using AIO, you might be able to improve performance by increasing the journal-max-io parameter value. Do not change this value if you are using NIO.
Tune the journal-pool-files parameter.
Set the journal-pool-files parameter, which is the upper threshold of the journal file pool, to a number that is close to your maximum expected load. When required, the journal expands beyond the upper threshold, but shrinks back to the threshold when possible. This allows files to be reused without consuming more disk space than required. If you see new files being created too often in the journal data directory, increase the journal-pool-files parameter. Increasing this parameter allows the journal to reuse more existing files instead of creating new files, which improves performance.
Disable the journal-data-sync parameter if you do not require durability guarantees on journal writes.
If you do not require guaranteed durability on journal writes when a power failure occurs, disable the journal-data-sync parameter and use a journal type of NIO or MAPPED for better performance.
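The journal parameters described in this section are configured as elements within the <core> element of the broker.xml configuration file of the broker instance. The following fragment is a sketch with illustrative values only, assuming a Linux host where the AIO journal is available; tune each value against your own workload and storage hardware.

<core xmlns="urn:activemq:core">
    <!-- AIO scales better than NIO on Linux -->
    <journal-type>ASYNCIO</journal-type>
    <!-- Minimum number of journal files that the broker creates up front and reuses -->
    <journal-min-files>10</journal-min-files>
    <!-- Upper threshold of the journal file pool -->
    <journal-pool-files>20</journal-pool-files>
    <!-- Size of each journal file in bytes (10 MB) -->
    <journal-file-size>10485760</journal-file-size>
    <!-- Journal flush timeout in nanoseconds; larger values favor throughput over latency -->
    <journal-buffer-timeout>500000</journal-buffer-timeout>
    <!-- Maximum number of concurrent writes; adjust only for the AIO journal -->
    <journal-max-io>4096</journal-max-io>
</core>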
19.2. Tuning Java Message Service (JMS)
If you use the JMS API, review the following information for tips on how to improve performance.
Disable the message ID.
If you do not need message IDs, disable them by using the setDisableMessageID() method on the MessageProducer class. Setting the value to true eliminates the need to create a unique ID and decreases the size of the message.
Disable the message timestamp.
If you do not need message timestamps, disable them by using the setDisableMessageTimestamp() method on the MessageProducer class. Setting the value to true eliminates the overhead of creating the timestamp and decreases the size of the message.
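The following sketch shows both producer hints applied to a plain JMS producer. The broker URL and queue name are hypothetical placeholders.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ProducerHintsExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);

            // Skip generating a unique message ID and a timestamp for each message.
            producer.setDisableMessageID(true);
            producer.setDisableMessageTimestamp(true);

            TextMessage message = session.createTextMessage("payload");
            producer.send(message);
        }
    }
}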
Avoid using ObjectMessage.
ObjectMessage is used to send a message that contains a serialized object, meaning the body of the message, or payload, is sent over the wire as a stream of bytes. The Java serialized form of even small objects is quite large and takes up significant space on the wire. It is also slow when compared to custom marshalling techniques. Use ObjectMessage only if you cannot use one of the other message types, for example, if you do not know the type of the payload until runtime.
Avoid AUTO_ACKNOWLEDGE.
The choice of acknowledgment mode for a consumer impacts performance because of the additional overhead and traffic incurred by sending the acknowledgment message over the network. AUTO_ACKNOWLEDGE incurs this overhead because it requires that an acknowledgment is sent to the server for each message received on the client. If possible, use DUPS_OK_ACKNOWLEDGE, which acknowledges messages in a lazy manner, or CLIENT_ACKNOWLEDGE, which means that the client code calls a method to acknowledge the message. Alternatively, batch up many acknowledgments with one acknowledge or commit in a transacted session.
Avoid durable messages.
By default, JMS messages are durable. If you do not need durable messages, set them to be non-durable. Durable messages incur additional overhead because they are persisted to storage.
Use TRANSACTED_SESSION mode to send and receive messages in a single transaction.
By batching messages in a single transaction, AMQ Broker requires only one network round trip on the commit, not on every send or receive.
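As a sketch of the last three guidelines, the following hypothetical example consumes with DUPS_OK_ACKNOWLEDGE and sends non-durable (non-persistent) messages in a transacted session, so only the commit incurs a network round trip. The broker URL and queue name are placeholders.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AckAndTransactionExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        try (Connection connection = factory.createConnection()) {
            connection.start();

            // Consumer: acknowledgments are sent lazily, reducing acknowledgment traffic.
            Session consumerSession = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
            Queue queue = consumerSession.createQueue("exampleQueue"); // placeholder queue name
            MessageConsumer consumer = consumerSession.createConsumer(queue);
            consumer.setMessageListener(message -> {
                // Process the message here.
            });

            // Producer: non-durable messages batched in a transacted session.
            Session producerSession = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = producerSession.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            for (int i = 0; i < 100; i++) {
                producer.send(producerSession.createTextMessage("message " + i));
            }
            // One network round trip for the whole batch.
            producerSession.commit();
        }
    }
}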
19.3. Tuning transport settings
Review the following information for tips on tuning transport settings.
- If your operating system supports TCP auto-tuning, as is the case with later versions of Linux, do not increase the TCP send and receive buffer sizes to try to improve performance. Setting the buffer sizes manually on a system that has auto-tuning can prevent auto-tuning from working and actually reduce broker performance. If your operating system does not support TCP auto-tuning and the broker is running on a fast machine and network, you might improve broker performance by increasing the TCP send and receive buffer sizes. For more information, see Appendix A, Acceptor and Connector Configuration Parameters.
- If you expect many concurrent connections on your broker, or if clients are rapidly opening and closing connections, ensure that the user running the broker has permission to create enough file handles. The way you do this varies between operating systems. On Linux systems, you can increase the number of allowable open file handles in the /etc/security/limits.conf file. For example, add the lines:

serveruser soft nofile 20000
serveruser hard nofile 20000

This example allows the serveruser user to open up to 20000 file handles.
- Set a value for the batchDelay netty TCP parameter and set the directDeliver netty TCP parameter to false to maximize throughput for very small messages.
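For example, both parameters are appended to the acceptor URI in broker.xml. The values shown are illustrative; batchDelay is expressed in milliseconds.

<acceptors>
    <!-- Batch small writes for up to 50 ms; directDeliver=false favors throughput over latency -->
    <acceptor name="artemis">tcp://0.0.0.0:61616?batchDelay=50;directDeliver=false</acceptor>
</acceptors>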
19.4. Tuning the broker virtual machine
Review the following information for tips on how to tune various virtual machine settings.
- Use the latest Java virtual machine for best performance.
Allocate as much memory as possible to the server.
AMQ Broker can run with low memory by using paging. However, you get improved performance if AMQ Broker can keep all queues in memory. The amount of memory you require depends on the size and number of your queues and the size and number of your messages. Use the -Xms and -Xmx JVM arguments to set the available memory.
Tune the heap size.
During periods of high load, it is likely that AMQ Broker generates and destroys large numbers of objects, which can result in a buildup of stale objects. This increases the risk of the broker running out of memory and causing a full garbage collection, which might introduce pauses and unintended behaviour. To reduce this risk, ensure that the maximum heap size (-Xmx) for the JVM is set to at least five times the value of the global-max-size parameter. For example, if the broker is under high load and running with a global-max-size of 1 GB, set the maximum heap size to 5 GB.
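For example, with the broker.xml fragment below (1 GB expressed in bytes), the maximum heap size should be at least 5 GB, for example -Xmx5G. In a standard broker instance, the -Xms and -Xmx arguments are typically set through the JAVA_ARGS variable in the instance's etc/artemis.profile file, although the exact location can vary with how the broker is installed.

<core xmlns="urn:activemq:core">
    <!-- Total memory the broker can use for in-memory messages across all addresses (1 GB) -->
    <global-max-size>1073741824</global-max-size>
</core>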
19.5. Tuning other settings
Review the following information for additional tips on improving performance.
Use asynchronous send acknowledgements.
If you need to send non-transactional, durable messages and do not need a guarantee that they have reached the server by the time the call to send() returns, do not set them to be sent blocking. Instead, use asynchronous send acknowledgements to get the send acknowledgements returned in a separate stream. However, in the case of a server crash, some messages might be lost.
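With the core API, send acknowledgements are delivered asynchronously to a handler that is registered on the session; a confirmation window must be set on the ServerLocator for the handler to be invoked. The following is a minimal sketch with placeholder values.

import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientMessage;
import org.apache.activemq.artemis.api.core.client.ClientProducer;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class AsyncSendAckExample {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616"); // placeholder URL
        locator.setConfirmationWindowSize(1048576); // required for send acknowledgements
        locator.setBlockOnDurableSend(false);       // do not block on each send; rely on the handler instead

        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Acknowledgements for sent messages arrive asynchronously on this handler.
        session.setSendAcknowledgementHandler(message ->
                System.out.println("Server acknowledged: " + message));

        ClientProducer producer = session.createProducer("example.address"); // placeholder address
        ClientMessage message = session.createMessage(true); // durable message
        producer.send(message); // returns without waiting for the acknowledgement

        session.close();
        factory.close();
        locator.close();
    }
}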
Use pre-acknowledge mode.
With pre-acknowledge mode, messages are acknowledged before they are sent to the client. This reduces the amount of acknowledgment traffic on the wire. However, if that client crashes, messages are not redelivered if the client reconnects.
Disable security.
A small performance improvement results from disabling security by setting the security-enabled parameter to false.
Disable persistence.
You can turn off message persistence by setting the persistence-enabled parameter to false.
Sync transactions lazily.
Setting the journal-sync-transactional parameter to false provides better performance when persisting transactions, at the expense of some possibility of loss of transactions on failure.
Sync non-transactional lazily.
Setting the journal-sync-non-transactional parameter to false provides better performance when persisting non-transactional messages, at the expense of some possibility of loss of durable messages on failure.
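The four settings above are elements within the <core> element of broker.xml. The following fragment is a sketch that shows the element names only; each setting trades durability or security for performance, so enable them deliberately and not necessarily together.

<core xmlns="urn:activemq:core">
    <!-- Skip authentication and authorization checks -->
    <security-enabled>false</security-enabled>
    <!-- Do not persist any message data or bindings -->
    <persistence-enabled>false</persistence-enabled>
    <!-- Do not wait for transactional journal writes to reach the disk -->
    <journal-sync-transactional>false</journal-sync-transactional>
    <!-- Do not wait for non-transactional journal writes to reach the disk -->
    <journal-sync-non-transactional>false</journal-sync-non-transactional>
</core>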
Send messages non-blocking.
To avoid waiting for a network round trip for every message sent, set the block-on-durable-send and block-on-non-durable-send parameters to false if you are using Java Message Service (JMS) and Java Naming and Directory Interface (JNDI). Or, set them directly on the ServerLocator by calling the setBlockOnDurableSend() and setBlockOnNonDurableSend() methods.
Optimize the consumer-window-size.
If you have very fast consumers, you can increase the value of the consumer-window-size parameter to effectively disable consumer flow control.
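A minimal sketch of the ServerLocator approach follows, assuming the core client API and a placeholder broker URL; the same options can also be supplied as connection factory properties when you use JMS and JNDI.

import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class NonBlockingSendConfig {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616"); // placeholder URL

        // Do not wait for a network round trip on each durable or non-durable send.
        locator.setBlockOnDurableSend(false);
        locator.setBlockOnNonDurableSend(false);

        // With very fast consumers, a large window effectively disables consumer flow control.
        locator.setConsumerWindowSize(1048576);

        // Pre-acknowledge mode, described earlier in this section, is set the same way.
        locator.setPreAcknowledge(true);

        // Session factories, sessions, and producers are then created from this locator as usual.
    }
}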
Use the core API instead of the JMS API.
JMS operations must be translated into core operations before the server can handle them, resulting in lower performance than when you use the core API. When using the core API, try to use methods that take SimpleString as much as possible. SimpleString, unlike java.lang.String, does not require copying before it is written to the wire. Therefore, if you reuse SimpleString instances between calls, you can avoid some unnecessary copying. Note that the core API is not portable to other brokers.
19.6. Avoiding anti-patterns
Reuse connections, sessions, consumers, and producers where possible.
The most common messaging anti-pattern is the creation of a new connection, session, and producer for every message sent or consumed. These objects take time to create and might involve several network round trips, which is a poor use of resources.
Note: Some popular libraries, such as the Spring JMS Template, use these anti-patterns. If you are using the Spring JMS Template, you might see poor performance. The Spring JMS Template can be used safely only on an application server which caches JMS sessions, for example, by using the Java Connector Architecture, and then only for sending messages. It cannot be used safely to consume messages synchronously, even on an application server.
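As a contrast with the anti-pattern described above, the following hypothetical sketch creates the connection, session, and producer once and reuses them for every message, rather than recreating them per send. The broker URL and queue name are placeholders, and a JMS session must not be shared across threads.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ReusedProducer implements AutoCloseable {
    private final Connection connection;
    private final Session session;
    private final MessageProducer producer;

    public ReusedProducer(String url, String queueName) throws Exception {
        // Created once, up front.
        ConnectionFactory factory = new ActiveMQConnectionFactory(url);
        connection = factory.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        producer = session.createProducer(queue);
    }

    // Reused for every message; no new connection, session, or producer per send.
    public void send(String body) throws Exception {
        producer.send(session.createTextMessage(body));
    }

    @Override
    public void close() throws Exception {
        connection.close();
    }
}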
Avoid fat messages.
Verbose formats such as XML take up significant space on the wire and performance suffers as a result. Avoid XML in message bodies if you can.
Do not create temporary queues for each request.
This common anti-pattern involves the temporary queue request-response pattern. With the temporary queue request-response pattern, a message is sent to a target, and a reply-to header is set with the address of a local temporary queue. When the recipient receives the message, they process it and then send back a response to the address specified in the reply-to header. A common mistake made with this pattern is to create a new temporary queue on each message sent, which drastically reduces performance. Instead, the temporary queue should be reused for many requests.
Do not use message-driven beans unless it is necessary.
Using message-driven beans to consume messages is slower than consuming messages by using a simple JMS message consumer.