
2.3. Optimizing the Protocols


Overview

Protocol optimizations can be made in different protocol layers, as follows:

TCP transport

It is usually possible to improve the performance of the TCP layer by increasing buffer sizes, as follows:
  • Socket buffer size—the default TCP socket buffer size is 64 KB. While this is adequate for the speed of networks in use at the time TCP was originally designed, this buffer size is sub-optimal for modern high-speed networks. The following rule of thumb can be used to estimate the optimal TCP socket buffer size:
    Buffer Size = Bandwidth x Round-Trip-Time
    Where the Round-Trip-Time is the time between initially sending a TCP packet and receiving an acknowledgement of that packet (ping time). Typically, it is a good idea to try doubling the socket buffer size to 128 KB. For example:
    tcp://hostA:61617?socketBufferSize=131072
    For more details, see the Wikipedia article on Network Improvement.
  • I/O buffer size—the I/O buffer is used to buffer the data flowing between the TCP layer and the protocol that is layered above it (such as OpenWire). The default I/O buffer size is 8 KB and you could try doubling this size to achieve better performance. For example:
    tcp://hostA:61617?ioBufferSize=16384
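The buffer-size rule of thumb above is a bandwidth-delay product. The following sketch (a hypothetical helper class, not part of ActiveMQ; the 1 Gbit/s bandwidth and 10 ms round-trip time are assumed example values) shows the arithmetic:

```java
public class BufferSizeEstimate {
    /** Bandwidth-delay product in bytes: bandwidth (bits/s) x RTT (s), converted from bits to bytes. */
    static long estimate(double bandwidthBitsPerSec, double roundTripSec) {
        return (long) (bandwidthBitsPerSec * roundTripSec / 8);
    }

    public static void main(String[] args) {
        // Hypothetical 1 Gbit/s link with a 10 ms ping time.
        System.out.println(estimate(1_000_000_000.0, 0.010) + " bytes");
        // prints 1250000 bytes
    }
}
```

On such a link the estimate is roughly 1.25 MB, which shows why the 64 KB default (and even the doubled 128 KB value) can still be far below the optimum on fast, high-latency networks.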

OpenWire protocol

The OpenWire protocol exposes several options that can affect performance, as shown in Table 2.1, “OpenWire Parameters Affecting Performance”.
Table 2.1. OpenWire Parameters Affecting Performance
  • cacheEnabled (default: true)—Specifies whether to cache commonly repeated values, in order to optimize marshaling.
  • cacheSize (default: 1024)—The number of values to cache. Increase this value to improve the performance of marshaling.
  • tcpNoDelayEnabled (default: false)—When true, disables Nagle's algorithm. Nagle's algorithm was devised to avoid sending tiny TCP packets containing only one or two bytes of data; for example, when TCP is used with the Telnet protocol. If you disable Nagle's algorithm, packets can be sent more promptly, but there is a risk that the number of very small packets will increase.
  • tightEncodingEnabled (default: true)—When true, uses a more compact encoding of basic data types. This results in smaller messages and better network performance, but at the cost of extra CPU time for encoding and decoding. A trade-off is therefore required: you need to determine whether the network or the CPU is the main factor limiting performance.
To set any of these options on an Apache Camel URI, you must add the wireFormat. prefix. For example, to double the size of the OpenWire cache, you can specify the cache size on a URI as follows:
tcp://hostA:61617?wireFormat.cacheSize=2048
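Several wireFormat. options can be combined on one URI as standard query parameters separated by &. The following sketch (a hypothetical helper, not an ActiveMQ API; hostA and the option values are example placeholders) assembles such a URI:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class OpenWireUri {
    /** Hypothetical helper: append wireFormat.-prefixed options to a broker URI as a query string. */
    static String withWireFormat(String base, Map<String, String> opts) {
        String query = opts.entrySet().stream()
                .map(e -> "wireFormat." + e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return base + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("cacheSize", "2048");
        opts.put("tcpNoDelayEnabled", "true");
        System.out.println(withWireFormat("tcp://hostA:61617", opts));
        // prints tcp://hostA:61617?wireFormat.cacheSize=2048&wireFormat.tcpNoDelayEnabled=true
    }
}
```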

Enabling compression

If your application sends large messages and you know that your network is slow, it might be worthwhile to enable compression on your connections. When compression is enabled, the body of each JMS message (but not the headers) is compressed before it is sent across the wire. This results in smaller messages and better network performance. On the other hand, it has the disadvantage of being CPU intensive.
To enable compression, enable the useCompression option on the ActiveMQConnectionFactory class. For example, to initialize a JMS connection with compression enabled in a Java client, insert the following code:
// Java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
...
// Create the connection factory with compression enabled.
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(user, password, url);
connectionFactory.setUseCompression(true);
// Create and start the connection.
Connection connection = connectionFactory.createConnection();
connection.start();
Alternatively, you can enable compression by setting the jms.useCompression option on a producer URI—for example:
tcp://hostA:61617?jms.useCompression=true
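To get a feel for the size/CPU trade-off described above, the following sketch uses the plain JDK java.util.zip.Deflater (not ActiveMQ's internal message codec; the repetitive payload is an assumed example) to compress a message-sized byte array and report the reduction:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class CompressionDemo {
    /** Compress a byte array with DEFLATE and return the compressed bytes. */
    static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // A large, repetitive body compresses very well; a small or random body may not.
        byte[] body = "some repetitive message body ".repeat(1000).getBytes();
        System.out.println(body.length + " -> " + deflate(body).length + " bytes");
    }
}
```

As the demo suggests, compression pays off mainly for large, redundant message bodies on slow links; for small or already-compressed payloads, the extra CPU cost can outweigh the bandwidth savings.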