Chapter 10. Detecting duplicate messages

You can configure the broker to automatically detect and filter duplicate messages. This means that you do not have to implement your own duplicate detection logic.

Without duplicate detection, in the event of an unexpected connection failure, a client cannot determine whether a message it sent to the broker was received. In this situation, the client might assume that the broker did not receive the message, and resend it. This results in a duplicate message.

For example, suppose that a client sends a message to the broker. If the broker or the connection fails before the broker receives and processes the message, the message never arrives at its address, and the client receives no response because of the failure. If the broker or the connection fails after the broker receives and processes the message, the message is routed correctly, but the client still does not receive a response.

In addition, using a transaction to determine success does not necessarily help in these cases. If the broker or connection fails while the transaction commit is being processed, the client is still unable to determine whether it successfully sent the message.

In these situations, to correct the assumed failure, the client resends the most recent message. The result might be a duplicate message that negatively impacts your system. For example, if you are using the broker in an order-fulfilment system, a duplicate message might mean that a purchase order is processed twice.

The following procedures show how to configure duplicate message detection to protect against these types of situations.

10.1. Configuring the duplicate ID cache

To enable the broker to detect duplicate messages, producers must provide unique values for the message property _AMQ_DUPL_ID when sending each message. The broker maintains caches of received values of the _AMQ_DUPL_ID property. When a broker receives a new message on an address, it checks the cache for that address to ensure that it has not previously processed a message with the same value for this property.

Each address has its own cache. Each cache is circular and fixed in size, which means that new entries replace the oldest entries when the cache is full.
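For example, a producer that uses the AMQ Core Protocol JMS client can set this property with the standard JMS setStringProperty method before sending each message. The following is a minimal sketch only; the broker URL, queue name, and message body are placeholder values, and depending on your client version the JMS API package might be javax.jms or jakarta.jms:

    import java.util.UUID;

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class DuplicateIdProducer {
        public static void main(String[] args) throws Exception {
            // Placeholder broker URL and queue name.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            try (Connection connection = factory.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("exampleQueue");
                MessageProducer producer = session.createProducer(queue);

                TextMessage message = session.createTextMessage("example order");

                // Set a unique duplicate-detection ID on the message. If the client
                // later resends the same message after a connection failure, the
                // broker recognizes the repeated _AMQ_DUPL_ID value and discards
                // the duplicate.
                message.setStringProperty("_AMQ_DUPL_ID", UUID.randomUUID().toString());

                producer.send(message);
            }
        }
    }

If a resend is required, the client must reuse the same _AMQ_DUPL_ID value that it used for the original send. Generating a new ID for the retry would defeat duplicate detection.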

The following procedure shows how to globally configure the ID cache used by each address on the broker.

Procedure

  1. Open the <broker_instance_dir>/etc/broker.xml configuration file.
  2. Within the core element, add the id-cache-size and persist-id-cache properties and specify values. For example:

    <configuration>
      <core>
        ...
        <id-cache-size>5000</id-cache-size>
        <persist-id-cache>false</persist-id-cache>
      </core>
    </configuration>

    id-cache-size

    Maximum size of the ID cache, specified as the number of individual entries in the cache. The default value is 20,000 entries. In this example, the cache size is set to 5,000 entries.

    Note

    When the maximum size of the cache is reached, the broker can fail to detect duplicate messages. For example, suppose that you set the cache size to 3,000 entries. If a previous message arrived more than 3,000 messages before a new message that has the same _AMQ_DUPL_ID value, the broker cannot detect the duplicate. As a result, the broker processes both messages. The sketch that follows this procedure illustrates this behavior.

    persist-id-cache
    When the value of this property is set to true, the broker persists IDs to disk as they are received. The default value is true. In the example above, you disable persistence by setting the value to false.
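The following sketch is illustrative only and is not the broker's implementation. It models a fixed-size, circular ID cache to show why a duplicate can be missed once the original entry has been evicted:

    import java.util.LinkedHashSet;

    // Illustrative model of a fixed-size, circular duplicate-ID cache.
    // This is not broker code; it only demonstrates the eviction behavior
    // described in the note above.
    class CircularIdCache {
        private final int maxSize;
        private final LinkedHashSet<String> ids = new LinkedHashSet<>();

        CircularIdCache(int maxSize) {
            this.maxSize = maxSize;
        }

        // Returns true if the ID is already cached, that is, a detected duplicate.
        boolean isDuplicate(String id) {
            if (ids.contains(id)) {
                return true;
            }
            if (ids.size() == maxSize) {
                // The cache is full, so the oldest entry is evicted.
                ids.remove(ids.iterator().next());
            }
            ids.add(id);
            return false;
        }
    }

With a cache size of 3,000, an ID that was last seen more than 3,000 messages earlier has already been evicted, so the check returns false and the message is treated as new.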

Additional resources

  • To learn how to set the duplicate ID message property using the AMQ Core Protocol JMS client, see Using duplicate message detection in the AMQ Core Protocol JMS client documentation.

10.2. Configuring duplicate detection for cluster connections

You can configure cluster connections to insert a duplicate ID header for each message that moves across the cluster.

Prerequisites

Procedure

  1. Open the <broker_instance_dir>/etc/broker.xml configuration file.
  2. Within the core element, for a given cluster connection, add the use-duplicate-detection property and specify a value. For example:

    <configuration>
      <core>
        ...
        <cluster-connections>
           <cluster-connection name="my-cluster">
             <use-duplicate-detection>true</use-duplicate-detection>
             ...
           </cluster-connection>
             ...
        </cluster-connections>
      </core>
    </configuration>

    use-duplicate-detection
    When the value of this property is set to true, the cluster connection inserts a duplicate ID header for each message that it handles.