Chapter 7. Known issues

This section describes known issues in AMQ Broker 7.8.

  • ENTMQBR-17 - AMQ222117: Unable to start cluster connection

    A broker cluster may fail to initialize properly in environments that support IPv6. The failure is due to a SocketException that is indicated by the log message Can’t assign requested address. To work around this issue, set the java.net.preferIPv4Stack system property to true.
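
    For example, on Linux you could append the property to the JAVA_ARGS setting in the broker instance's etc/artemis.profile file (a minimal sketch; the existing JAVA_ARGS value varies by installation):

        # etc/artemis.profile -- append after the existing JAVA_ARGS definition
        JAVA_ARGS="$JAVA_ARGS -Djava.net.preferIPv4Stack=true"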

  • ENTMQBR-463 - Attributes in clustering settings have order restrictions. Would be nice to either have better error message or simply ignore the order

    Currently, the elements in a cluster connection configuration must appear in a specific order. The workaround is to adhere to the order defined in the configuration schema.
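
    For reference, the following broker.xml sketch shows a cluster connection with its elements in a schema-compliant order (the connector and cluster names are placeholders; verify the complete element order against the artemis-configuration schema shipped with your version):

        <cluster-connections>
            <cluster-connection name="my-cluster">
                <connector-ref>netty-connector</connector-ref>
                <retry-interval>500</retry-interval>
                <use-duplicate-detection>true</use-duplicate-detection>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                    <connector-ref>other-broker-connector</connector-ref>
                </static-connectors>
            </cluster-connection>
        </cluster-connections>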

  • ENTMQBR-520 - Receiving from address named the same as a queue bound to another address should not be allowed

    A queue with the same name as an address must be assigned only to that address. Creating a queue with the same name as an existing address, but binding it to an address with a different name, is an invalid configuration. Doing so can result in messages being incorrectly routed to the queue.
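
    For example, a valid broker.xml configuration binds the queue to the address that shares its name (the address and queue names below are placeholders):

        <addresses>
            <address name="orders">
                <anycast>
                    <!-- valid: the queue is bound to the address with the same name -->
                    <queue name="orders"/>
                </anycast>
            </address>
        </addresses>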

  • ENTMQBR-522 - Broker running on windows write problems with remove temp files when shutting down

    On Windows, the broker does not successfully clean up temporary files when it shuts down. This issue causes the shutdown process to be slow. In addition, temporary files not deleted by the broker accumulate over time.

  • ENTMQBR-569 - Conversion of IDs from OpenWire to AMQP results in sending IDs as binary

    When communicating cross-protocol from an A-MQ 6 OpenWire client to an AMQP client, additional information is encoded in the application message properties. This is benign information used internally by the broker and can be ignored.

  • ENTMQBR-599 - Define truststore and keystore by Artemis cli

    Creating a broker instance by using the --ssl-key, --ssl-key-password, --ssl-trust, and --ssl-trust-password parameters does not work. To work around this issue, set the corresponding properties manually in bootstrap.xml after creating the broker.
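
    The following bootstrap.xml sketch shows the kind of manual configuration this involves, assuming keystore and truststore files in the instance's etc directory (the paths and passwords are placeholders; check the attribute names against the bootstrap.xml shipped with your version):

        <web bind="https://0.0.0.0:8161" path="web"
             keyStorePath="${artemis.instance}/etc/broker.ks" keyStorePassword="keystorePass"
             trustStorePath="${artemis.instance}/etc/broker.ts" trustStorePassword="truststorePass">
            <app url="console" war="console.war"/>
        </web>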

  • ENTMQBR-636 - Journal breaks, causing JavaNullPointerException, under perf load (mpt)

    To prevent I/O-related issues from occurring when the broker is managing heavy loads, verify that the JVM is allocated sufficient memory and heap space. See the section titled "Tuning the VM" in the Performance Tuning chapter of the ActiveMQ Artemis documentation.

  • ENTMQBR-648 - JMS Openwire client is unable to send messages to queue with defined purgeOnNoConsumer or queue filter

    Using an A-MQ 6 JMS client to send messages to an address that has a queue with purgeOnNoConsumer set to true fails if the queue has no consumers. It is recommended that you do not set the purgeOnNoConsumer option when using A-MQ 6 JMS clients.
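
    In broker.xml, the corresponding queue attribute is purge-on-no-consumers; leaving it at false (the default) avoids the issue for A-MQ 6 JMS producers. A minimal sketch with placeholder names:

        <address name="orders">
            <anycast>
                <queue name="orders" purge-on-no-consumers="false"/>
            </anycast>
        </address>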

  • ENTMQBR-652 - List of known amq-jon-plugin bugs

    This version of amq-jon-plugin has known issues with the MBeans for broker and queue.

    Issues with the broker MBean:

    • Closing a connection throws java.net.SocketTimeoutException exception
    • listSessions() throws java.lang.ClassCastException
    • Adding address settings throws java.lang.IllegalArgumentException
    • getConnectorServices() operation cannot be found
    • listConsumersAsJSON() operation cannot be found
    • getDivertNames() operation cannot be found
    • Listing network topology throws IllegalArgumentException
    • Remove address settings has wrong parameter name

    Issues with the queue MBean:

    • expireMessage() throws argument type mismatch exception
    • listDeliveringMessages() throws IllegalArgumentException
    • listMessages() throws java.lang.Exception
    • moveMessages() throws IllegalArgumentException with error message argument type mismatch
    • removeMessage() throws IllegalArgumentException with error message argument type mismatch
    • removeMessages() throws exception with error Can’t find operation removeMessage with 2 arguments
    • retryMessage() throws argument type mismatch IllegalArgumentException

  • ENTMQBR-655 - [AMQP] Unable to send message when populate-validated-user is enabled

    The configuration option populate-validated-user is not supported for messages produced using the AMQP protocol.

  • ENTMQBR-738 - Unable to build AMQ 7 examples offline with provided offline repo

    You cannot build the examples included with AMQ Broker in an offline environment. This issue is caused by missing dependencies in the provided offline Maven repository.

  • ENTMQBR-897 - Openwire client/protocol issues with special characters in destination name

    Currently AMQ OpenWire JMS clients cannot access queues and addresses that include the following characters in their name: comma (','), hash ('#'), greater than ('>'), and whitespace.

  • ENTMQBR-944 - [A-MQ7, Hawtio, RBAC] User gets no feedback if operation access was denied by RBAC

    The console can indicate that an operation attempted by an unauthorized user was successful when it was not.

  • ENTMQBR-1498 - Diagram in management console for HA (replication, sharedstore) does not reflect real topology

    If you configure a broker cluster with additional passive slave brokers, the cluster diagram in the web console does not show these passive slaves.

  • ENTMQBR-1848 - "javax.jms.JMSException: Incorrect Routing Type for queue, expecting: ANYCAST" occurs when qpid-jms client consumes a message from a multicast queue as javax.jms.Queue object with FQQN

    Currently, if a Qpid JMS client uses a fully qualified queue name (FQQN) to send a message to a multicast queue on an address that has multiple queues configured, the client receives an error message and the message cannot be sent. To work around this issue, modify the broker configuration to resolve the error and unblock the client.

  • ENTMQBR-1875 - [AMQ 7, ha, replicated store] backup broker appear not to go "live" or shutdown after - ActiveMQIllegalStateException errorType=ILLEGAL_STATE message=AMQ119026: Backup Server was not yet in sync with live

    Removing the paging disk of a master broker while a backup broker is trying to sync with the master broker causes the master to fail. In addition, the backup broker cannot become live because it continues trying to sync with the master.

  • ENTMQBR-2068 - some messages received but not delivered during HA fail-over, fail-back scenario

    Currently, if a broker fails over to its slave while an OpenWire client is sending messages, messages being delivered to the broker when failover occurs could be lost. To work around this issue, ensure that the broker persists the messages before acknowledging them.

  • ENTMQBR-2452 - Upgraded broker AMQ 7.3.0 from AMQ 7.2.4 on Windows cannot log

    If you intend to upgrade a broker instance from 7.2.4 to 7.3.0 on Windows, logging will not work unless you specify the correct log manager version during your upgrade process. For more information, see Upgrading from 7.2.x to 7.3.0 on Windows.

  • ENTMQBR-2470 - [AMQ7, openwire,redelivery] redelivery counter for message increasing, if consumer is closed without consuming any messages

    If a broker sends a message to an Openwire consumer, but the consumer is closed before consuming the message, the broker wrongly increments the redelivery count for the pending message. If the number of occurrences of this behavior exceeds the value of the max-delivery-attempts configuration parameter, the broker sends the message to the dead letter queue (DLQ) or drops the message, based on your configuration. This issue does not affect other protocols, such as the Core protocol.

  • ENTMQBR-2593 - broker does not set message ID header on cross protocol consumption

    A Qpid JMS client successfully retrieves a message ID only if the message was produced by another Qpid JMS client. If the message was produced by a Core JMS or OpenWire client, the Qpid JMS client cannot read the message ID.

  • ENTMQBR-2678 - After isolated master is live again it is unable to connect to the cluster

    In a cluster of three or more live-backup groups that is using the replication high availability (HA) policy, the live broker shuts down when its replication connection fails. However, when the replication connection is restored and the original live broker is restarted, the broker is sometimes unable to rejoin the broker cluster. To enable the original live broker to rejoin the cluster, first stop the new live (original backup) broker, restart the original live broker, and then restart the original backup broker.

  • ENTMQBR-2928 - Broker Operator unable to recover from CR changes causing erroneous state

    If the AMQ Broker Operator encounters an error when applying a Custom Resource (CR) update, the Operator does not recover. Specifically, the Operator stops responding as expected to further updates to your CRs.

    For example, say that a misspelling in the value of the image attribute in your main broker CR causes broker Pods to fail to deploy, with an associated error message of ImagePullBackOff. If you then fix the misspelling and apply the CR changes, the Operator does not deploy the specified number of broker Pods. In addition, the Operator does not respond to any further CR changes.

    To work around this issue, you must delete the CRs that you originally deployed, before redeploying them. To delete an existing CR, use a command such as oc delete -f <CR name>.
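
    For example, assuming the main broker CR was originally deployed from a file named broker_activemqartemis_cr.yaml (an illustrative file name), the delete-and-redeploy sequence is:

        oc delete -f broker_activemqartemis_cr.yaml
        oc create -f broker_activemqartemis_cr.yaml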

  • ENTMQBR-2942 - Pod #0 tries to contact non-existent Pods

    If you change the size attribute of your Custom Resource (CR) instance to scale down a broker deployment, drainer Pods start up to migrate messages from the brokers that were shut down. Before these drainer Pods shut down themselves, the first broker Pod in the cluster can make repeated attempts to connect to them. To work around this issue, follow these steps (a CR sketch follows the steps):

    1) Scale your deployment to a single broker Pod.

    2) Wait for all drainer Pods to start, complete message migration, and then shut down.

    3) If the single remaining broker Pod has log entries for an “unknown host exception”, scale the deployment down to zero broker Pods, and then back to one.

    4) When you have verified that the single remaining broker Pod is not recording exception-based log entries, scale your deployment back to its original size.
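
    The scaling in these steps is done by editing the size attribute of the CR and reapplying it after each change, for example (a sketch that assumes the size attribute is located under spec.deploymentPlan in your CR; the file name is illustrative):

        # broker_activemqartemis_cr.yaml
        spec:
          deploymentPlan:
            size: 1    # step 1: a single broker Pod; later 0, then 1, then the original size

        # reapply after each edit:
        # oc apply -f broker_activemqartemis_cr.yaml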

  • ENTMQBR-3131 - Topology Fails to Update correctly for Backup Brokers when Master is Killed

    When a live broker fails in a cluster with more than four live-backup pairs, the live brokers, including the newly-elected live broker, all correctly report the updated topology. However, the remaining backup brokers might show the wrong topology in the following ways:

    • If a backup broker has failed over in place of the failed live broker, the remaining backup brokers show this backup broker twice in the topology.
    • If a backup broker has not yet failed over in place of the failed live broker, the remaining backup brokers still show the failed live broker in the topology.

    To work around this issue, ensure that the first connector-ref element in the cluster-connection > static-connectors configuration of each backup broker specifies the expected live broker.
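
    For example, in each backup broker's broker.xml, list the connector of the expected live broker first (the connector names below are placeholders):

        <cluster-connection name="my-cluster">
            <connector-ref>this-broker-connector</connector-ref>
            <static-connectors>
                <!-- the expected live broker is listed first -->
                <connector-ref>expected-live-connector</connector-ref>
                <connector-ref>other-backup-connector</connector-ref>
            </static-connectors>
        </cluster-connection>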

  • ENTMQBR-3604 - Enabling Pooling for the LDAP Login Module Causes Shutdown to Hang

    If you enable connection pooling for an LDAP provider (that is, by setting connectionPool to true in the LDAPLoginModule section of the login.config configuration file), connections to the LDAP provider can remain open indefinitely, even when you stop the broker clients. As a result, if you try to shut down the broker in the normal way, the broker does not shut down. Instead, you need to terminate the broker process manually, for example, by using the Linux kill command to send a SIGKILL signal. This situation occurs even if you specify a pool timeout in the JVM arguments for the broker (for example, -Dcom.sun.jndi.ldap.connect.pool.timeout=30000) and there are no active clients when you try to shut down the broker.

    To work around this issue, set a value for the connectionTimeout property in the LDAPLoginModule section of the login.config configuration file. When connection pooling has been requested for a connection, the connectionTimeout property specifies the maximum time that the broker waits for a connection when the maximum pool size has already been reached and all connections in the pool are in use. For more information, see Using LDAP for Authentication in Configuring AMQ Broker.
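
    A minimal login.config sketch of the settings involved (the realm name, connection URL, and other module properties are placeholders; connectionPool and connectionTimeout are the settings discussed here):

        activemq {
            org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
                connectionURL="ldap://ldap.example.com:389"
                connectionPool=true
                connectionTimeout="30000"
                ;
        };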

  • ENTMQBR-3724 - OperatorHub displays inappropriate variant of AMQ Broker Operator

    If you use OperatorHub to deploy the AMQ Broker Operator on OpenShift Container Platform 4.5 or earlier, OperatorHub displays a variant of the Operator that is not appropriate for your host platform. This makes it possible to select the incorrect Operator variant. In particular, regardless of your host platform, OperatorHub displays both the Red Hat Integration - AMQ Broker Operator (the Operator for OpenShift Container Platform) and the AMQ Broker Operator (the Operator for OpenShift Container Platform on IBM Z).

    To work around this issue, select the Operator variant that is appropriate to your platform, as described above. Alternatively, install the Operator using the OpenShift command-line interface (CLI).

    In OpenShift Container Platform 4.6, this issue is resolved. OperatorHub displays only the Operator variant that corresponds to your host platform.

  • ENTMQBR-3846 - MQTT client does not reconnect on broker restart

    When you restart a broker, or a broker fails over, the active broker does not restore connections for previously connected MQTT clients. To work around this issue, reconnect the MQTT client by manually calling the subscribe() method on the client.

  • ENTMQBR-4023 - AMQ Broker Operator: Pod Status pod names do not reflect the reality

    For an Operator-based broker deployment in a given OpenShift project, if you use the oc get pod command to list the broker Pods, the ordinal values for the Pods start at 0, for example, amq-operator-test-broker-ss-0. However, if you use the oc describe command to get the status of broker Pods created from the activemqartemises Custom Resource (that is, oc describe activemqartemises), the Pod ordinal values incorrectly start at 1, for example, amq-operator-test-broker-ss-1. There is no way to work around this issue.

  • ENTMQBR-4127 - AMQ Broker Operator: Route name generated by Operator might be too long for OpenShift

    For each broker Pod in an Operator-based deployment, the default name of the Route that the Operator creates for access to the AMQ Broker management console includes the name of the Custom Resource (CR) instance, the name of the OpenShift project, and the name of the OpenShift cluster. For example, my-broker-deployment-wconsj-0-svc-rte-my-openshift-project.my-openshift-domain. If some of these names are long, the default Route name might exceed the limit of 63 characters that OpenShift enforces. In this case, in the OpenShift Container Platform web console, the Route shows a status of Rejected.

    To work around this issue, use the OpenShift Container Platform web console to manually edit the name of the Route. In the console, click the Route. On the Actions drop-down menu in the top-right corner, select Edit Route. In the YAML editor, find the spec.host property and edit the value.
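
    In the Route YAML, the value to shorten is spec.host, for example (the host name below is illustrative):

        spec:
          host: my-broker-console.my-openshift-domain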

  • ENTMQBR-4140 - AMQ Broker Operator: Installation becomes unusable if storage.size is improperly specified

    If you configure the storage.size property of a Custom Resource (CR) instance to specify the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the Operator installation becomes unusable if you do not specify this value properly. For example, suppose that you set the value of storage.size to 1 (that is, without specifying a unit). In this case, the Operator cannot use the CR to create a broker deployment. In addition, even if you remove the CR and deploy a new version with storage.size specified correctly, the Operator still cannot use this CR to create a deployment as expected.

    To work around this issue, first stop the Operator. In the OpenShift Container Platform web console, click Deployments. For the Pod that corresponds to the AMQ Broker Operator, click the More options menu (three vertical dots). Click Edit Pod Count and set the value to 0. When the Operator Pod has stopped, create a new version of the CR with storage.size correctly specified. Then, to restart the Operator, click Edit Pod Count again and set the value back to 1.
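
    For example, specify storage.size with a unit in the new version of the CR (a sketch that assumes storage is configured under spec.deploymentPlan in your CR; the value is illustrative):

        spec:
          deploymentPlan:
            storage:
              size: 4Gi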

  • ENTMQBR-4141 - AMQ Broker Operator: Increasing Persistent Volume size requires manual involvement even after recreating Stateful Set

    If you try to increase the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the change does not take effect without further manual steps. For example, suppose that you configure the storage.size property of a Custom Resource (CR) instance to specify an initial size for the PVC. If you modify the CR to specify a different value of storage.size, the existing brokers continue to use the original PVC size. This is the case even if you scale the deployment down to zero brokers and then back up to the original number. However, if you scale the size of the deployment up to add additional brokers, the new brokers use the new PVC size.

    To work around this issue, and ensure that all brokers in the deployment use the same PVC size, use the OpenShift Container Platform web console to expand the PVC size used by the deployment. In the console, click Storage → Persistent Volume Claims. Click your deployment. On the Actions drop-down menu in the top-right corner, select Expand PVC and enter a new value.
