Chapter 8. Known issues

This section describes known issues in AMQ Broker 7.11.

  • ENTMQBR-8106 - AMQ Broker Drainer pod doesn’t function properly after changing MessageMigration in CR

    You cannot change the value of the messageMigration attribute in a running broker deployment. To work around this issue, you must set the required value for the messageMigration attribute in a new ActiveMQ Artemis CR and create a new broker deployment.
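
    A minimal sketch of where this attribute lives in the CR is shown below. The CR name and deployment size are placeholders, not values taken from this release note.

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao                 # placeholder CR name
      spec:
        deploymentPlan:
          size: 2                    # placeholder deployment size
          messageMigration: true     # set the required value here before you create the new deployment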

  • ENTMQBR-8166 - Self-signed certificate with UseClientAuth=true prevents communication of Operator with Jolokia

    If the useClientAuth attribute is set to true in the console section of the ActiveMQ Artemis CR, the Operator cannot configure certain features on the broker, for example, creating addresses. In the Operator log, you see an error message that ends with remote error: tls: bad certificate.
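
    For reference, the following sketch shows a console section with useClientAuth set to true, which is the configuration that triggers this issue. The CR name and secret name are placeholders.

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao                    # placeholder CR name
      spec:
        console:
          expose: true
          sslEnabled: true
          useClientAuth: true           # setting this to true triggers the issue
          sslSecret: console-tls-secret # placeholder secret that holds the self-signed certificate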

  • ENTMQBR-7385 - Message flops around the federation queue on slow consumers

    If the local application consumers are very slow or unable to consume messages, a message can be sent back and forth over a federated connection many times before it is finally consumed by an application consumer.

  • ENTMQBR-7820 - [Operator] Supported versions listed in 7.11.0 OPR1 operator log are incorrect

    The Operator log lists support for the following AMQ Broker image versions: 7.10.0, 7.10.1, 7.10.2, 7.11.0, 7.8.1, 7.8.2, 7.8.3, 7.9.0, 7.9.1, 7.9.2, 7.9.3, and 7.9.4. However, the Operator supports only AMQ Broker image versions 7.10.0 and later.

  • ENTMQBR-7359 - Change to current handling of credential secret with 7.10.0 Operator

    The Operator stores the administrator username and password for connecting to the broker in a secret. The default secret name is in the form <custom-resource-name>-credentials-secret. You can create a secret manually or allow the Operator to create a secret.

    If the adminUser and adminPassword attributes are configured in a Custom Resource prior to 7.10.0, the Operator updates a manually-created secret with the values of these attributes. Starting in 7.10.0, the Operator no longer updates a secret that was created manually. Therefore, if you change the values of the adminUser and adminPassword attributes in the CR, you must do one of the following (a CR sketch follows this list):

    • Update the secret with the new username and password.
    • Delete the secret and allow the Operator to create a secret. When the Operator creates a secret, it adds the values of the adminUser and adminPassword attributes if these attributes are specified in the CR. If these attributes are not in the CR, the Operator generates random credentials for the secret.
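
    The following minimal sketch shows the CR attributes involved; the CR name and credential values are placeholders. For a CR named ex-aao, the default secret name would be ex-aao-credentials-secret, following the <custom-resource-name>-credentials-secret pattern described above.

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao                   # placeholder CR name
      spec:
        adminUser: admin               # placeholder username stored in the credentials secret
        adminPassword: adminpassword   # placeholder password stored in the credentials secret
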
  • ENTMQBR-7111 - 7.10 versions of operator tend to remove StatefulSet during upgrade

    If you are upgrading to or from AMQ Broker Operator 7.10.0, the new Operator automatically deletes the existing StatefulSet for each deployment during the reconciliation process. When the Operator deletes the StatefulSet, the existing broker pods are deleted, which causes a temporary broker outage.

    You can work around this issue by running the following command to manually delete the StatefulSet and orphan the running pods before the Operator deletes the StatefulSet: oc delete statefulset <statefulset-name> --cascade=orphan

    Manually deleting the StatefulSet during the upgrade process allows the new Operator to reconcile the StatefulSet without deleting the running pods. For more information, see Upgrading the Operator using OperatorHub in Deploying AMQ Broker on OpenShift.

  • ENTMQBR-6473 - Incompatible configuration due to schema URL change

    When you try to use a broker instance configuration from a previous release with a version 7.9 or 7.10 instance, an incompatible configuration as a result of a schema URL change causes the broker to crash. To work around this issue, update the schema URL in the relevant configuration files as outlined in Upgrading from 7.9.0 to 7.10.0 on Linux.

  • ENTMQBR-4813 - AsynchronousCloseException with large messages and multiple C++ subscribers

    If multiple C++ clients that use the AMQP protocol are running as subscribers on the same host as the broker, and a publisher sends a large message, one of the subscribers crashes.

  • ENTMQBR-5749 - Remove unsupported operators that are visible in OperatorHub

    Only the Operators and Operator channels mentioned in Deploying the Operator from OperatorHub are supported. For technical reasons associated with Operator publication, other Operators and channels are visible in OperatorHub and should be ignored. For reference, the following list shows which Operators are visible but not supported:

    • Red Hat Integration - AMQ Broker LTS - all channels
    • Red Hat Integration - AMQ Broker - alpha, current, and current-76

  • ENTMQBR-569 - Conversion of IDs from OpenWire to AMQP results in sending IDs as binary

    When communicating cross-protocol from an A-MQ 6 OpenWire client to an AMQP client, additional information is encoded in the application message properties. This is benign information used internally by the broker and can be ignored.

  • ENTMQBR-655 - [AMQP] Unable to send message when populate-validated-user is enabled

    The configuration option populate-validated-user is not supported for messages produced using the AMQP protocol.

  • ENTMQBR-1875 - [AMQ 7, ha, replicated store] backup broker appear not to go "live" or shutdown after - ActiveMQIllegalStateException errorType=ILLEGAL_STATE message=AMQ119026: Backup Server was not yet in sync with live

    Removing the paging disk of a primary broker while a backup broker is trying to sync with the primary broker causes the primary to fail. In addition, the backup broker cannot become live because it continues trying to sync with the primary broker.

  • ENTMQBR-2068 - some messages received but not delivered during HA fail-over, fail-back scenario

    Currently, if a broker fails over to its backup while an OpenWire client is sending messages, messages being delivered to the broker when failover occurs could be lost. To work around this issue, ensure that the broker persists the messages before acknowledging them.

  • ENTMQBR-3331 - Stateful set controller can’t recover from CreateContainerError, blocking the operator

    If the AMQ Broker Operator creates a stateful set from a Custom Resource (CR) that has a configuration error, the stateful set controller is unable to roll out the updated stateful set when the error is resolved.

    For example, a misspelling in the value of the image attribute in your main broker CR causes the status of the first Pod created by the stateful set controller to remain Pending. If you then fix the misspelling and apply the CR changes, the AMQ Broker Operator updates the stateful set. However, a Kubernetes known issue prevents the stateful set controller from rolling out the updated stateful set. The controller waits indefinitely for the Pod that has a Pending status to become Ready, so the new Pods are not deployed.

    To work around this issue, you must delete the Pod that has a Pending status to allow the stateful set controller to deploy the new Pods. To check which Pod has a Pending status, use the following command: oc get pods --field-selector=status.phase=Pending. To delete a Pod, use the oc delete pod <pod name> command.

  • ENTMQBR-3846 - MQTT client does not reconnect on broker restart

    When you restart a broker, or a broker fails over, the active broker does not restore connections for previously connected MQTT clients. To work around this issue, manually call the subscribe() method on the MQTT client to reconnect it.

  • ENTMQBR-4127 - AMQ Broker Operator: Route name generated by Operator might be too long for OpenShift

    For each broker Pod in an Operator-based deployment, the default name of the Route that the Operator creates for access to the AMQ Broker management console includes the name of the Custom Resource (CR) instance, the name of the OpenShift project, and the name of the OpenShift cluster. For example, my-broker-deployment-wconsj-0-svc-rte-my-openshift-project.my-openshift-domain. If some of these names are long, the default Route name might exceed the limit of 63 characters that OpenShift enforces. In this case, in the OpenShift Container Platform web console, the Route shows a status of Rejected.

    To work around this issue, use the OpenShift Container Platform web console to manually edit the name of the Route. In the console, click the Route. On the Actions drop-down menu in the top-right corner, select Edit Route. In the YAML editor, find the spec.host property and edit the value.
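
    As an illustration only, the edited Route might look like the following sketch. The shortened host value is a placeholder, the Route name is inferred from the example host above, and all other fields of the Route are left unchanged.

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: my-broker-deployment-wconsj-0-svc-rte   # Route created by the Operator
      spec:
        host: console-0.my-openshift-domain           # placeholder: shortened host within the 63-character limit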

  • ENTMQBR-4140 - AMQ Broker Operator: Installation becomes unusable if storage.size is improperly specified

    If you configure the storage.size property of a Custom Resource (CR) instance to specify the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the Operator installation becomes unusable if you do not specify this value properly. For example, suppose that you set the value of storage.size to 1 (that is, without specifying a unit). In this case, the Operator cannot use the CR to create a broker deployment. In addition, even if you remove the CR and deploy a new version with storage.size specified correctly, the Operator still cannot use this CR to create a deployment as expected.

    To work around this issue, first stop the Operator. In the OpenShift Container Platform web console, click Deployments. For the Pod that corresponds to the AMQ Broker Operator, click the More options menu (three vertical dots). Click Edit Pod Count and set the value to 0. When the Operator Pod has stopped, create a new version of the CR with storage.size correctly specified. Then, to restart the Operator, click Edit Pod Count again and set the value back to 1.
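
    After the Operator Pod is stopped, a corrected CR might look like the following minimal sketch. The CR name and size are placeholders; the important point is that the storage.size value includes a unit.

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao               # placeholder CR name
      spec:
        deploymentPlan:
          persistenceEnabled: true
          storage:
            size: 2Gi              # include a unit; a bare value such as 1 triggers this issue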

  • ENTMQBR-4141 - AMQ Broker Operator: Increasing Persistent Volume size requires manual involvement even after recreating Stateful Set

    If you try to increase the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the change does not take effect without further manual steps. For example, suppose that you configure the storage.size property of a Custom Resource (CR) instance to specify an initial size for the PVC. If you modify the CR to specify a different value of storage.size, the existing brokers continue to use the original PVC size. This is the case even if you scale the deployment down to zero brokers and then back up to the original number. However, if you scale the size of the deployment up to add additional brokers, the new brokers use the new PVC size.

    To work around this issue, and ensure that all brokers in the deployment use the same PVC size, use the OpenShift Container Platform web console to expand the PVC size used by the deployment. In the console, click Storage → Persistent Volume Claims. Click your deployment. On the Actions drop-down menu in the top-right corner, select Expand PVC and enter a new value.
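
    Expanding a PVC in the console is equivalent to increasing the requested storage in the PVC spec, as in the following sketch. The PVC name and size are placeholders, and expansion succeeds only if the storage class allows volume expansion.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ex-aao-ex-aao-0      # placeholder PVC name
      spec:
        resources:
          requests:
            storage: 4Gi           # new, larger size requested for the volume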
