Chapter 4. Configuring a broker


You can modify the settings for a broker by updating the custom resource (CR) that you used to create the broker.

Before you use custom resource (CR) instances to configure your broker deployment, learn how the Operator uses the configuration in a CR to generate the final broker.xml configuration file.

When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod.

The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image.

By default, the AMQ Broker Operator uses a built-in Init Container. The Init Container uses the main CR instance for your deployment to generate the configuration used by each broker application container.

If you specify address settings in the CR, the Operator generates a default configuration and then merges or replaces that configuration with the configuration specified in the CR. This process is described in the section that follows.

A default address settings configuration is generated for the broker. If you specify a custom address settings configuration in the broker custom resource (CR) instance for your deployment, the default configuration is updated.

  1. The Operator runs the Init Container before the broker application container. The Init Container generates a default address settings configuration. The default address settings configuration is shown below.

    <address-settings>
        <!--
        if you define auto-create on certain queues, management has to be auto-create
        -->
        <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!--
            with -1 only the global-max-size is in use for limiting
            -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    
        <!-- default for catch all -->
        <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!--
            with -1 only the global-max-size is in use for limiting
            -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    </address-settings>
  2. If you also specified an address settings configuration in your Custom Resource (CR) instance, the Init Container processes that configuration and converts it to XML.
  3. Based on the value of the applyRule property in the CR, the Init Container merges or replaces the default address settings configuration shown above with the configuration that you have specified in the CR. The result of this merge or replacement is the final address settings configuration that the broker will use.
  4. When the Init Container has finished generating the broker configuration (including address settings), the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. You can inspect the address settings configuration in the broker.xml configuration file. For a running broker, this file is located in the /home/jboss/amq-broker/etc directory.
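
For example, assuming a running broker Pod named ex-aao-ss-0 (Pod names depend on the name of your CR and are an assumption here), you can view the generated file with a command similar to the following:

    $ oc exec ex-aao-ss-0 -- cat /home/jboss/amq-broker/etc/broker.xml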

4.1.2. Directory structure of a broker Pod

Get familiar with the directory structure of an AMQ Broker Pod to help you locate configuration files, data storage directories, and logs.

When you create a broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod.

The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image.

When generating the configuration for a broker instance, the Init Container uses files contained in a default installation directory. This installation directory is on a volume that the Operator mounts to the broker Pod and which the Init Container and broker container share. The path that the Init Container uses to mount the shared volume is defined in an environment variable called CONFIG_INSTANCE_DIR. The default value of CONFIG_INSTANCE_DIR is /amq/init/config. In the documentation, this directory is referred to as <install_dir>.

Note

You cannot change the value of the CONFIG_INSTANCE_DIR environment variable.

By default, the installation directory has the following sub-directories:

Table 4.1. Pod directories

Sub-directory         Contents
<install_dir>/bin     Binaries and scripts needed to run the broker.
<install_dir>/etc     Configuration files.
<install_dir>/data    The broker journal.
<install_dir>/lib     JARs and libraries needed to run the broker.
<install_dir>/log     Broker log files.
<install_dir>/tmp     Temporary web application files.

When the Init Container has finished generating the broker configuration, the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. When the broker Pod is initialized and running, the broker configuration is located in the /home/jboss/amq-broker directory (and subdirectories) of the broker.
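
For example, you can list these directories on a running broker Pod (a sketch; the Pod name ex-aao-ss-0 is an assumption based on a CR named ex-aao):

    $ oc exec ex-aao-ss-0 -- ls /home/jboss/amq-broker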

4.2. Configuring addresses and queues

By default, AMQ Broker automatically creates a queue when a client sends a message to or attempts to consume a message from a queue. You can also create queues manually.

4.2.1. Configuring addresses and queues

You can configure addresses and queues by using the brokerProperties attribute in the ActiveMQArtemis CR instance for your broker deployment. Or, you can configure addresses and queues in the ActiveMQArtemisAddress CR.

Note

The ActiveMQArtemisAddress CR is deprecated in AMQ Broker 7.12.

You can configure addresses and queues under the brokerProperties attribute in the broker custom resource (CR) and also configure settings for each queue that you create.

Prerequisites

You created a broker deployment. For more information, see Section 3.3.1, “Deploying a single broker instance”.

Procedure

  1. Edit the ActiveMQArtemis CR instance for your broker deployment.
  2. In the spec section of the CR, add a brokerProperties attribute if it is not already in the CR.

    spec:
      ...
      brokerProperties:
      ...
  3. Configure an address in the format:

    - addressConfigurations.<address name>.routingTypes=<routing type>

    For example:

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usa-news-address.routingTypes=MULTICAST
      ...
  4. Configure a queue for the address you created in the format:

    - addressConfigurations.<address name>.queueConfigs.<queue name>.address=<address>

    Important

    The value of <address> for the .address setting must match the <address name> for each queue you create. If these values are different, separate addresses are created for each. In the following example, both the address name and the .address setting have the same value of usa-news-address.

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.address=usa-news-address
      ...
  5. Add a separate line for each setting you want to configure for a queue in the format:

    - addressConfigurations.<address name>.queueConfigs.<queue name>.<queue setting>=<value>

    For example:

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.routingType=ANYCAST
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.purgeOnNoConsumers=true
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.maxConsumers=20
      ...
  6. Save the CR.
  7. Check that no errors were detected in the brokerProperties configuration by reviewing the status section of the ActiveMQArtemis CR. For more information, see Section 2.4, “Configuring items not exposed in a custom resource definition (CRD)”.
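
Putting the preceding steps together, the brokerProperties section of a CR that creates the example address and queue looks similar to the following sketch, which reuses the names and values shown above:

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usa-news-address.routingTypes=MULTICAST
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.address=usa-news-address
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.routingType=ANYCAST
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.purgeOnNoConsumers=true
      - addressConfigurations.usa-news-address.queueConfigs.usa-news-queue.maxConsumers=20
      ...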

You can configure addresses and queues in the ActiveMQArtemisAddress CR. You must create a separate, uniquely named CR instance for each address and associated queue that you want to create on the broker.

Prerequisites

Procedure

  1. Start configuring a custom resource (CR) instance to define addresses and queues for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the address CRD. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemisAddress CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemisAddress.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the spec section of the CR, add lines to define an address, queue, and routing type. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemisAddress
    metadata:
        name: myAddressDeployment0
        namespace: myProject
    spec:
        ...
        addressName: myAddress0
        queueName: myQueue0
        routingType: anycast
        ...

    The preceding configuration defines an address named myAddress0 with a queue named myQueue0 and an anycast routing type.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project for the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/address_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you finish configuring the CR, click Create.

4.2.2. Configuring address settings

You configure address settings in either the addressSettings or the brokerProperties attribute in the ActiveMQArtemis CR instance. Which attribute you use depends on whether you created the addresses and queues under the brokerProperties attribute in the ActiveMQArtemis broker CR instance or in an ActiveMQArtemisAddress CR instance.

The following examples show how to use both methods to configure a dead letter address and queue for specific address patterns. A dead letter address and queue can be used by the broker to store messages that cannot be delivered to a client to prevent infinite delivery attempts. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages.

You can specify settings for individual or groups of addresses under the brokerProperties attribute in the broker custom resource (CR).

Procedure

  1. Edit the ActiveMQArtemis CR instance for your broker deployment.
  2. Create a dead letter address and queue to receive undelivered messages. For example:

    spec:
      ...
      brokerProperties:
      ...
      - addressConfigurations.usDeadLetter.routingTypes=MULTICAST
      - addressConfigurations.usDeadLetter.queueConfigs.usDeadLetter-queue.address=usDeadLetter

    For more information about creating addresses and queues by using brokerProperties, see Section 4.2.1.1, “Configuring addresses and queues by using brokerProperties”.

  3. Add separate lines under the brokerProperties attribute in the format addressSettings.<address name>.<address setting> to:

    • Set the dead letter address for undelivered messages to the dead letter address you created.
    • Specify the number of delivery attempts after which a message that cannot be delivered to a matching address is sent to the dead letter address.

      For example:

      spec:
        ...
        brokerProperties:
        ...
        - addressSettings.usa-news.deadLetterAddress=usDeadLetter
        - addressSettings.usa-news.maxDeliveryAttempts=5
        ...

      You can use an asterisk (*) or a number sign (#) character as wildcards to create address patterns. Matching of patterns is done at each delimiter boundary, which is represented by a period (.). The number sign character matches any sequence of zero or more words and can be used at the end of the address string. The asterisk character matches a single word and can be used anywhere within the address string. For example:

      spec:
        ...
        brokerProperties:
        ...
        - addressSettings."usa-news.*".deadLetterAddress=usDeadLetter
        - addressSettings."europe-news.#".deadLetterAddress=euDeadLetter
        ...

      In the preceding example, the following addresses are matched:

    • The usa-news.* address pattern matches any word that follows the usa-news. string, such as usa-news.domestic and usa-news.intl, but not usa-news.domestic.politics.
    • The europe-news.# address pattern matches any address that starts with europe-news, such as europe-news, europe-news.politics and europe-news.politics.fr.

      Note

      In brokerProperties entries, a period (.) is a reserved character. If you want to create an address pattern that contains a period, you must enclose the address in quotation marks. For example, "usa-news.*"

  4. Save the CR.
  5. Check that no errors were detected in the brokerProperties configuration by reviewing the status section of the ActiveMQArtemis CR. For more information, see Section 2.4, “Configuring items not exposed in a custom resource definition (CRD)”.
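
For reference, the combined brokerProperties entries from this procedure look similar to the following sketch, which reuses the example names shown above:

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usDeadLetter.routingTypes=MULTICAST
      - addressConfigurations.usDeadLetter.queueConfigs.usDeadLetter-queue.address=usDeadLetter
      - addressSettings.usa-news.deadLetterAddress=usDeadLetter
      - addressSettings.usa-news.maxDeliveryAttempts=5
      ...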

If you configure addresses and queues in an ActiveMQArtemisAddress CR, you must configure settings for those addresses under the addressSettings attribute in the broker CR.

The example procedure shows how to configure a setting to limit the delivery attempts for a dead letter address and queue that you configured in an ActiveMQArtemisAddress CR.

Prerequisites

You created an address and queue with the following details.

addressName: myDeadLetterAddress

queueName: myDeadLetterQueue

routingType: anycast

For information on creating addresses and queues, see Section 4.2.1, “Configuring addresses and queues”.

Procedure

  1. Edit the ActiveMQArtemis CR instance for your broker deployment.

    1. Using the OpenShift command-line interface:

       oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Operators[Installed Operators].
      3. Click the Red Hat Integration - AMQ Broker for RHEL 9 (Multiarch) operator.
      4. Click the AMQ Broker tab.
      5. Click the name of the ActiveMQArtemis instance.
      6. Click the YAML tab.

    Within the console, a YAML editor opens, enabling you to edit the CR instance.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  2. In the spec section of the CR, add a new addressSettings section that contains a single addressSetting section, as shown below.

      spec:
        ...
        addressSettings:
          addressSetting:
  3. Add a single instance of the match property to the addressSetting block. Specify an address-matching expression. For example:

      spec:
        ...
        addressSettings:
          addressSetting:
          - match: myAddress
      match
      Specifies the address, or set of addresses to which the broker applies the configuration that follows. In this example, the value of the match property corresponds to a single address called myAddress.
  4. Add properties related to undelivered messages and specify values. For example:

      spec:
        ...
        addressSettings:
          addressSetting:
          - match: myAddress
            deadLetterAddress: myDeadLetterAddress
            maxDeliveryAttempts: 5
      deadLetterAddress
      Address to which the broker sends undelivered messages.
      maxDeliveryAttempts

      Maximum number of delivery attempts that a broker makes before moving a message to the configured dead letter address.

      In the preceding example, if the broker makes five unsuccessful attempts to deliver a message to an address that begins with myAddress, the broker moves the message to the specified dead letter address, myDeadLetterAddress.

  5. (Optional) Apply similar configuration to another address or set of addresses. For example:

      spec:
        ...
        addressSettings:
          addressSetting:
          - match: myAddress
            deadLetterAddress: myDeadLetterAddress
            maxDeliveryAttempts: 5
          - match: 'myOtherAddresses#'
            deadLetterAddress: myDeadLetterAddress
            maxDeliveryAttempts: 3

      In this example, the value of the second match property includes a hash wildcard character. The wildcard character means that the preceding configuration is applied to any address that begins with the string myOtherAddresses.

      Note

      If you use a wildcard expression as a value for the match property, you must enclose the value in single quotation marks, for example, 'myOtherAddresses#'.

  6. At the beginning of the addressSettings section, add the applyRule property and specify a value. For example:

      spec:
        ...
        addressSettings:
          applyRule: merge_all
          addressSetting:
          - match: myAddress
            deadLetterAddress: myDeadLetterAddress
            maxDeliveryAttempts: 5
          - match: 'myOtherAddresses#'
            deadLetterAddress: myDeadLetterAddress
            maxDeliveryAttempts: 3

      The applyRule property specifies how the Operator applies the configuration that you add to the CR for each matching address or set of addresses. The values that you can specify are:

      merge_all
      • For address settings specified in both the CR and the default configuration that match the same address or set of addresses:

        • Replace any property values specified in the default configuration with those specified in the CR.
        • Keep any property values that are specified uniquely in the CR or the default configuration. Include each of these in the final, merged configuration.
      • For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration.
      merge_replace
      • For address settings specified in both the CR and the default configuration that match the same address or set of addresses, include the settings specified in the CR in the final, merged configuration. Do not include any properties specified in the default configuration, even if these are not specified in the CR.
      • For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration.
      replace_all
      Replace all address settings specified in the default configuration with those specified in the CR. The final, merged configuration corresponds exactly to that specified in the CR.
      Note

      If you do not explicitly include the applyRule property in your CR, the Operator uses a default value of merge_all.

  7. Save the CR instance.
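
As an illustration of the default merge_all rule, if a CR specified only deadLetterAddress and maxDeliveryAttempts for the catch-all match #, the merged setting for # would take those two values from the CR and keep the remaining values from the default configuration. Conceptually (this is a sketch, not literal Operator output), the result resembles the following:

    <address-setting match="#">
        <!-- values taken from the CR -->
        <dead-letter-address>myDeadLetterAddress</dead-letter-address>
        <max-delivery-attempts>5</max-delivery-attempts>
        <!-- values kept from the default configuration -->
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <max-size-bytes>-1</max-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
    </address-setting>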

4.2.2.3. Configurable address and queue settings

The address and queue settings that you can configure for a broker on OpenShift are fully equivalent to those of a standalone broker deployed on Linux or Windows. However, the names of address and queue settings on OpenShift use camel case, in contrast to the lower-case names used for standalone brokers.

For OpenShift deployments, address and queue settings are in camel case, for example, defaultQueueRoutingType. For standalone deployments, address and queue settings are in lower case and use a dash (-) separator, for example, default-queue-routing-type.

The following table shows some further examples of this naming difference.

Table 4.2. Examples of differences in the names of configuration items

Configuration item for standalone broker deployment    Configuration item for OpenShift broker deployment
address-full-policy                                    addressFullPolicy
auto-create-queues                                     autoCreateQueues
default-queue-routing-type                             defaultQueueRoutingType
last-value-queue                                       lastValueQueue

4.2.3. Deleting addresses and queues

Depending on how you created addresses and queues, you can delete addresses and queues by removing brokerProperties entries in the ActiveMQArtemis CR for your broker deployment or by using the ActiveMQArtemisAddress CR.

You can delete individual addresses and queues by removing the entries from under the brokerProperties attribute.

Prerequisites

Procedure

  1. Edit the ActiveMQArtemis CR instance for your broker deployment.
  2. Add the following brokerProperties entries, which allow the broker to delete any address (the number sign (#) is a wildcard that matches all addresses) and associated queues that it no longer finds in the CR.

    spec:
      ...
      brokerProperties:
      - addressSettings.#.configDeleteAddresses=FORCE
      - addressSettings.#.configDeleteQueues=FORCE
      ...
  3. Under the brokerProperties attribute, delete all the lines that reference an address and queue that you want to remove. For example, delete all the lines that reference the usa-news address to remove this address and queue:

    spec:
      ...
      brokerProperties:
      - addressConfigurations.usa-news.queueConfigs.usa-news-queue.routingType=MULTICAST
      - addressConfigurations.usa-news.queueConfigs.usa-news-queue.purgeOnNoConsumers=true
      - addressConfigurations.usa-news.queueConfigs.usa-news-queue.maxConsumers=20
      ...
  4. Save the CR.

    When the broker applies the updated configuration, it deletes addresses and queues that you removed from the CR.

You can delete addresses and queues in the ActiveMQArtemisAddress CR if you created the addresses and queues in the CR.

Procedure

  1. Ensure that you have an address CR file with the details, for example, the name, addressName and queueName, of the address and queue you want to delete. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemisAddress
    metadata:
        name: myAddressDeployment0
        namespace: myProject
    spec:
        ...
        addressName: myAddress0
        queueName: myQueue0
        routingType: anycast
        ...
  2. In the spec section of the address CR, add the removeFromBrokerOnDelete attribute and set it to a value of true.

    ..
    spec:
       addressName: myAddress1
       queueName: myQueue1
       routingType: anycast
       removeFromBrokerOnDelete: true

    Setting the removeFromBrokerOnDelete attribute to true causes the Operator to remove the address and any associated messages from all brokers in the deployment when you delete the address CR.

  3. Apply the updated address CR to set the removeFromBrokerOnDelete attribute for the address you want to delete.

    $ oc apply -f <path/to/address_custom_resource_instance>.yaml
  4. Delete the address CR to delete the address from the brokers in the deployment.

    $ oc delete -f <path/to/address_custom_resource_instance>.yaml

4.3. Configuring authentication and authorization

You can configure security settings, including authentication using Java Authentication and Authorization Service (JAAS) login modules and authorization rules.

By default, AMQ Broker uses a Java Authentication and Authorization Service (JAAS) properties login module to authenticate and authorize users. The configuration for the default JAAS login module is stored in the /home/jboss/amq-broker/etc/login.config file on each broker Pod. The default login module reads user and role information from the artemis-users.properties and artemis-roles.properties files in the same directory. You add user and role information to the properties files used by the default login module by updating the ActiveMQArtemisSecurity custom resource (CR).

An alternative to updating the ActiveMQArtemisSecurity CR to add user and role information to the default properties files is to configure one or more JAAS login modules in a secret. This secret is mounted as a file on each broker Pod. Configuring JAAS login modules in a secret offers the following advantages over using the ActiveMQArtemisSecurity CR to add user and role information.

  • If you configure a properties login module in a secret, the brokers do not need to restart each time you update the property files. For example, when you add a new user to a properties file and update the secret, the changes take effect without requiring a restart of the broker.
  • You can configure JAAS login modules that are not defined in the ActiveMQArtemisSecurity CRD to authenticate users. For example, you can configure an LDAP login module or any other JAAS login module.

Both methods of configuring authentication and authorization for AMQ Broker are described in the following sections.

You can configure new JAAS login modules, which are stored in an OpenShift secret, to authenticate users with AMQ Broker. In the ActiveMQArtemis custom resource (CR), you must assign broker permissions to roles that are defined in the new login module.

Procedure

  1. Create a text file with your new JAAS login modules configuration and save the file as login.config. By saving the file as login.config, the correct key is inserted in the secret that you create from the text file.

    Note

    You must include the credentials required by the Operator to authenticate with the broker in the login.config file. You can do this by adding the default properties login module, which uses the artemis-users.properties and artemis-roles.properties files, as shown in the following examples. Alternatively, you can add the contents of the artemis-users.properties and artemis-roles.properties files to other user and roles properties files configured in the login.config file.

    Example login.config files

    The following example has a properties login module that is used to authenticate users and the default properties login module required by the Operator to authenticate with the broker:

    activemq {
       org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient
          reload=true
          org.apache.activemq.jaas.properties.user="new-users.properties"
          org.apache.activemq.jaas.properties.role="new-roles.properties";
    
       org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient
          reload=false
          org.apache.activemq.jaas.properties.user="artemis-users.properties"
          org.apache.activemq.jaas.properties.role="artemis-roles.properties"
          baseDir="/home/jboss/amq-broker/etc";
    };

    The following example has an LDAP login module that is used to authenticate users and the default properties login module required by the Operator to authenticate with the broker:

    activemq {
       org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule sufficient
           debug=true
           reload=true
           initialContextFactory="com.sun.jndi.ldap.LdapCtxFactory"
           connectionURL="ldap://ldap.company.com:389"
           connectionUsername="cn=read-only-admin,dc=example,dc=com"
           connectionPassword="password"
           connectionProtocol="s"
           connectionTimeout="5000"
           authentication="simple"
           userBase="dc=example,dc=com"
           userSearchMatching="(uid={0})"
           userSearchSubtree=true
           readTimeout="5000"
           roleBase="dc=example,dc=com"
           roleName="cn"
           roleSearchMatching="(uniqueMember={0})"
           roleSearchSubtree=false
           ;
       org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient
           reload=false
           org.apache.activemq.jaas.properties.user="artemis-users.properties"
           org.apache.activemq.jaas.properties.role="artemis-roles.properties"
           baseDir="/home/jboss/amq-broker/etc"
           ;
    };
    Note

    Place the properties files that contain the credentials used by the Operator to authenticate with the broker at the end of the login.config file. This ensures that the properties files in all login modules are loaded when the broker authenticates the Operator. Otherwise, the custom resource might have a status message showing that some property files are not visible on the broker, until all login modules are used.

  2. If the login.config file you created includes a properties login module, ensure that the users and roles files specified in the module contain user and role information. For example:

    new-users.properties
    ruben=ruben01!
    anne=anne01!
    rick=rick01!
    bob=bob01!
    new-roles.properties
    admin=ruben, rick
    group1=bob
    group2=anne
  3. Use the oc create secret command to create a secret from the text file that you created with the new login module configuration. If the login module configuration includes a properties login module, also include the associated users and roles files in the secret. For example:

    oc create secret generic custom-jaas-config --from-file=login.config --from-file=new-users.properties --from-file=new-roles.properties
    Note

    The secret name must have a suffix of -jaas-config so the Operator can recognize that the secret contains login module configuration and propagate any updates to each broker pod.

    For more information about how to create secrets, see Secrets in the Kubernetes documentation.

  4. Add the secret you created to the custom resource (CR) instance for your broker deployment.

    1. Create an extraMounts element and a secrets element and add the name of the secret. The following example adds a secret named custom-jaas-config to the CR.

      deploymentPlan:
        ...
        extraMounts:
          secrets:
          - "custom-jaas-config"
        ...
  5. In the CR, grant permissions to the roles that are configured on the broker.

    1. In the spec section of the CR, add a brokerProperties element and add the permissions. You can grant a role permissions to a single address. Or, you can specify a wildcard match using the # sign to grant a role permissions to all addresses. For example:

      spec:
        ...
        brokerProperties:
        - securityRoles.#.group2.send=true
        - securityRoles.#.group1.consume=true
        - securityRoles.#.group1.createAddress=true
        - securityRoles.#.group1.createNonDurableQueue=true
        - securityRoles.#.group1.browse=true
        ...

      In the example, the group2 role is assigned send permissions to all addresses and the group1 role is assigned consume, createAddress, createNonDurableQueue and browse permissions to all addresses.

      Note

      In a Java properties file, a colon (:) is a reserved character that is used to separate a key and a value in a key/value pair. If you want to grant permissions to a fully qualified queue name (FQQN), which consists of an address name and a queue name separated by colons (::), you must use the backslash (\) character to escape the colon characters in the FQQN. For example:

      spec:
        ...
        brokerProperties:
        - 'securityRoles."my-address\:\:my-queue".group2.send=true'
  6. Save the CR.

    The Operator mounts the login.config file in the secret in a /amq/extra/secrets/<secret name> directory on each pod and configures the broker JVM to read the mounted login.config file instead of the default login.config file. If the login.config file contains a properties login module, the referenced users and roles properties files are also mounted on each pod.

  7. View the status information in the CR to verify that the brokers in your deployment are using the JAAS login modules in the secret for authentication.

    1. In the CR, navigate to the status section.
    2. Verify that a JaasPropertiesApplied type is present, which indicates that the broker is using the JAAS login modules configured in the secret. For example:

      - lastTransitionTime: "2023-02-06T20:50:01Z"
        message: ""
        reason: Applied
        status: "True"
        type: JaasPropertiesApplied

      When you update any of the files in the secret, the value of the reason field shows OutOfSync until OpenShift Container Platform propagates the latest files in the secret to each broker pod. For example, if you add a new user to the new-users.properties file and update the secret, you see the following status information until the updated file is propagated to each pod:

      - lastTransitionTime: "2023-02-06T20:55:20Z"
        message: 'new-users.properties status out of sync, expected: 287641156, current: 2177044732'
        reason: OutOfSync
        status: "False"
        type: JaasPropertiesApplied
  8. When you update user or role information in a properties file that is referenced in the secret, use the oc set data command to update the secret. You must add all the files to the secret again, including the login.config file. For example, if you add a new user to the new-users.properties file that you created earlier in this procedure, use the following command to update the custom-jaas-config secret:

    oc set data secret/custom-jaas-config --from-file=login.config=login.config --from-file=new-users.properties=new-users.properties --from-file=new-roles.properties=new-roles.properties
    Note

    The broker JVM reads the configuration in the login.config file only when it starts. If you change the configuration in the login.config file, for example, to add a new login module, and update the secret, the broker does not use the new configuration until the broker is restarted.
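
For example, one way to restart a broker is to delete its Pod; because broker Pods are managed by a StatefulSet, OpenShift Container Platform recreates the Pod automatically. The Pod name shown is a placeholder:

    $ oc delete pod <broker pod name>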

You can use the broker’s default JAAS properties login module to authenticate users with AMQ Broker.

For an alternative method of configuring authentication and authorization on AMQ Broker by using secrets, see Section 4.3.1, “Setting up user authentication by creating new JAAS login modules”.

Note

The ActiveMQArtemisSecurity CR is deprecated starting in AMQ Broker 7.12.

If you want to use the default login module to authenticate users, you must add users and roles and assign permissions to those roles in the ActiveMQArtemisSecurity custom resource (CR).

You can deploy the ActiveMQArtemisSecurity CR before or after you create a broker deployment. However, if you deploy the security CR after creating the broker deployment, the broker pod is restarted to accept the new configuration.

Prerequisites

Procedure

  1. Start configuring a Custom Resource (CR) instance to define users and associated security configuration for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemissecurity_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Operators[Installed Operator].
      3. Click the Red Hat Integration - AMQ Broker for RHEL 9 (Multiarch) operator.
      4. Click the AMQ Broker tab.
      5. Click the name of the ActiveMQArtemis instance name
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the spec section of the CR, add lines to define users and roles. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemisSecurity
    metadata:
      name: ex-prop
    spec:
      loginModules:
        propertiesLoginModules:
          - name: "prop-module"
            users:
              - name: "sam"
                password: "samspassword"
                roles:
                  - "sender"
              - name: "rob"
                password: "robspassword"
                roles:
                  - "receiver"
      securityDomains:
        brokerDomain:
          name: "activemq"
          loginModules:
            - name: "prop-module"
              flag: "sufficient"
      securitySettings:
        broker:
          - match: "#"
            permissions:
              - operationType: "send"
                roles:
                  - "sender"
              - operationType: "createAddress"
                roles:
                  - "sender"
              - operationType: "createDurableQueue"
                roles:
                  - "sender"
              - operationType: "consume"
                roles:
                  - "receiver"
                  ...
    Note

    Always specify values for the elements in the preceding example. For example, if you do not specify values for securityDomains.brokerDomain or values for roles, the resulting configuration might cause unexpected results.

    The preceding configuration defines two users:

    • a propertiesLoginModule named prop-module that defines a user named sam with a role named sender.
    • a propertiesLoginModule named prop-module that defines a user named rob with a role named receiver.

    The properties of these roles are defined in the brokerDomain section of securityDomains and in the broker section of securitySettings. For example, the sender role is defined to allow users with that role to create a durable queue on any address. By default, the configuration applies to all deployed brokers defined by CRs in the current namespace. To limit the configuration to particular broker deployments, use the applyToCrNames option described in Section 8.1.3, “Security Custom Resource configuration reference”.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project for the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/security_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.

4.3.2.2. Storing user passwords in a secret

User passwords are stored in clear text in the ActiveMQArtemisSecurity custom resource (CR). You can adopt a more secure way of handling passwords by excluding them from the CR and storing them in a secret instead.

When you apply the CR, the Operator retrieves each user’s password from the secret and inserts it in the artemis-users.properties file on the broker pod.

Procedure

  1. Use the oc create secret command to create a secret and add each user’s name and password. The secret name must follow the naming convention security-properties-<module name>, where <module name> is the name of the login module configured in the CR. For example:

    oc create secret generic security-properties-prop-module \
      --from-literal=sam=samspassword \
      --from-literal=rob=robspassword
  2. In the spec section of the CR, add the user names that you specified in the secret along with the role information, but do not include each user’s password. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemisSecurity
    metadata:
      name: ex-prop
    spec:
      loginModules:
        propertiesLoginModules:
          - name: "prop-module"
            users:
              - name: "sam"
                roles:
                  - "sender"
              - name: "rob"
                roles:
                  - "receiver"
      securityDomains:
        brokerDomain:
          name: "activemq"
          loginModules:
            - name: "prop-module"
              flag: "sufficient"
      securitySettings:
        broker:
          - match: "#"
            permissions:
              - operationType: "send"
                roles:
                  - "sender"
              - operationType: "createAddress"
                roles:
                  - "sender"
              - operationType: "createDurableQueue"
                roles:
                  - "sender"
              - operationType: "consume"
                roles:
                  - "receiver"
                  ...
  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project for the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/security_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you finish configuring the CR, click Create.


4.4. Adding third-party JAR files

You can add third-party JAR files, such as JDBC database drivers, to your deployment so the broker can access external resources at run time.

You must configure the Operator to make the third-party JAR file available on a mounted volume on each broker pod and add the volume path for the JAR file to the broker’s Java classpath.

If a JAR file is less than 1 MB in size, you can add the JAR file to a secret or ConfigMap and configure the Operator to mount the JAR file on each broker pod. If a JAR file is larger than the 1 MB limit for secrets and ConfigMaps, you can configure the Operator to mount a shared volume on each broker pod and download the JAR file to that volume.

If a JAR file for an external resource, such as a JDBC database, is less than 1 MB, you can use a secret or configMap to mount the third-party JAR file on each broker pod. You must also modify the broker’s Java classpath to load the JAR file from the mounted location at runtime.

The size limit of 1 MB is an OpenShift limit for an individual OpenShift secret or configMap.

The following procedure assumes that you are using a secret to mount the JAR file.

Procedure

  1. Use the oc create secret command to create a secret that contains the third-party JAR file that you want to add. For example:

    oc create secret generic log4j-template --from-file=log4j-layout-template-json-2.22.1.jar

    For more information about how to create secrets, see Secrets in the Kubernetes documentation.

  2. Edit the CR for your broker deployment and configure the Operator to mount the secret that contains the third-party JAR file on each broker pod. For example, the following configuration mounts a secret named log4j-template.

    deploymentPlan:
      ...
      extraMounts:
        secrets:
        - "log4j-template"
      ...

    The JAR file is mounted in a /amq/extra/secrets/<secret name> directory on each broker pod. For example, /amq/extra/secrets/log4j-template/log4j-layout-template-json-2.22.1.jar.

  3. Create an ARTEMIS_EXTRA_LIBS environment variable to extend the broker’s Java classpath so the broker loads the JAR file from the mounted directory on each pod. For example:

    spec:
      ...
      env:
      - name: ARTEMIS_EXTRA_LIBS
        value: /amq/extra/secrets/log4j-template
  4. Save the CR.
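
Taken together, the relevant parts of the CR for this procedure look similar to the following sketch, which combines the extraMounts and env fragments shown above:

    spec:
      ...
      deploymentPlan:
        extraMounts:
          secrets:
          - "log4j-template"
      env:
      - name: ARTEMIS_EXTRA_LIBS
        value: /amq/extra/secrets/log4j-template
      ...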

If a JAR file for an external resource, such as a JDBC database, is larger than 1 MB, you cannot use a secret or configMap to mount the JAR file on each broker pod. Instead, you can configure the Operator to download the JAR file to a persistent shared volume that the Operator mounts on each pod.

Prerequisites

A persistent shared volume is available to mount on each broker pod.
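
For example, a minimal PersistentVolumeClaim sketch that could back the extra-jars claim used later in this procedure is shown below. The size and access mode are assumptions; use ReadWriteMany if the volume must be shared by multiple broker Pods:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: extra-jars
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi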

Procedure

  1. Edit the ActiveMQArtemis CR for your broker deployment.
  2. In the broker CR, use the extraVolumes and extraVolumeMounts attributes to add a persistent volume and mount the volume on each broker pod. For example:

    deploymentPlan:
      ...
      extraVolumes:
      - name: extra-volume
        persistentVolumeClaim:
          claimName: extra-jars
      extraVolumeMounts:
      - name: extra-volume
        mountPath: /opt/extra-lib
      ...
  3. Use the resourceTemplates attribute to customize the StatefulSet resource for the deployment. In the customization, use an init container to mount the extra-volume volume that you created on each pod and to download the JAR file to the volume. For example:

    spec:
      ...
      resourceTemplates:
      - selector:
          kind: StatefulSet
        patch:
          kind: StatefulSet
          spec:
            template:
              spec:
                initContainers:
                - name: mysql-jdbc-driver-init
                  volumeMounts:
                  - mountPath: /opt/extra-lib
                    name: extra-volume
                  image: curlimages/curl:8.6.0
                  command:
                  - /bin/sh
                  args:
                  - -c
                  - "if ! [ -f /opt/extra-lib/mysql-connector.jar ]; then curl https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.23/mysql-connector-java-8.0.23.jar --output /opt/extra-lib/mysql-connector.jar ; fi"

    In the example, a curl image is used to download a mysql-connector.jar file to the mounted path of the volume, /opt/extra-lib, if the file is not already on the volume.

  4. Create an ARTEMIS_EXTRA_LIBS environment variable to extend the broker’s Java classpath so the broker loads the JAR file from the shared volume. For example:

    spec:
      ...
      env:
      - name: ARTEMIS_EXTRA_LIBS
        value: /opt/extra-lib
  5. Save the CR.

4.5. Configuring message persistence

By default, AMQ Broker does not persist message data so messages do not survive a broker restart. You can configure AMQ Broker to persist messages in journal files on the filesystem or in a JDBC database of your choice.

If you enable persistence, the default method is to persist messages in journal files on the filesystem.

Note

For current information about which databases and network file systems are supported by AMQ Broker see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal.

4.5.1. Configuring journal-based persistence

When you enable persistence, messages are persisted in journal files on the filesystem by default.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) for your broker deployment.
  2. Set the persistenceEnabled attribute to true. For example:

    spec:
      ...
      deploymentPlan:
        persistenceEnabled: true
      ...
    Note

    When persistenceEnabled is set to true, the Operator creates the required resources to deploy broker pods that have persistent volumes. In addition, the Operator configures the broker to create the journal and other data files on the persistent volume.

  3. Save the CR.

4.5.2. Configuring database persistence

You can configure AMQ Broker to persist messages in a database by using a Java Database Connectivity (JDBC) connection. In the ActiveMQArtemis CR for your broker, configure the details required to connect to the database as well as any custom settings for the connection.

When you persist message data in a database, the broker uses a Java Database Connectivity (JDBC) connection to store message and bindings data in database tables. The data in the tables is encoded using AMQ Broker journal encoding. For information about supported databases, see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal.

Important

An administrator might choose to store message data in a database based on the requirements of an organization’s wider IT infrastructure. However, use of a database can negatively affect the performance of a messaging system. Specifically, writing messaging data to database tables via JDBC creates a significant performance overhead for a broker.

Prerequisites

  • A dedicated database for use with AMQ Broker.
  • The required JDBC driver JAR file is available to the broker at runtime. For information on how to make a JAR file available to the broker at runtime, see Section 4.4, “Adding third-party JAR files”.
  • The deployment has a single broker instance. To ensure that the deployment has a single broker instance, ensure that the deploymentPlan.size attribute is not in the ActiveMQArtemis custom resource (CR). When the deploymentPlan.size attribute is omitted from the CR, a single broker instance is deployed.

Procedure

  1. Edit the ActiveMQArtemis CR for your broker deployment.
  2. Enable JDBC database persistence by using the brokerProperties attribute. For example:

    spec:
      ...
      brokerProperties:
      - storeConfiguration=DATABASE
      - storeConfiguration.jdbcDriverClassName=<class name>
      - storeConfiguration.jdbcConnectionUrl=jdbc:<URL>
      - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      ...
    storeConfiguration
    Specify a value of DATABASE to persist messages to a JDBC database.
    storeConfiguration.jdbcDriverClassName

    Fully-qualified class name of the JDBC database driver. For example, org.postgresql.Driver.

    For information about supported databases, see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal.

    storeConfiguration.jdbcConnectionUrl

    Full JDBC connection URL for your database server including the database name and all configuration parameters. For example:

    jdbc:postgresql://postgresql-service.default.svc.cluster.local:5432/postgres?user=postgres&password=postgres

    In the example, the database name is postgres.

    HAPolicyConfiguration
    Set to SHARED_STORE_PRIMARY to ensure that the broker uses a JDBC lease lock to protect the database tables from concurrent access by multiple brokers. If a second broker instance is deployed unintentionally, the lease lock prevents the second broker from writing to the database.
  3. (Optional) Change the default values for the following attributes, if required:

    storeConfiguration.jdbcNetworkTimeout
    JDBC network connection timeout, in milliseconds. The default value is 20000 milliseconds.
    storeConfiguration.jdbcLockRenewPeriod
    Length, in milliseconds, of the renewal period for the current JDBC lock. When this time elapses, the broker can renew the lock. Set a value that is several times smaller than the value of storeConfiguration.jdbcLockExpiration to give the broker sufficient time to extend the lease and to try to renew the lock in the event of a connection problem. The default value is 2000 milliseconds.
    storeConfiguration.jdbcLockExpiration
    Time, in milliseconds, that the current JDBC lock is considered owned (that is, acquired or renewed), even if the value of storeConfiguration.jdbcLockRenewPeriod has elapsed. The broker periodically tries to renew a lock that it owns according to the value of storeConfiguration.jdbcLockRenewPeriod. If the broker fails to renew the lock, for example, due to a connection problem, the broker keeps trying to renew the lock until the value of storeConfiguration.jdbcLockExpiration has passed since the lock was last successfully acquired or renewed. An exception to the renewal behavior described above is when another broker acquires the lock. This can happen if there is a time misalignment between the Database Management System (DBMS) and the brokers, or if there is a long pause for garbage collection. In this case, the broker that originally owned the lock considers the lock lost and does not try to renew it. If the JDBC lock has not been renewed by the broker that currently owns it after the expiration time elapses, another broker can establish a JDBC lock. The default value is 20000 milliseconds.
    storeConfiguration.jdbcJournalSyncPeriod
    Duration, in milliseconds, for which the broker journal synchronizes with JDBC. The default value is 5 milliseconds.
    storeConfiguration.jdbcMaxPageSizeBytes
    Maximum size, in bytes, of each page file when AMQ Broker persists messages to a JDBC database. The default value is 102400, which is 100KB. The value that you specify also supports byte notation such as "K", "MB", and "GB".
  4. Save the CR.

If you do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator by using a Persistent Volume Claim (PVC).

Important

When you manually provision PVs in OpenShift Container Platform, ensure that you set the reclaim policy for each PV to Retain. If the reclaim policy for a PV is not set to Retain and the PVC that the Operator used to claim the PV is deleted, the PV is also deleted. Deleting a PV results in the loss of any data on the volume. For more information, about setting the reclaim policy, see Understanding persistent storage in the OpenShift Container Platform documentation.
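
For example, a manually provisioned PV with the Retain reclaim policy might look similar to the following sketch. The capacity, access mode, and NFS backing shown are illustrative assumptions only:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: broker-pv-0
    spec:
      capacity:
        storage: 2Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        path: /exports/broker-pv-0
        server: nfs.example.com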

By default, a PVC obtains 2 GiB of storage for each broker from the default storage class configured for the cluster. You can override the default size and storage class requested in the PVC, but only by configuring new values in the CR before deploying the CR for the first time.

4.6.1. Configuring storage size and storage class

Persistent Volume Claims (PVCs) act as requests for storage by pods. You can specify the size and class of storage to request in a PVC submitted by a broker pod.

Note

If you change the storage configuration in the CR after you deploy AMQ Broker, the updated configuration is not applied retrospectively to existing Pods. However, the updated configuration is applied to new Pods that are created if you scale up the deployment.

Prerequisites

  • You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.3.1, “Deploying a single broker instance”.
  • You must have already provisioned Persistent Volumes (PVs) and made these available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage, you need to have two PVs available.

    For more information about provisioning persistent storage, see Understanding persistent storage in the OpenShift Container Platform documentation.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) instance for your broker deployment.
  2. To specify the broker storage size, in the deploymentPlan section of the CR, add a storage section. Add a size property and specify a value. For example:

    spec:
      deploymentPlan:
        size: 1
        image: placeholder
        requireLogin: false
        persistenceEnabled: true
        journalType: nio
        messageMigration: true
        storage:
          size: 4Gi
    storage.size
    Size, in bytes, of the Persistent Volume Claim (PVC) that each broker Pod requires for persistent storage. This property applies only when persistenceEnabled is set to true. The value that you specify must include a unit using byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi).
  3. To specify the storage class that each broker Pod requires for persistent storage, in the storage section, add a storageClassName property and specify a value. For example:

    spec:
      deploymentPlan:
        size: 1
        image: placeholder
        requireLogin: false
        persistenceEnabled: true
        journalType: nio
        messageMigration: true
        storage:
          size: 4Gi
          storageClassName: gp3
    storage.storageClassName

    The name of the storage class to request in the Persistent Volume Claim (PVC). Storage classes provide a way for administrators to describe and classify the available storage. For example, different storage classes might map to specific quality-of-service levels, backup policies and so on.

    If you do not specify a storage class, a persistent volume with the default storage class configured for the cluster is claimed by the PVC.

    Note

    If you specify a storage class, a persistent volume is claimed by the PVC only if the volume’s storage class matches the specified storage class.

  4. Save the CR.

4.7. Configuring resource limits and requests

You can set thresholds on the memory and CPU that can be requested and consumed by the broker container that runs in each Pod. Setting thresholds ensures that containers perform consistently, regardless of the number of pods running on a node.

  • You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running.
  • It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment.
  • The Operator runs a type of container called an Init Container when initializing each broker Pod. Any resource limits and requests that you configure for each broker container also apply to each Init Container. For more information about the use of Init Containers in broker deployments, see Section 4.1, “How the Operator generates the broker configuration”.

You can specify the following limit and request values:

CPU limit
For each broker container running in a Pod, this value is the maximum amount of host-node CPU that the container can consume. If a broker container attempts to exceed the specified CPU limit, OpenShift throttles the container. This ensures that containers have consistent performance, regardless of the number of Pods running on a node.
Memory limit
For each broker container running in a Pod, this value is the maximum amount of host-node memory that the container can consume. If a broker container attempts to exceed the specified memory limit, OpenShift terminates the container. The broker Pod restarts.
CPU request

For each broker container running in a Pod, this value is the amount of host-node CPU that the container requests. The OpenShift scheduler considers the CPU request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources.

The CPU request value is the minimum amount of CPU that the broker container requires to run. However, if there is no contention for CPU on the node, the container can use all available CPU. If you have specified a CPU limit, the container cannot exceed that amount of CPU usage. If there is CPU contention on the node, CPU request values provide a way for OpenShift to weigh CPU usage across all containers.

Memory request

For each broker container running in a Pod, this value is the amount of host-node memory that the container requests. The OpenShift scheduler considers the memory request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources.

The memory request value is the minimum amount of memory that the broker container requires to run. However, the container can consume as much available memory as possible. If you have specified a memory limit, the broker container cannot exceed that amount of memory usage.

CPU is measured in units called millicores. Each node in an OpenShift cluster inspects the operating system to determine the number of CPU cores on the node. Then, the node multiplies that value by 1000 to express the total capacity. For example, if a node has two cores, the CPU capacity of the node is expressed as 2000m. Therefore, if you want to use one-tenth of a single core, you specify a value of 100m.

Memory is measured in bytes. You can specify the value by using byte notation (E, P, T, G, M, K) or the binary equivalents (Ei, Pi, Ti, Gi, Mi, Ki). The value that you specify must include a unit.


Prerequisites

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) instance for your broker deployment.
  2. In the deploymentPlan section of the CR, add a resources section. Add limits and requests sub-sections. In each sub-section, add a cpu and memory property and specify values. For example:

    spec:
      deploymentPlan:
        size: 1
        image: placeholder
        requireLogin: false
        persistenceEnabled: true
        journalType: nio
        messageMigration: true
        resources:
          limits:
            cpu: "500m"
            memory: "1024M"
          requests:
            cpu: "250m"
            memory: "512M"
    limits.cpu
    Each broker container running in a Pod in the deployment cannot exceed this amount of host-node CPU usage.
    limits.memory
    Each broker container running in a Pod in the deployment cannot exceed this amount of host-node memory usage.
    requests.cpu
    Each broker container running in a Pod in the deployment requests this amount of host-node CPU. This value is the minimum amount of CPU required for the broker container to run.
    requests.memory

    Each broker container running in a Pod in the deployment requests this amount of host-node memory. This value is the minimum amount of memory required for the broker container to run.

    Note

    If you specify limits for a resource, but do not specify requests, a broker container requests the configured limits values for that resource. For example, in the following configuration, a broker container requests the configured limits values of 500m cpu and 1024M memory.

    spec:
      deploymentPlan:
        size: 3
        ...
        resources:
          limits:
            cpu: "500m"
            memory: "1024M"
    Important

    Set limits without setting requests to control the precise amount of memory and CPU requested and to ensure that the same values are requested for each broker container if there are multiple brokers in your deployment.

  3. Save the CR.

4.8. Enabling access to AMQ Management Console

Each broker pod in a deployment hosts its own instance of AMQ Management Console at port 8161. You can expose the console in the custom resource (CR) instance for your broker deployment. After it is exposed, you can use the console to view and monitor brokers in a web browser.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) instance for your broker deployment.
  2. In the spec section of the CR, add a console attribute. In the console section, add the expose attribute and set the value to true.

    spec:
      ..
      console:
        expose: true

    When you expose the console, the Operator automatically creates a dedicated service and OpenShift route for the console on each broker pod in the deployment.

  3. If you want to customize the host name of the routes that are exposed for the console to match the internal routing configuration on your OpenShift cluster, you can do one or both of the following:

    • Use the ingressHost attribute to replace the default host name with a custom host name for the console routes.
    • Use the ingressDomain attribute to append a custom domain to the host name. The custom domain is also applied to all other routes, such as routes for acceptors, that are exposed by the CR configuration.
    1. To set a custom host name specifically for the console routes, add the ingressHost attribute and specify the host string. For example:

      spec:
        ..
        console:
          expose: true
          ingressHost: my-console-production.my-subdomain.com
        ..
      Note

      The ingressHost value must be unique on your OpenShift cluster. If your broker cluster has multiple broker pods, you can make the ingressHost value unique by including the $(BROKER_ORDINAL) variable in the value. The Operator replaces this variable in the route it creates for each broker pod with the ordinal number that the StatefulSet assigned to the pod. For example, an ingressHost value of my-console-$(BROKER_ORDINAL)-production.my-subdomain.com sets the host name of the route to my-console-0-production.my-subdomain.com on the first pod, my-console-1-production.my-subdomain.com on the second pod, and so on.

      You can include any of the following variables in the custom host name string for the console route:

      Table 4.3. Variables in a custom host name string for the console route

      $(CR_NAME)
      The value of the metadata.name attribute in the CR.

      $(CR_NAMESPACE)
      The namespace of the custom resource.

      $(BROKER_ORDINAL)
      The ordinal number assigned to the broker pod by the StatefulSet.

      $(ITEM_NAME)
      The name of the acceptor.

      $(RES_TYPE)
      The resource type. A route has a resource type of rte. An ingress has a resource type of ing.

      $(INGRESS_DOMAIN)
      The value of the spec.ingressDomain attribute if it is configured in the CR.
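
      As an illustration only, the following sketch combines some of these variables in an ingressHost value. It assumes that spec.ingressDomain is also configured so that $(INGRESS_DOMAIN) resolves to a value; the host name pattern shown is hypothetical, not a required format.

      spec:
        ..
        console:
          expose: true
          ingressHost: console-$(CR_NAME)-$(BROKER_ORDINAL).$(INGRESS_DOMAIN)
        ..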

    2. To append a custom domain to the host name in routes, add a spec.ingressDomain attribute and specify a custom string. For example:

      spec:
        ...
        ingressDomain: my.domain.com
  4. If your organization’s network policies require that you expose the console by using an ingress instead of a route, complete the following steps:

    1. Add the exposeMode attribute and set the value to ingress.

      spec:
        ..
        console:
          expose: true
          exposeMode: ingress
        ..
    2. If you want to customize the host name of the ingresses that are exposed for the console to match the internal routing configuration on your OpenShift cluster, you can do one or both of the following:

      • Use the ingressHost attribute to replace the default host name with a custom host name.
      • Use the ingressDomain attribute to append a custom domain to the host name. The custom domain is also applied to all other ingresses, such as ingresses for acceptors, that are exposed by the CR configuration.

        1. To set a custom host name specifically for the ingresses created for the console, add the ingressHost attribute and specify the host string. For example:

          spec:
            ..
            console:
              expose: true
              exposeMode: ingress
              ingressHost: my-console-production.my-subdomain.com
            ...

          You can include the same variables in a custom ingress host name as in a custom route host name. These variables are described earlier in this procedure.

        2. To append a custom domain to the host name in ingresses, add a spec.ingressDomain attribute and specify a custom string.

          spec:
            ...
            ingressDomain: my.domain.com

          For the console, the default host name of an ingress is in the format <cr-name>-wconsj-<ordinal>-svc-ing-<namespace>. If, for example, you have a CR named production in the mynamespace namespace, an ingressDomain value of mydomain.com gives a host value of production-wconsj-0-svc-ing-mynamespace.mydomain.com for the ingress created on pod 0.

          For more information on the spec.ingressDomain attribute, see Section 8.1, “Custom resource configuration reference”.

  5. If you want to enable secure connections to the console from clients outside of the OpenShift cluster, complete the following steps:

    1. Add the sslEnabled attribute and set the value to true.

      spec:
        ..
        console:
          expose: true
          exposeMode: ingress
          sslEnabled: true
        ..
    2. Add the sslSecret attribute and specify the name of a secret that contains the certificate to secure the console. For example:

      spec:
        ..
        console:
          expose: true
          exposeMode: ingress
          sslEnabled: true
          sslSecret: console-tls-secret
        ..
    3. Use the spec.env attribute to add an environment variable that configures the console to automatically load a new certificate each time the certificate is renewed. For example:

      spec:
        ..
        env:
        - name: JAVA_ARGS_APPEND
          value: -Dwebconfig.bindings.artemis.sslAutoReload=true
        ..
  6. Save the CR.

4.9. Configuring environment variables for a broker container

You can configure environment variables that are passed to an AMQ Broker container. You can set standard variables, such as TZ for the time zone. Or, you can set an AMQ Broker-specific variable, such as JAVA_ARGS_APPEND, if you need to add to the command-line arguments used by the Java launcher.

Procedure

  1. Edit the Custom Resource (CR) instance for your broker deployment.

    1. Using the OpenShift command-line interface:

      1. Enter the following command:

        oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click Operators → Installed Operators.
      3. Click the Red Hat Integration - AMQ Broker for RHEL 9 (Multiarch) operator.
      4. Click the AMQ Broker tab.
      5. Click the name of the ActiveMQArtemis instance.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, which enables you to configure the CR instance.

  2. In the spec section of the CR, add an env element and add the environment variables that you want to set for the AMQ Broker container. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
    spec:
      ...
      env:
      - name: TZ
        value: Europe/Vienna
      - name: JAVA_ARGS_APPEND
        value: --Hawtio.realm=console
      - name: JDK_JAVA_OPTIONS
        value: -XshowSettings:system
      ...

    In the example, the CR configuration includes the following environment variables:

    • TZ to set the time zone of the AMQ Broker container.
    • JAVA_ARGS_APPEND to configure AMQ Management Console to use a realm named console for authentication.
    • JDK_JAVA_OPTIONS to set the Java -XshowSettings:system parameter, which displays system property settings for the Java Virtual Machine.

      Note

      Values configured using the JDK_JAVA_OPTIONS environment variable are prepended to the command line arguments used by the Java launcher. Values configured using the JAVA_ARGS_APPEND environment variable are appended to the arguments used by the launcher. If an argument is duplicated, the rightmost argument takes precedence.

  3. Save the CR.

    Note

    Red Hat recommends that you do not change AMQ Broker environment variables that have an AMQ_ prefix and that you exercise caution if you want to change the POD_NAMESPACE variable.

4.10. Overriding the default memory limit for a broker

By default, the maximum memory available to a broker is half of the maximum memory that is available to the broker’s Java Virtual Machine (JVM). You can override the default memory limit by specifying a new limit that is either a percentage of the maximum JVM memory or an absolute value.

Note

It is recommended that you set memory usage limits for individual addresses instead of relying on a much larger global limit for all addresses. If you have a global limit for all addresses, the number of objects in memory can reach a level that impacts the normal operation of the broker. To set a memory limit for an individual address, use the maxSizeBytes property. For more information, see Section 8.1, “Custom resource configuration reference”.
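
For example, the following is a minimal sketch of a per-address limit set under brokerProperties, assuming a hypothetical address named orders and an illustrative limit of roughly 10 MB.

spec:
  brokerProperties:
  - addressSettings.orders.maxSizeBytes=10485760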

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) for your broker deployment.
  2. If you want to set a memory limit for the broker that is a percentage of the maximum memory available to the JVM, add a brokerProperties section within the spec section of the CR. Within the brokerProperties section, add a globalMaxSizePercentOfJvmMaxMemory property and specify a memory limit. In the following example, the memory limit is 60 percent of the maximum memory available to the JVM.

    spec:
      ...
      brokerProperties:
      - globalMaxSizePercentOfJvmMaxMemory=60

    By specifying a percentage value, you avoid the need to change the value if the amount of JVM memory changes.

  3. If you want to specify an absolute value for the new memory limit, add a brokerProperties section within the spec section of the CR. Within the brokerProperties section, add a globalMaxSize property and specify a memory limit. For example:

    spec:
      ...
      brokerProperties:
      - globalMaxSize=500m
      ...

    The default unit for the globalMaxSize property is bytes. To change the default unit, add a suffix of m (for MB) or g (for GB) to the value.

  4. Configure the action that you want the broker to take for all further messages after a memory limit you set is reached. For example:

    spec:
      ...
      brokerProperties:
      - addressSettings.<address match>.addressFullMessagePolicy=<action>
      ...

    The valid actions are:

    PAGE
    The broker pages any further messages to disk.
    DROP
    The broker silently drops any further messages.
    FAIL
    The broker drops any further messages and issues exceptions to client message producers.
    BLOCK
    Client message producers block when they try to send further messages.

    In the following example, messages for the usa-news address are paged when a memory limit is reached.

    spec:
      ...
      brokerProperties:
      - addressSettings.usa-news.addressFullMessagePolicy=PAGE
      ...
  5. Save the CR.

4.11. Specifying a custom Init container image

You can use an Init Container resource to perform tasks before the rest of a pod is deployed. For example, you can build a custom Init Container if you want to include extra runtime dependencies, such as .jar files, in the broker installation directory.

When you build a custom Init Container image, you must follow these important guidelines:

  • In the build script (for example, a Docker Dockerfile or Podman Containerfile) that you create for the custom image, the FROM instruction must specify the latest version of the AMQ Broker Operator built-in Init Container as the base image. In your script, include the following line:

    FROM registry.redhat.io/amq7/amq-broker-init-rhel8:7.13
  • The custom image must include a script called post-config.sh that you include in a directory called /amq/scripts. The post-config.sh script is where you can modify or add to the initial configuration that the Operator generates. When you specify a custom Init Container, the Operator runs the post-config.sh script after it uses your CR instance to generate a configuration, but before it starts the broker application container.
  • As described in Section 4.1.2, “Directory structure of a broker Pod”, the path to the installation directory used by the Init Container is defined in an environment variable called CONFIG_INSTANCE_DIR. The post-config.sh script should use this environment variable name when referencing the installation directory (for example, ${CONFIG_INSTANCE_DIR}/lib) and not the actual value of this variable (for example, /amq/init/config/lib).
  • If you want to include additional resources (for example, .xml or .jar files) in your custom broker configuration, you must ensure that these are included in the custom image and accessible to the post-config.sh script.

The following procedure describes how to specify a custom Init Container image.

Prerequisites

Procedure

  1. Edit the CR instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Edit the CR for your deployment.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click the instance for your broker deployment.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to edit the CR instance.

  2. In the deploymentPlan section of the CR, add an initImage attribute and set the value to the URL of your custom Init Container image.

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
    spec:
      deploymentPlan:
        size: 1
        image: placeholder
        initImage: <custom_init_container_image_url>
        requireLogin: false
        persistenceEnabled: true
        journalType: nio
        messageMigration: true
    initImage

    Specifies the full URL for your custom Init Container image, which must be available from a container registry.

    Important

    If a CR has a custom init container image specified in the spec.deploymentPlan.initImage attribute, Red Hat recommends that you also specify the URL of the corresponding broker container image in the spec.deploymentPlan.image attribute to prevent automatic upgrades of the broker image. If you do not specify the URL of a specific broker container image in the spec.deploymentPlan.image attribute, the broker image can be automatically upgraded. After the broker image is upgraded, the versions of the broker and custom init container image are different, which might prevent the broker from running.

    If you have a working deployment that has a custom init container, you can prevent any further upgrades of the broker container image to eliminate the risk of a newer broker image not working with your custom init container image. For more information about preventing upgrades to the broker image, see Section 6.6.2, “Restricting automatic upgrades of images by using image URLs”.
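
    For example, the following is a minimal sketch of pinning both images in the CR. The bracketed values are placeholders for the specific image URLs you want to use, not actual registry paths.

    spec:
      deploymentPlan:
        image: <broker_container_image_url>
        initImage: <custom_init_container_image_url>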

  3. Save the CR.

4.12. Configuring brokers for client connections

You use acceptors to define how a broker pod accepts connections from clients.

4.12.1. Configuring acceptors

When you create an acceptor for inbound client connections, you specify information such as the messaging protocols to enable on the acceptor and the port on the broker pod to use for these protocols.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) for your broker deployment.
  2. In the acceptors attribute, add a named acceptor. Add the protocols and port attributes. Set values to specify the messaging protocols to be used by the acceptor and the port on each broker pod to expose for those protocols. For example:

    spec:
      ..
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
      ..

    The configured acceptor exposes port 5672 to AMQP clients. The full set of values that you can specify for the protocols attribute is shown in the table.

    Table 4.4. Acceptor protocols

    Protocol                    Value
    Core Protocol               core
    AMQP                        amqp
    OpenWire                    openwire
    MQTT                        mqtt
    STOMP                       stomp
    All supported protocols     all

    Note
    • For each broker pod in your deployment, the Operator also creates a default acceptor that uses port 61616. This default acceptor is required for broker clustering and has Core Protocol enabled.
    • By default, the AMQ Broker management console uses port 8161 on the broker pod. Each broker pod in your deployment has a dedicated Service that provides access to the console. For more information, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment.
  3. To use another protocol on the same acceptor, modify the protocols attribute. Specify a comma-separated list of protocols. For example:

    spec:
     ..
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
    ...

    The configured acceptor now exposes port 5672 to AMQP and OpenWire clients.

  4. To specify the number of concurrent client connections that the acceptor allows, add the connectionsAllowed attribute and set a value. For example:

    spec:
      ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        connectionsAllowed: 5
      ...
  5. By default, an acceptor is exposed only to clients in the same OpenShift cluster as the broker deployment. To also expose the acceptor to clients outside OpenShift, set both the expose attribute and the sslEnabled attribute to true.

    spec:
      ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
      ...

    When you enable SSL (that is, Secure Sockets Layer) security on an acceptor (or connector), you can add related configuration, such as:

    • The secret name used to store authentication credentials in your OpenShift cluster. A secret is required when you enable SSL on the acceptor.
    • The Transport Layer Security (TLS) protocols to use for secure network communication. TLS is an updated, more secure version of SSL. You specify the TLS protocols in the enabledProtocols attribute.
    • Whether the acceptor uses mTLS, also known as mutual authentication, between the broker and the client. You specify this by setting the value of the needClientAuth attribute to true.

    For more information about these tasks, see Section 4.12.2, “Securing broker-client connections”.

    When you expose an acceptor to clients outside OpenShift, the Operator automatically creates a dedicated service and OpenShift route for the acceptor on each broker pod in the deployment.

  6. If you want to customize the host name of the route that is exposed for the acceptor on each pod to match the internal routing configuration on your OpenShift cluster, you can do one or both of the following:

    • Use the ingressHost attribute to replace the default host name with a custom host name for a specific acceptor.
    • Use the ingressDomain attribute to append a custom domain to the host name. The custom domain is also applied to all other routes, such as routes for other acceptors and the console, that are exposed by the CR configuration.

      1. To set a custom host name for the acceptor routes, add the ingressHost attribute and specify the host string. For example:

        spec:
          ...
          acceptors:
          - name: my-acceptor
            protocols: amqp,openwire
            port: 5672
            connectionsAllowed: 5
            expose: true
            ingressHost: my-acceptor-production.my-subdomain.com
          ...
        Note

        The ingressHost value must be unique on your OpenShift cluster. If your broker cluster has multiple broker pods, you can make the ingressHost value unique by including the $(BROKER_ORDINAL) variable in the value. The Operator replaces this variable on each broker pod with the ordinal number that the StatefulSet assigned to the pod. For example, an ingressHost value of my-acceptor-$(BROKER_ORDINAL)-production.my-subdomain.com sets the host name of the route to my-acceptor-0-production.my-subdomain.com on the first pod, my-acceptor-1-production.my-subdomain.com on the second pod, and so on.

        You can include any of the following variables in the custom host name string for an acceptor route:

        Table 4.5. Variables in a custom host name string for an acceptor route

        $(CR_NAME)
        The value of the metadata.name attribute in the CR.

        $(CR_NAMESPACE)
        The namespace of the custom resource.

        $(BROKER_ORDINAL)
        The ordinal number assigned to the broker pod by the StatefulSet.

        $(ITEM_NAME)
        The name of the acceptor.

        $(RES_TYPE)
        The resource type. A route has a resource type of rte. An ingress has a resource type of ing.

        $(INGRESS_DOMAIN)
        The value of the spec.ingressDomain attribute if it is configured in the CR.

      2. To append a custom domain to the host name in routes, add a spec.ingressDomain attribute and specify a custom string. For example:

        spec:
          ...
          ingressDomain: my.domain.com
  7. If your organization’s network policies require that you expose acceptors by using an ingress instead of a route, complete the following steps:

    1. Add the exposeMode attribute and set the value to ingress.

      spec:
        ...
        acceptors:
        - name: my-acceptor
          protocols: amqp,openwire
          port: 5672
          connectionsAllowed: 5
          expose: true
          exposeMode: ingress
        ...
    2. If you want to customize the host name of the ingresses that are exposed for the acceptor to match the internal routing configuration on your OpenShift cluster, you can do one or both of the following:

      • Use the ingressHost attribute to replace the default host name with a custom host name.
      • Use the ingressDomain attribute to append a custom domain to the host name. The custom domain is also applied to all other ingresses, such as ingresses for other acceptors and the console, that are exposed by the CR configuration.

        1. To set a custom host name for the ingresses for the acceptor, add the ingressHost attribute and specify the host string. For example:

          spec:
            ...
            acceptors:
            - name: my-acceptor
              protocols: amqp,openwire
              port: 5672
              connectionsAllowed: 5
              expose: true
              exposeMode: ingress
              ingressHost: my-acceptor-production.my-subdomain.com
            ...

          You can include the same variables in a custom ingress host name as in a custom route host name. These variables are described earlier in this procedure.

        2. To append a custom domain to the host name in ingresses, add a spec.ingressDomain attribute and specify a custom string. For example:

          spec:
            ...
            ingressDomain: my-subdomain.domain.com

          For acceptors, the default host name of an ingress is in the format <cr-name>-<acceptor name>-<ordinal>-svc-ing-<namespace>. If, for example, you have a CR named production in the mynamespace namespace and an acceptor named my-acceptor, an ingressDomain value of mydomain.com gives a host value of production-my-acceptor-0-svc-ing-mynamespace.mydomain.com for the ingress created on pod 0.

4.12.2. Securing broker-client connections

If you enable security on an acceptor, by setting the sslEnabled attribute to true, you must configure Transport Layer Security (TLS) to allow certificate-based authentication between the broker and clients. TLS is an updated, more secure version of SSL.

There are two primary TLS configurations:

TLS
Only the broker presents a certificate. The certificate is used by the client to authenticate the broker. This is the most common configuration.
mTLS
Both the broker and the client present certificates. This is sometimes called mutual authentication.

You can use various methods to generate a TLS certificate.

If the broker and clients are running on the same OpenShift cluster, you can use OpenShift to generate a service serving certificate for the broker.

If the broker and clients are not running on the same OpenShift cluster, you must generate a certificate by using a method that allows you to customize the certificate. This section describes two methods that you can use to generate custom certificates:

  • cert-manager Operator for OpenShift
  • Java keytool utility.

4.12.2.1. Using an OpenShift service serving certificate

If you want to secure connections between the broker and clients on the same OpenShift cluster, you can add an annotation to the acceptor service to request that OpenShift generate a service serving TLS certificate.

The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret.
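
As a reference sketch, the secret that OpenShift generates roughly takes the following form. The name myacceptor-ptls matches the example acceptor used in the procedure below, and the data values are base64-encoded by OpenShift.

apiVersion: v1
kind: Secret
metadata:
  name: myacceptor-ptls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>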

Note

The service CA certificate, which issues the service serving certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the previous service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the previous service CA expires.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) for your broker deployment.
  2. Use the resourceTemplates attribute to annotate the service that is created for an acceptor. For example:

    spec:
      ...
      resourceTemplates:
        - selector:
            kind: Service
            name: amq-broker-myacceptor-0-svc
          annotations:
            service.beta.openshift.io/serving-cert-secret-name: myacceptor-ptls
      ...
    resourceTemplates.selector.kind
    Specify that the type of resource to which the customization applies is Service.
    resourceTemplates.selector.name

    Specify the name of the service to which you want to apply the annotation. An acceptor service has a name format of <CR name>-<acceptor name>-<ordinal>-svc, where:

    • <CR name> is the value of the metadata.name attribute in the CR.
    • <acceptor name> is the name of the acceptor. The example assumes that the name of the acceptor is myacceptor.
    • <ordinal> is the ordinal number assigned to the broker pod by the StatefulSet.
    resourceTemplates.annotations

    Specify an annotation of service.beta.openshift.io/serving-cert-secret-name: <secret>, where <secret> is the name of the secret that OpenShift creates for the service.

    Note

    The secret name must match the acceptor name and have a -ptls suffix. The specific suffix is required to allow the Operator to deploy the CR before the secret is created.

  3. In the sslSecret attribute in the CR, specify the secret that contains the broker certificate. For example:

    spec:
      acceptors:
        - name: myacceptor
          protocols: CORE
          port: 61626
          sslEnabled: true
          sslSecret: myacceptor-ptls
  4. In the brokerProperties attribute, configure the broker to automatically load a new certificate each time the certificate is renewed in OpenShift. For example:

    spec:
      ...
      brokerProperties:
      - "acceptorConfigurations.myacceptor.params.sslAutoReload=true"
      ...
  5. Add the public key of the service serving certificate to each client’s trust store.
  6. If you want to configure mTLS authentication between the broker and clients, complete the following steps.

    1. Create a trust bundle that contains the certificate of each client that you want the broker to trust and add the trust bundle to a secret, for example, trusted-clients-bundle.
    2. In the acceptors configured in the broker CR, add the needClientAuth attribute and set the value to true to require client authentication. For example:

      spec:
        ..
        acceptors:
          - name: myacceptor
            protocols: all
            port: 62666
            sslEnabled: true
            sslSecret: myacceptor-ptls
            needClientAuth: true
        ..
    3. In the trustSecret attribute of each acceptor, specify the secret that contains the trust bundle of client certificates. For example:

      spec:
        ..
        acceptors:
          - name: new-acceptor
            protocols: all
            port: 62666
            sslEnabled: true
            sslSecret: myacceptor-ptls
            needClientAuth: true
            trustSecret: trusted-clients-bundle
        ..
  7. Save the CR.

4.12.2.2. Using the cert-manager Operator for OpenShift

You can use cert-manager Operator for OpenShift to generate a TLS certificate to secure connections between a broker and clients. The cert-manager Operator for OpenShift is a cluster-wide service that provides application certificate lifecycle management.

The following example procedure describes how to configure Transport Layer Security (TLS) by using a self-signed certificate. If your policy requires certificates that are signed by a recognized certificate manager, you can request the certificates by using the cert-manager Operator for OpenShift.

Prerequisites

Procedure

  1. Create a YAML file, for example, self-signed-issuer.yaml, that defines a root self-signed issuer. An issuer is an OpenShift resource that represents certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests.

    The following example YAML creates a self-signed issuer, which you can then use to create a certificate authority (CA) certificate. Your CA certificate can be managed by the cert-manager Operator.

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: root-issuer
    spec:
      selfSigned: {}
  2. Create a YAML file, for example, root-ca.yaml, that defines a root CA certificate.

    In the issuerRef.name field, specify the name of the self-signed issuer, root-issuer, that you created. For example:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: root-ca
      namespace: cert-manager
    spec:
      isCA: true
      commonName: "amq.io.root"
      secretName: root-ca-secret
      subject:
        organizations:
        - "www.amq.io"
      issuerRef:
        name: root-issuer
        kind: ClusterIssuer

    The Certificate is created in Privacy Enhanced Mail (PEM) format in a secret named root-ca-secret.

  3. Create a YAML file, for example, root-ca-issuer.yaml, that defines a CA issuer for issuing certificates that are signed by the root CA. For example:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: root-ca-issuer
    spec:
      ca:
        secretName: root-ca-secret
  4. Create a YAML file, for example, broker-cert.yaml, that defines a broker certificate.

    In the issuerRef.Name field, specify the name of the issuer, root-ca-issuer, that you created to issue certificates that are signed by the root CA. For example:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: broker-cert
    spec:
      isCA: false
      commonName: "amq.io"
      dnsNames:
        - "amq-broker-ss-0.amq-broker-svc-rte-default.cluster.local"
        - "amq-broker-ss-1.amq-broker-svc-rte-default.cluster.local"
      secretName: broker-cert-secret
      subject:
        organizations:
        - "www.amq.io"
      issuerRef:
        name: root-ca-issuer
        kind: ClusterIssuer
  5. Deploy the custom resources that you defined for issuers and certificates in YAML files to create the corresponding OpenShift objects. For example:

    $ oc create -f self-signed-issuer.yaml
    $ oc create -f root-ca.yaml
    $ oc create -f root-ca-issuer.yaml
    $ oc create -f broker-cert.yaml
  6. Edit the ActiveMQArtemis CR for your broker deployment.
  7. Specify the secret that contains the broker certificate in the sslSecret attribute of each acceptor that you want to secure. For example:

    spec:
      ..
      acceptors:
        - name: new-acceptor
          protocols: all
          port: 62666
          sslEnabled: true
          needClientAuth: false
          sslSecret: broker-cert-secret
      ..
  8. In the brokerProperties attribute, configure the broker to automatically load a new broker certificate for the acceptor each time the certificate is renewed by the cert-manager Operator for OpenShift. For example:

    spec:
      ...
      brokerProperties:
      - "acceptorConfigurations.new-acceptor.params.sslAutoReload=true"
      ...
  9. Add the root CA certificate that signed the broker certificate, which was created in a secret named root-ca-secret in this example procedure, to each client’s trust store, so clients can trust the broker.
  10. If you want to configure mTLS authentication between the broker and clients, complete the following steps.

    1. Use Trust Manager for Kubernetes to create a trust bundle that contains the certificate of each client that you want the broker to trust and add the trust bundle to a secret, for example, trusted-clients-bundle. For information on how to create a trust bundle, see the trust-manager documentation.
    2. In the acceptors configured in the broker CR, add the needClientAuth attribute and set the value to true to require client authentication. For example:

      spec:
        ..
        acceptors:
          - name: new-acceptor
            protocols: all
            port: 62666
            sslEnabled: true
            sslSecret: broker-cert-secret
            needClientAuth: true
        ..
    3. In the trustSecret attribute of each acceptor, specify the secret that contains the trust bundle of client certificates. For example:

      spec:
        ..
        acceptors:
          - name: new-acceptor
            protocols: all
            port: 62666
            sslEnabled: true
            sslSecret: broker-cert-secret
            needClientAuth: true
            trustSecret: trusted-clients-bundle
        ..
  11. Save the CR.

4.12.2.3. Using the Java keytool utility

You can use keytool, a certificate management utility included with Java, to generate a TLS certificate to secure connections between a broker and clients.

4.12.2.3.1. Configuring one-way TLS

You can secure communications between the broker and clients with one-way Transport Layer Security (TLS) authentication. In one-way TLS, only the broker presents a certificate, which clients use to authenticate the broker.

Prerequisites

Procedure

  1. Generate a self-signed certificate for the broker key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks
  2. Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem
  3. On the client, create a client trust store that imports the broker certificate.

    $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem
  4. Log in to OpenShift Container Platform as an administrator. For example:

    $ oc login -u system:admin
  5. Switch to the project that contains your broker deployment. For example:

    $ oc project <my_openshift_project>
  6. Create a secret to store the TLS credentials. For example:

    $ oc create secret generic my-tls-secret \
    --from-file=broker.ks=~/broker.ks \
    --from-file=client.ts=~/broker.ks \
    --from-literal=keyStorePassword=<password> \
    --from-literal=trustStorePassword=<password>
    Note

    When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts. For one-way TLS between the broker and a client, a trust store is not actually required. However, to successfully generate the secret, you need to specify some valid store file as a value for client.ts. The preceding step provides a "dummy" value for client.ts by reusing the previously-generated broker key store file. This is sufficient to generate a secret with all of the credentials required for one-way TLS.

  7. Link the secret to the service account that you created when installing the Operator. For example:

    $ oc secrets link sa/amq-broker-operator secret/my-tls-secret
  8. Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        sslEnabled: true
        sslSecret: my-tls-secret
        expose: true
        connectionsAllowed: 5
    ...
4.12.2.3.2. Configuring two-way TLS

You can secure communications between the broker and clients with two-way Transport Layer Security (TLS) authentication. In two-way TLS, both the broker and clients present certificates which they use to authenticate each other in a process called mutual authentication.

Prerequisites

Procedure

  1. Generate a self-signed certificate for the broker key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks
  2. Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem
  3. On the client, create a client trust store that imports the broker certificate.

    $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem
  4. On the client, generate a self-signed certificate for the client key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks
  5. On the client, export the certificate from the client key store, so that it can be shared with the broker. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -alias broker -keystore ~/client.ks -file ~/client_cert.pem
  6. Create a broker trust store that imports the client certificate.

    $ keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem
  7. Log in to OpenShift Container Platform as an administrator. For example:

    $ oc login -u system:admin
  8. Switch to the project that contains your broker deployment. For example:

    $ oc project <my_openshift_project>
  9. Create a secret to store the TLS credentials. For example:

    $ oc create secret generic my-tls-secret \
    --from-file=broker.ks=~/broker.ks \
    --from-file=client.ts=~/broker.ts \
    --from-literal=keyStorePassword=<password> \
    --from-literal=trustStorePassword=<password>
    Note

    When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts. For two-way TLS between the broker and a client, you must generate a secret that includes the broker trust store, because this holds the client certificate. Therefore, in the preceding step, the value that you specify for the client.ts key is actually the broker trust store file.

  10. Link the secret to the service account that you created when installing the Operator. For example:

    $ oc secrets link sa/amq-broker-operator secret/my-tls-secret
  11. Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        sslEnabled: true
        sslSecret: my-tls-secret
        expose: true
        connectionsAllowed: 5
    ...

If clients are configured to use host verification, you must ensure that the Common Name (CN) of the broker’s certificate matches the host name that clients use to connect to the broker.

When a client tries to connect to a broker pod in your deployment, the verifyHost option in the client connection URL determines whether the client compares the Common Name (CN) of the broker’s certificate to its host name, to verify that they match. The client performs this verification if you specify verifyHost=true or similar in the client connection URL.

You might omit this verification in rare cases where you have no concerns about the security of the connection, for example, if the brokers are deployed on an OpenShift cluster in an isolated network. Otherwise, for a secure connection, it is advisable for a client to perform this verification. In this case, correct configuration of the broker key store certificate is essential to ensure successful client connections.

In general, when a client is using host verification, the CN that you specify when generating the broker certificate must match the full host name for the Route on the broker Pod that the client is connecting to. For example, if you have a deployment with a single broker pod, the CN might look like the following:

CN=my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain

To ensure that the CN can resolve to any broker pod in a deployment with multiple brokers, you can specify an asterisk (*) wildcard character in place of the ordinal of the broker pod. For example:

CN=my-broker-deployment-*-svc-rte-my-openshift-project.my-openshift-domain

The CN shown in the preceding example successfully resolves to any broker pod in the my-broker-deployment deployment.

In the Subject Alternative Name (SAN) that you specify when generating the broker certificate, you can specify a wildcard DNS name to match all of the brokers in the cluster. In the following example, an asterisk (*) wildcard character is specified in place of the ordinal of the broker pod.

"SAN=DNS:my-broker-deployment-*-svc-rte-my-openshift-project.my-openshift-domain"

If wildcard DNS names are not supported, you can include a comma-separated list of DNS names in the SAN field of the certificate for all of the broker pods in the cluster. For example:

"SAN=DNS:my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain,DNS:my-broker-deployment-1-svc-rte-my-openshift-project.my-openshift-domain,..."

Two network services, a headless service and a ping service, are automatically created by the Operator for your broker deployment.

On the Networking pane of the OpenShift Container Platform web console for your broker deployment, there are two running services: a headless service and a ping service. The default name of the headless service uses the format <custom_resource_name>-hdls-svc, for example, my-broker-deployment-hdls-svc. The default name of the ping service uses the format <custom_resource_name>-ping-svc, for example, my-broker-deployment-ping-svc.

The headless service provides access to port 61616, which is used for internal broker clustering.

The ping service is used by the brokers for discovery, and enables brokers to form a cluster within the OpenShift environment. Internally, this service exposes port 8888.

Internal clients run on the same OpenShift cluster as the broker, while external clients are external to the OpenShift cluster where the broker runs. The way in which clients connect to the broker differs between internal and external clients.

To connect an internal client to a broker, you specify the DNS resolvable name of the broker pod in the client connection details.

The following is an example connection for internal clients.

$ tcp://ex-aao-ss-0:<port>

If the internal client is using the Core protocol and the useTopologyForLoadBalancing=false key was not set in the connection URL, after the client connects to the broker for the first time, the broker can inform the client of the addresses of all the brokers in the cluster. The client can then load balance connections across all brokers.

If your brokers have durable subscription queues or request/reply queues, be aware of the caveats associated with using these queues when client connections are load balanced. For more information, see Section 4.12.4.4, “Caveats to load balancing client connections when you have durable subscription queues or reply/request queues”.

When you expose an acceptor on the broker, the Operator automatically creates a dedicated service and route for each broker pod in the deployment. External clients can connect to a broker by specifying the full host name of the route created for the broker pod.

You can use a basic curl command to test external access to the full host name of the route. For example:

$ curl https://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain

The full host name of the route for the broker pod must resolve to the node that is hosting the OpenShift router. The OpenShift router uses the host name to determine where to send the traffic inside the OpenShift internal network. By default, the OpenShift router listens to port 80 for non-secured (that is, non-SSL) traffic and port 443 for secured (that is, SSL-encrypted) traffic. For an HTTP connection, the router automatically directs traffic to port 443 if you specify a secure connection URL (that is, https), or to port 80 if you specify a non-secure connection URL (that is, http).

In each external client’s connection URL, specify the following:

  • The full host name of the route for each broker pod and the port number. The client attempts to connect to the first host name in the connection URL. However, if the first host name is unavailable, the client automatically connects to the next host name in the connection URL, and so on.
  • For one-way TLS, the path to the trust store and the corresponding password.
  • For two-way TLS, the key store and the corresponding password, also.
  • If the external client uses the Core protocol, set the useTopologyForLoadBalancing=false key in the client’s connection URL.

    Setting the useTopologyForLoadBalancing=false key prevents a client from using the AMQ Broker Pod DNS names that are in the cluster topology information provided by the broker. The Pod DNS names resolve to internal IP addresses, which an external client cannot access.

Some example client connection URLs, for supported messaging protocols, are shown below.

External Core client, using one-way TLS
tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true \
&trustStorePath=~/client.ts&trustStorePassword=<password>
Note

The useTopologyForLoadBalancing key is explicitly set to false in the connection URL because an external Core client cannot use topology information returned by the broker. If this key is set to true or you do not specify a value, it results in a DEBUG log message.

External Core client, using two-way TLS
tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true \
&keyStorePath=~/client.ks&keyStorePassword=<password> \
&trustStorePath=~/client.ts&trustStorePassword=<password>
External OpenWire client, using one-way TLS
ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443

# Also, specify the following JVM flags
-Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword=<password>
External OpenWire client, using two-way TLS
ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443

# Also, specify the following JVM flags
-Djavax.net.ssl.keyStore=~/client.ks -Djavax.net.ssl.keyStorePassword=<password> \
-Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword=<password>
External AMQP client, using one-way TLS
amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true \
&transport.trustStoreLocation=~/client.ts&transport.trustStorePassword=<password>
External AMQP client, using two-way TLS
amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true \
&transport.keyStoreLocation=~/client.ks&transport.keyStorePassword=<password> \
&transport.trustStoreLocation=~/client.ts&transport.trustStorePassword=<password>

As an alternative to exposing routes on the broker for external client connections, you can configure a NodePort on the broker for these connections.

The NodePort should map to one of the protocol-specific ports specified by the acceptors configured for the broker.

By default, NodePorts are in the range 30000 to 32767, which means that a NodePort typically does not match the intended port on the broker pod.

To connect from a client outside OpenShift to the broker via a NodePort, you specify a URL in the format <protocol>://<ocp_node_ip>:<node_port_number>.
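The following is a minimal sketch of a Service of type NodePort that exposes an AMQP acceptor port; the Service name, the selector labels, and the port numbers are assumptions that you must adapt to your deployment.

apiVersion: v1
kind: Service
metadata:
  name: my-broker-amqp-nodeport   # hypothetical Service name
spec:
  type: NodePort
  selector:
    ActiveMQArtemis: ex-aao       # assumption: match the labels that the Operator applies to your broker pods
  ports:
  - protocol: TCP
    port: 5672                    # protocol-specific acceptor port configured on the broker
    targetPort: 5672
    nodePort: 30672               # must be within the default NodePort range of 30000 to 32767

With such a Service in place, an external AMQP client could connect by using a URL of the form amqp://<ocp_node_ip>:30672.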

If client connections are load balanced, a client is not guaranteed to connect to the same broker every time it initiates a connection. This can lead to issues with features that require connection persistence, such as the use of durable subscriptions and request/reply queues.

Durable subscriptions

A durable subscription is represented as a queue on a broker and is created when a durable subscriber first connects to the broker. This queue exists and receives messages until the client unsubscribes. If the client reconnects to a different broker, another durable subscription queue is created on that broker. This can cause the following issues.

Table 4.6. Issues with durable subscriptions

Issue

Mitigation

Messages may get stranded in the original subscription queue.

Enable message distribution by setting the redistributionDelay property for an address or set of addresses. You can set this property under the brokerProperties attribute in the ActiveMQArtemis CR. For example:

addressSettings.<address>.redistributionDelay=5000

In the example, the broker waits 5000 milliseconds after a queue’s final consumer closes before it redistributes messages to other brokers.

For more information on message redistribution, see Enabling message redistribution.

Messages may be received in the wrong order as there is a window during message redistribution when other messages are still routed.

None.

When a client unsubscribes, it deletes the queue only on the broker it last connected to. This means that the other queues can still exist and receive messages.

To delete other empty queues that may exist for a client that unsubscribed, configure both of the following properties for an address or set of addresses. You can set these properties under the brokerProperties attribute in the ActiveMQArtemis CR.

addressSettings.<address>.autoDeleteQueuesMessageCount=0

addressSettings.<address>.autoDeleteQueuesDelay=5000

With the autoDeleteQueuesMessageCount property set to 0, a queue is deleted only if there are no messages in the queue. The value of the autoDeleteQueuesDelay property is the number of milliseconds after which a queue that has no messages is deleted.

For more information, see Configuring automatic creation and deletion of addresses and queues. A combined brokerProperties example is shown after this table.
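The following sketch shows how these mitigations might be combined under the brokerProperties attribute in the ActiveMQArtemis CR; the address name news is an assumption.

spec:
  ..
  brokerProperties:
  - addressSettings.news.redistributionDelay=5000
  - addressSettings.news.autoDeleteQueuesMessageCount=0
  - addressSettings.news.autoDeleteQueuesDelay=5000
  ..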

Request/Reply queues

When a JMS producer creates a temporary reply queue, the queue is created on the broker to which the producer is connected. If the client that is consuming from the work queue and replying to the temporary queue connects to a different broker, the following issues can occur.

Table 4.7. Issues with request/reply queues
Issue

Mitigation

Since the reply queue does not exist on the broker that the client is connected to, the client may generate an error.

Configure the broker to automatically create a queue when a client requests to connect to a queue that does not exist. To configure automatic queue creation, add the following property under the brokerProperties attribute in the ActiveMQArtemis CR.

addressSettings.<address>.autoCreateQueues=true

Messages sent to the work queue may not be distributed.

Enable load balancing on demand by adding the following property under the brokerProperties attribute in the ActiveMQArtemis CR:

clusterConfigurations.<cluster>.messageLoadBalancingType=ON_DEMAND

Also, enable message distribution by setting the redistributionDelay property for an address or set of addresses. You can set this property under the brokerProperties attribute in the ActiveMQArtemis CR. For example:

addressSettings.<address>.redistributionDelay=5000

For more information, see Enabling message redistribution. A combined brokerProperties example is shown after this table.
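The following sketch combines these mitigations under the brokerProperties attribute; the address name requests and the cluster name my-cluster are assumptions.

spec:
  ..
  brokerProperties:
  - addressSettings.requests.autoCreateQueues=true
  - addressSettings.requests.redistributionDelay=5000
  - clusterConfigurations.my-cluster.messageLoadBalancingType=ON_DEMAND
  ..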

4.12.5. Disabling FIPS mode

FIPS is a standard that specifies the security requirements for cryptographic modules used in applications. When AMQ Broker runs on a FIPS-enabled RHEL system, the broker runs in FIPS mode by default. If you need the broker to accept connections from clients that use non-FIPS compliant algorithms, you can disable FIPS mode on the broker.

Procedure

  1. Edit the ActiveMQArtemis custom resource (CR) for your broker deployment.
  2. Add an environment variable to set a Java system property that disables FIPS mode on the broker. For example:

    spec:
      ...
      env:
      - name: JDK_JAVA_OPTIONS
        value: -Dcom.redhat.fips=false
      ..
  3. Save the ActiveMQArtemis CR.

Internal connections between brokers in a cluster use an internal connector and acceptor, both of which are named artemis. You can secure connections between brokers in a cluster by configuring Transport Layer Security (TLS) for the artemis connector and acceptor.

On the acceptor, you specify a secret that contains a common TLS certificate for all the brokers in the cluster. On the connector, you specify a truststore that contains the public key of the TLS certificate. The public key is required in each broker’s truststore so a broker can trust the other brokers in the cluster when they establish a TLS connection.

The following example procedure describes how to secure the internal connections between the brokers in a cluster by using a self-signed certificate.

Procedure

  1. Generate a self-signed TLS certificate and add it to a keystore file.

    • In the Subject Alternative Name (SAN) field of the certificate, specify a wildcard DNS name to match all of the brokers in the cluster, as shown in the following example. The example is based on using a CR named ex-aao that is deployed in a test namespace.

      $ keytool -storetype jks -keystore server-keystore.jks -storepass artemis -keypass artemis -alias server -genkey -keyalg "RSA" -keysize 2048 -dname "CN=AMQ Server, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -validity 365 -ext bc=ca:false -ext eku=sA -ext san=dns:*.ex-aao-hdls-svc.test.svc.cluster.local
    • If the certificate does not support the use of wildcard DNS names, you can include a comma-separated list of DNS names in the SAN field of the certificate for all of the broker pods in the cluster. For example:

      keytool -storetype jks -keystore server-keystore.jks -storepass artemis -keypass artemis -alias server -genkey -keyalg "RSA" -keysize 2048 -dname "CN=AMQ Server, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -validity 365 -ext bc=ca:false -ext eku=sA -ext san=dns:ex-aao-ss-0.ex-aao-hdls-svc.test.svc.cluster.local,dns:ex-aao-ss-1.ex-aao-hdls-svc.test.svc.cluster.local
    • If the TLS certificate does not support the use of DNS names, you must disable host verification in the ActiveMQArtemis CR, as described below.
  2. Export the public key of the TLS certificate from the keystore file so that it can be imported into a truststore file. For example:

    $ keytool -storetype jks -keystore server-keystore.jks -storepass artemis -alias server -exportcert -rfc > server.crt
  3. Import the public key of the TLS certificate into a truststore file so the other brokers in the clusters can trust the certificate. For example:

    $ keytool -storetype jks -keystore server-truststore.jks -storepass artemis -keypass artemis -importcert -alias server -file server.crt -noprompt
  4. Create a secret to store the keystore and truststore files and their associated passwords. For example:

    oc create secret generic artemis-ssl-secret --namespace test --from-file=broker.ks=server-keystore.jks --from-file=client.ts=server-truststore.jks --from-literal=keyStorePassword=artemis --from-literal=trustStorePassword=artemis
  5. Edit the ActiveMQArtemis CR for your broker deployment and add an internal acceptor named artemis. In the artemis acceptor, set the sslEnabled attribute to true and specify the name of the secret that you created in the sslSecret attribute. For example:

    spec:
      ..
      deploymentPlan:
        size: 2
      acceptors:
      - name: artemis
        port: 61616
        sslEnabled: true
        sslSecret: artemis-ssl-secret
      ..
  6. Enable SSL for the artemis connector, which is used by each broker in the cluster to connect to other brokers in the cluster. Use the brokerProperties attribute to enable SSL and specify the path and credentials of the truststore file that contains the public key of the TLS certificate.

    spec:
      ..
      deploymentPlan:
        size: 2
      acceptors:
      - name: artemis
        port: 61616
        sslEnabled: true
        sslSecret: artemis-ssl-secret
      brokerProperties:
      - 'connectorConfigurations.artemis.params.sslEnabled=true'
      - 'connectorConfigurations.artemis.params.trustStorePath=/etc/artemis-ssl-secret-volume/client.ts'
      - 'connectorConfigurations.artemis.params.trustStorePassword=artemis'
      ..
    connectorConfigurations.artemis.params.trustStorePath
    This value must match the location of the truststore file, client.ts, on the broker pods. The truststore file and the accompanying password file in the secret are mounted in a /etc/<secret name>-volume directory on each broker pod. The previous example specifies the location of a truststore that is in a secret named artemis-ssl-secret.
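    Optionally, you can confirm that the files in the secret are mounted as expected by listing the directory on a broker pod. The pod name ex-aao-ss-0 is an assumption based on the example CR name; the output lists the keys of the secret as files.

    $ oc exec ex-aao-ss-0 -- ls /etc/artemis-ssl-secret-volume
    broker.ks
    client.ts
    keyStorePassword
    trustStorePassword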
  7. If the TLS certificate does not support the use of DNS names, use the brokerProperties attribute to disable host verification. For example:

    spec:
      ..
      brokerProperties:
      ..
      - 'connectorConfigurations.artemis.params.verifyHost=false'
      ..
  8. Save the CR.

It is possible for clients to send large AMQP messages that exceed the size of the broker’s internal buffer, causing unexpected errors. To avoid this risk, you can configure the broker to store messages as files, instead of in memory, when the message size exceeds a specified minimum value.

For a broker deployment on OpenShift Container Platform, the large messages directory is /opt/<custom_resource_name>/data/large-messages on the Persistent Volume (PV) used by the broker for message storage. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory.

Note

You can configure the large message size limit in the broker configuration for the AMQP protocol only. For the Core Protocol and OpenWire protocols, you can configure large message size limits in the client connection configuration. For more information, see the Red Hat AMQ Clients documentation.

You can configure the broker to store messages as files, instead of in memory, when the message size exceeds a minimum threshold. Any message that exceeds the configured threshold is stored in a dedicated directory specifically for large message files.

Prerequisites

Procedure

  1. Open the Custom Resource (CR) instance in which you previously defined an AMQP acceptor.

    1. Using the OpenShift command-line interface:

      $ oc edit -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift Container Platform web console:

      1. In the left navigation menu, click menu:Administration[Custom Resource Definitions]
      2. Click the ActiveMQArtemis CRD.
      3. Click the Instances tab.
      4. Locate the CR instance that corresponds to your project namespace.

    A previously-configured AMQP acceptor might resemble the following:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
    ...
  2. Specify the minimum size, in bytes, of an AMQP message that the broker handles as a large message. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
        amqpMinLargeMessageSize: 204800
        ...
    ...

    In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize, if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message.

    The broker stores the message in the large messages directory (/opt/<custom_resource_name>/data/large-messages, by default) on the persistent volume (PV) used by the broker for message storage.

    If you do not explicitly specify a value for the amqpMinLargeMessageSize property, the broker uses a default value of 102400 (that is, 100 kilobytes).

    If you set amqpMinLargeMessageSize to a value of -1, large message handling for AMQP messages is disabled.

4.15. Configuring broker health checks

You can configure readiness, liveness, and startup probes to monitor the health and operational status of your broker containers.

The following summary describes the check performed by each probe.

  • A startup probe indicates whether the application within a container is started.
  • A liveness probe determines if a container is still running.
  • A readiness probe determines if a container is ready to accept service requests.

If a startup probe or a liveness probe check of a Pod fails, the probe restarts the Pod.

AMQ Broker includes default readiness and liveness probes. The default liveness probe checks if the broker is running by pinging the broker’s HTTP port. The default readiness probe checks if the broker can accept network traffic by opening a connection to each of the acceptor ports configured for the broker.

A limitation of using the default liveness and readiness probes is that they are unable to identify underlying issues, for example, issues with the broker’s file system. You can create custom liveness and readiness probes that use the broker’s command-line utility, artemis, to run more comprehensive health checks.

AMQ Broker does not include a default startup probe. You can configure a startup probe in the ActiveMQArtemis Custom Resource (CR).

4.15.1. Configuring a startup probe

You can configure a startup probe to check if the AMQ Broker application within the broker container has started.

Procedure

  1. Edit the CR instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Edit the CR for your deployment.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click the instance for your broker deployment.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to edit the CR instance.

  2. In the deploymentPlan section of the CR, add a startupProbe section. For example:

    spec:
      deploymentPlan:
        startupProbe:
          exec:
            command:
              - /bin/bash
              - '-c'
              - '/opt/amq/bin/artemis check node --up --url tcp://$HOSTNAME:61616'
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 30
    command
    The startup probe command to run within the container. In the example, the startup probe uses the artemis check node command to verify that AMQ Broker has started in the container for a broker Pod. The command is passed to bash -c as a single string so that the $HOSTNAME environment variable is expanded inside the container.
    initialDelaySeconds
    The delay, in seconds, before the probe runs after the container starts. The default is 0.
    periodSeconds
    The interval, in seconds, at which the probe runs. The default is 10.
    timeoutSeconds
    Time, in seconds, that the startup probe command waits for a reply from the broker. If a response to the command is not received, the command is terminated. The default value is 1.
    failureThreshold

    The minimum consecutive failures, including timeouts, of the startup probe after which the probe is deemed to have failed. When the probe is deemed to have failed, it restarts the Pod. The default value is 3.

    Depending on the resources of the cluster and the size of the broker journal, you might need to increase the failure threshold to allow the broker sufficient time to start and pass the probe check. Otherwise, the broker enters a loop condition whereby the failure threshold is reached repeatedly and the broker is restarted each time by the startup probe. For example, if you set the failureThreshold to 30 and the probe runs at the default interval of 10 seconds, the broker has 300 seconds to start and pass the probe check.

  3. Save the CR.

4.15.2. Configuring liveness and readiness probes

You can configure a liveness probe to check if a container is running and a readiness probe to determine if a container is ready to accept service requests.

Prerequisites

Procedure

  1. Edit the CR instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Edit the CR for your deployment.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click the instance for your broker deployment.
      6. Click the YAML tab.
  2. To configure a liveness probe, in the deploymentPlan section of the CR, add a livenessProbe section. For example:

    spec:
      deploymentPlan:
        livenessProbe:
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 30
    initialDelaySeconds

    The delay, in seconds, before the probe runs after the container starts. The default is 5.

    Note

    If the deployment also has a startup probe configured, you can set the delay to 0 for both a liveness and a readiness probe. Both of these probes run only after the startup probe has passed. If the startup probe has already passed, it confirms that the broker has started successfully, so a delay in running the liveness and readiness probes is not required.

    periodSeconds
    The interval, in seconds, at which the probe runs. The default is 5.
    failureThreshold

    The minimum consecutive failures, including timeouts, of the liveness probe that signify the probe has failed. When the probe fails, it restarts the Pod. The default value is 3.

    If your deployment does not have a startup probe configured, which verifies that the broker application is started before the liveness probe runs, you might need to increase the failure threshold to allow the broker sufficient time to start and pass the liveness probe check. Otherwise, the broker can enter a loop condition whereby the failure threshold is reached repeatedly and the broker Pod is restarted each time by the liveness probe.

    The time required by the broker to start and pass a liveness probe check depends on the resources of the cluster and the size of the broker journal. For example, if you set the failureThreshold to 30 and the probe runs at the default interval of 5 seconds, the broker has 150 seconds to start and pass the liveness probe check.

    Note

    If you do not configure a liveness probe or if the handler is missing from a configured probe, the AMQ Broker Operator creates a default TCP probe that has the following configuration. The default TCP probe attempts to open a socket to the broker container on the specified port.

    spec:
      deploymentPlan:
        livenessProbe:
          tcpSocket:
            port: 8181
          initialDelaySeconds: 30
          timeoutSeconds: 5
  3. To configure a readiness probe, in the deploymentPlan section of the CR, add a readinessProbe section. For example:

    spec:
      deploymentPlan:
        readinessProbe:
          initialDelaySeconds: 5
          periodSeconds: 5

    If you don’t configure a readiness probe, a built-in script checks if all acceptors can accept connections.

  4. If you want to configure more comprehensive health checks, add the artemis check command-line utility to the liveness or readiness probe configuration.

    1. If you want to configure a health check that creates a full client connection to the broker, in the livenessProbe or readinessProbe section, add an exec section. In the exec section, add a command section. In the command section, add the artemis check node command syntax. For example:

      spec:
        deploymentPlan:
          readinessProbe:
            exec:
              command:
                - bash
                - '-c'
                - /home/jboss/amq-broker/bin/artemis check node --silent --acceptor <acceptor name> --user $AMQ_USER --password $AMQ_PASSWORD
            initialDelaySeconds: 30
            timeoutSeconds: 5

      By default, the artemis check node command uses the URI of an acceptor called artemis. If the broker has an acceptor called artemis, you can exclude the --acceptor <acceptor name> option from the command.

      Note

      $AMQ_USER and $AMQ_PASSWORD are environment variables that are configured by the AMQ Operator. Because the command is passed to bash -c as a single string, these variables are expanded when the probe runs.

    2. If you want to configure a health check that produces and consumes messages, which also validates the health of the broker’s file system, in the livenessProbe or readinessProbe section, add an exec section. In the exec section, add a command section. In the command section, add the artemis check queue command syntax. For example:

      spec:
        deploymentPlan:
          readinessProbe:
            exec:
              command:
                - bash
                - '-c'
                - /home/jboss/amq-broker/bin/artemis check queue --name livenessqueue --produce 1 --consume 1 --silent --user $AMQ_USER --password $AMQ_PASSWORD
            initialDelaySeconds: 30
            timeoutSeconds: 5
      Note

      The queue name that you specify must be configured on the broker and have a routingType of anycast. For example:

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemisAddress
      metadata:
        name: livenessqueue
        namespace: activemq-artemis-operator
      spec:
        addressName: livenessqueue
        queueConfiguration:
          purgeOnNoConsumers: false
          maxConsumers: -1
          durable: true
          enabled: true
        queueName: livenessqueue
        routingType: anycast
  5. Save the CR.

If you want to have the capability to scale down the number of brokers in a cluster and migrate messages to a remaining broker, you must enable message migration.

When you scale down a cluster that has message migration enabled, a scaledown controller manages the message migration process.

4.16.1. Steps in message migration process

If you scale down the number of brokers in a cluster that has message migration enabled, a temporary drainer pod migrates messages to another broker in the cluster.

The message migration process follows these steps.

  1. When a broker pod in the deployment shuts down due to an intentional scale down of the deployment, the Operator automatically deploys a scaledown custom resource to prepare for message migration.
  2. To check for Persistent Volumes (PVs) that have been orphaned, the scaledown controller looks at the ordinal on the volume claim. The controller compares the ordinal on the volume claim to that of the broker pods that are still running in the StatefulSet (that is, the broker cluster) in the project.

    If the ordinal on the volume claim is higher than the ordinal on any of the broker Pods still running in the broker cluster, the scaledown controller determines that the broker Pod at that ordinal has been shut down and that messaging data must be migrated to another broker Pod.

  3. The scaledown controller starts a drainer pod. The drainer pod connects to one of the other live broker pods in the cluster and migrates messages to that live broker pod.

The following figure illustrates how the scaledown controller (also known as a drain controller) migrates messages to a running broker pod.

Figure 4.1. Message migration using the scaledown controller


After the messages are migrated successfully to an operational broker pod, the drainer pod shuts down and the scaledown controller removes the PVC for the orphaned PV. The PV is returned to a "Released" state.

Note

If the reclaim policy for the PV is set to Retain, the PV cannot be used by another pod until you delete and recreate the PV. For example, if you scale the cluster back up after scaling it down, a newly started broker pod cannot use the PV until you delete and recreate it.
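To check the reclaim policy and current status of a PV after a scaledown, you can list the PVs in the cluster. The following output is illustrative only; names, sizes, and claims depend on your environment.

$ oc get pv

NAME     CAPACITY   RECLAIM POLICY   STATUS     CLAIM
pv0001   2Gi        Retain           Released   test/<pvc_name>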

4.16.2. Enabling message migration

You must enable message migration if you want to migrate messages from a broker pod that will be removed during a scale down operation. The messages are then moved to a remaining broker pod in the cluster.

Prerequisites

Note
  • A scaledown controller operates only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects.
  • If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker pod to which messaging data can be migrated. However, if you scale a deployment down to zero and then back up to a size that is smaller than the original deployment, drainer pods are started for the brokers that remain shut down.

Procedure

  1. Edit the CR instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Edit the CR for your deployment.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click the instance for your broker deployment.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to edit the CR instance.

  2. In the deploymentPlan section of the CR, add a messageMigration attribute and set it to true. If not already configured, add a persistenceEnabled attribute and also set it to true. For example:

    spec:
      deploymentPlan:
        messageMigration: true
        persistenceEnabled: true
      ...

    These settings mean that when you later scale down the size of your clustered broker deployment, the Operator automatically starts a scaledown controller and migrates messages to a broker pod that is still running.

  3. Save the CR.
  4. (Optional) Complete the following steps to scale down the cluster and view the message migration process.

    1. In your existing broker deployment, verify which pods are running.

      $ oc get pods

      You see output that looks like the following.

      activemq-artemis-operator-8566d9bf58-9g25l   1/1   Running   0   3m38s
      ex-aao-ss-0                                  1/1   Running   0   112s
      ex-aao-ss-1                                  1/1   Running   0   8s

      The preceding output shows that there are three pods running: one for the broker Operator itself, and a separate pod for each broker in the deployment.

    2. Log into each pod and send some messages to each broker.

      1. Supposing that pod ex-aao-ss-0 has a cluster IP address of 172.17.0.6, run the following command:

        $ /opt/amq/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin
      2. Supposing that pod ex-aao-ss-1 has a cluster IP address of 172.17.0.7, run the following command:

        $ /opt/amq/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin

        The preceding commands create a queue called TEST on each broker and add 1000 messages to each queue.

    3. Scale the cluster down from two brokers to one.

      1. Open the main broker CR, broker_activemqartemis_cr.yaml.
      2. In the CR, set deploymentPlan.size to 1.
      3. At the command line, apply the change:

        $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml

        You see that the pod ex-aao-ss-1 starts to shut down. The scaledown controller starts a new drainer pod of the same name. This drainer pod also shuts down after it migrates all messages from broker pod ex-aao-ss-1 to the other broker pod in the cluster, ex-aao-ss-0.

    4. When the drainer pod is shut down, check the message count on the TEST queue of broker pod ex-aao-ss-0, as shown in the example that follows. You see that the number of messages in the queue is 2000, indicating that the drainer pod successfully migrated 1000 messages from the broker pod that shut down.
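      One way to check the message count is to run the artemis queue stat command on the remaining broker pod. The pod name, cluster IP address, and admin credentials shown are assumptions based on the earlier steps in this procedure.

      $ oc exec ex-aao-ss-0 -- /opt/amq/bin/artemis queue stat --url tcp://172.17.0.6:61616 --user admin --password admin --queueName TEST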

You can control the placement of AMQ Broker pods on OpenShift Container Platform nodes to, for example, provide greater resilience to node failures.

The control of pod placement can be implemented by using node selectors, tolerations, or affinity and anti-affinity rules.

Node selectors
A node selector allows you to schedule a broker pod on a specific node.
Tolerations
A toleration enables a broker pod to be scheduled on a node if the toleration matches a taint configured for the node. Without a matching pod toleration, a taint allows a node to refuse to accept a pod.
Affinity and anti-affinity
Node affinity rules control which nodes a pod can be scheduled on based on the node’s labels. Pod affinity and anti-affinity rules control which nodes a pod can be scheduled on based on the pods already running on that node.

In a node selector, you configure a key-value pair that OpenShift uses to schedule the broker pod on a node that has a matching key-value pair.

The following example shows how to configure a node selector to schedule a broker pod on a specific node.

Prerequisites

Procedure

  1. Create a Custom Resource (CR) instance based on the main broker CRD.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the deploymentPlan section of the CR, add a nodeSelector section and add the node label that you want to match to select a node for the pod. For example:

    spec:
        deploymentPlan:
          nodeSelector:
            app: broker1

    In this example, the broker pod is scheduled on a node that has an app: broker1 label.
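    For the selector to match, the target node must carry that label. For example, you can apply the label to a node by using the following command, where the node name is a placeholder:

    $ oc label node <node_name> app=broker1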

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.

You can use taints and tolerations to control whether broker pods can or cannot be scheduled on specific nodes. A taint allows a node to refuse to schedule a pod unless the pod has a matching toleration.

You can use taints to exclude pods from a node so the node is reserved for specific pods, such as broker pods, that have a matching toleration.

Having a matching toleration permits a broker pod to be scheduled on a node but does not guarantee that the pod is scheduled on that node. To guarantee that the broker pod is scheduled on the node that has a taint configured, you can configure affinity rules. For more information, see Section 4.17.3, “Controlling pod placement by using affinity and anti-affinity rules”.

The following example shows how to configure a toleration to match a taint that is configured on a node.

Prerequisites

  • You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.3.1, “Deploying a single broker instance”.
  • Apply a taint to the nodes that you want to reserve for scheduling broker pods, for example by using the oc adm taint command shown after this list. A taint consists of a key, a value, and an effect. The taint effect determines whether:

    • existing pods on the node are evicted
    • existing pods are allowed to remain on the node but new pods cannot be scheduled unless they have a matching toleration
    • new pods can be scheduled on the node if necessary, but preference is to not schedule new pods on the node.
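
    For example, the following command applies a taint with a NoSchedule effect to a node; the node name is a placeholder, and the key-value pair matches the toleration configured later in this procedure:

    $ oc adm taint nodes <node_name> app=amq-broker:NoSchedule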

Procedure

  1. Create a Custom Resource (CR) instance based on the main broker CRD.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the deploymentPlan section of the CR, add a tolerations section. In the tolerations section, add a toleration for the node taint that you want to match. For example:

    spec:
         deploymentPlan:
            tolerations:
            - key: "app"
              value: "amq-broker"
              effect: "NoSchedule"

    In this example, the toleration matches a node taint of app=amq-broker:NoSchedule, so the pod can be scheduled on a node that has this taint configured.

    Note

    To ensure that the broker pods are scheduled correctly, do not specify a tolerationSeconds attribute in the tolerations section of the CR.

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.

You can control the placement of broker pods on nodes by using node affinity, pod affinity, or pod anti-affinity rules.

A broker pod can be scheduled on any node that has a label matching the key-value pair specified in an affinity rule.

The following example shows how to configure a broker to control pod placement by using node affinity rules.

Prerequisites

  • You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.3.1, “Deploying a single broker instance”.
  • Assign a common label to the nodes in your OpenShift Container Platform cluster that can schedule the broker pod, for example, zone: emea.

Procedure

  1. Create a Custom Resource (CR) instance based on the main broker CRD.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the deploymentPlan section of the CR, add the following sections: affinity, nodeAffinity, requiredDuringSchedulingIgnoredDuringExecution, and nodeSelectorTerms. In the nodeSelectorTerms section, add the - matchExpressions parameter and specify the key-value string of a node label to match. For example:

    spec:
      deploymentPlan:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: zone
                  operator: In
                  values:
                  - emea

    In this example, the affinity rule allows the pod to be scheduled on any node that has a label with a key of zone and a value of emea.

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.

Anti-affinity rules allow you to restrict the OpenShift node on which a broker pod can be scheduled, based on the labels of pods already running on that node. You can use anti-affinity rules to ensure that multiple broker pods are not scheduled on the same OpenShift node.

Prerequisites

Procedure

  1. Edit the ActiveMQArtemis CR instance for your first broker deployment.
  2. In the deploymentPlan section of the CR, add a labels section. Create an identifying label for the broker so that you can create an anti-affinity rule based on the label in your second deployment. For example:

    spec:
      ...
        deploymentPlan:
          labels:
            name: broker1
  3. Save the CR.
  4. Edit the ActiveMQArtemis CR instance for your second broker deployment.
  5. In the deploymentPlan section of the CR, add the following sections: affinity, podAntiAffinity, requiredDuringSchedulingIgnoredDuringExecution, and labelSelector. In the labelSelector section, add the matchExpressions parameter and specify the key-value string of the label to match. A pod in this deployment cannot be scheduled on a node that contains a pod with the matching label.

    spec:
      deploymentPlan:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: name
                      operator: In
                      values:
                        - broker1
                topologyKey: topology.kubernetes.io/zone

    In this example, the pod anti-affinity rule prevents a pod from being placed in the same topology domain, which is determined by the topologyKey, as a pod that has a label with a key of name and a value of broker1, which is the label assigned to the broker in your first deployment. With a topologyKey of topology.kubernetes.io/zone, the pods cannot share a zone; to restrict the rule to individual nodes, use a topologyKey of kubernetes.io/hostname.

  6. Save the CR.

4.18. Configuring logging for brokers

AMQ Broker uses the Log4j 2 logging utility to provide message logging. You can override the default Log4j 2 configuration by creating a new configuration file stored in a secret or configMap. The Operator mounts this file to configure logging for each broker pod.

Prerequisite

  • You are familiar with the Log4j 2 configuration options.

Procedure

  1. Prepare a file that contains the Log4j 2 configuration that you want to use with AMQ Broker.

    The default Log4j 2 configuration file that is used by a broker is located in the /home/jboss/amq-broker/etc/log4j2.properties file on each broker pod. You can use the contents of the default configuration file as the basis for creating a new Log4j 2 configuration in a secret or configMap. To get the contents of the default Log4j 2 configuration file, complete the following steps.

    1. Using the OpenShift Container Platform web console:

      1. Click menu:Workloads[Pods].
      2. Click a broker Pod, for example, ex-aao-ss-0.
      3. Click the Terminal tab.
      4. Use the cat command to display the contents of the /home/jboss/amq-broker/etc/log4j2.properties file on a broker pod and copy the contents.
      5. Paste the contents into a local file, where the OpenShift Container Platform CLI is installed, and save the file as logging.properties.
    2. Using the OpenShift command-line interface:

      1. Get the name of a pod in your deployment.

        $ oc get pods -o wide
        
        NAME                          STATUS   IP
        amq-broker-operator-54d996c   Running  10.129.2.14
        ex-aao-ss-0                   Running  10.129.2.15
      2. Use the oc cp command to copy the log configuration file from a pod to your local directory.

        $ oc cp <pod name>:/home/jboss/amq-broker/etc/log4j2.properties logging.properties -c <name>-container

        Where the <name> part of the container name is the prefix before the -ss string in the pod name. For example:

        $ oc cp ex-aao-ss-0:/home/jboss/amq-broker/etc/log4j2.properties logging.properties -c ex-aao-container
        Note

        When you create a configMap or secret from a file, the key in the configMap or secret defaults to the file name and the value defaults to the file content. By creating a secret from a file named logging.properties, the required key for the new logging configuration is inserted in the secret or configMap.

  2. Edit the logging.properties file and create the Log4j 2 configuration that you want to use with AMQ Broker.

    For example, with the default configuration, AMQ Broker logs messages to the console only. You might want to update the configuration so that AMQ Broker logs messages to disk also.
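    The following sketch shows one way to add a rolling file appender and reference it from the root logger. The appender name, file locations, and rollover policy are assumptions that you can adjust; the console appender is assumed to be defined in the default configuration that you copied.

    # Sketch: log to a rotating file in addition to the console
    appender.rolling_file.type = RollingFile
    appender.rolling_file.name = rolling_file
    appender.rolling_file.fileName = ${sys:artemis.instance}/log/artemis.log
    appender.rolling_file.filePattern = ${sys:artemis.instance}/log/artemis.log.%d{yyyy-MM-dd}
    appender.rolling_file.layout.type = PatternLayout
    appender.rolling_file.layout.pattern = %d %-5level [%logger] %msg%n
    appender.rolling_file.policies.type = Policies
    appender.rolling_file.policies.time.type = TimeBasedTriggeringPolicy

    # Keep the existing console appender and add the new file appender to the root logger
    rootLogger = INFO, console, rolling_file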

  3. Add the updated Log4j 2 configuration to a secret or a ConfigMap.

    1. Log in to OpenShift as a user that has privileges to create secrets or ConfigMaps in the project for the broker deployment.

      oc login -u <user> -p <password> --server=<host:port>
    2. If you want to configure the log settings in a secret, use the oc create secret command. For example:

      oc create secret generic newlog4j-logging-config --from-file=logging.properties
    3. If you want to configure the log settings in a ConfigMap, use the oc create configmap command. For example:

      oc create configmap newlog4j-logging-config --from-file=logging.properties

      The configMap or secret name must have a suffix of -logging-config, so the Operator can recognize that the secret or configMap contains a new logging configuration.

  4. Add the secret or ConfigMap to the Custom Resource (CR) instance for your broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Edit the CR.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Operators[Installed Operators].
      3. Click the Red Hat Integration - AMQ Broker for RHEL 9 (Multiarch) operator.
      4. Click the AMQ Broker tab.
      5. Click the name of the ActiveMQArtemis instance.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    3. Add the secret or configMap that contains the Log4j 2 logging configuration to the CR. The following examples show a secret and a configMap added to the CR.

      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao
      spec:
        deploymentPlan:
          ...
          extraMounts:
            secrets:
            - "newlog4j-logging-config"
          ...
      apiVersion: broker.amq.io/v1beta1
      kind: ActiveMQArtemis
      metadata:
        name: ex-aao
      spec:
        deploymentPlan:
          ...
          extraMounts:
            configMaps:
            - "newlog4j-logging-config"
          ...
  5. Save the CR.

    In each broker pod, the Operator mounts a logging.properties file that contains the logging configuration in the secret or configMap that you created. In addition, the Operator configures each broker to use the mounted log configuration file instead of the default log configuration file.

    Note

    If you update the logging configuration in a configMap or secret, each broker automatically uses the updated logging configuration.

4.19. Configuring a pod disruption budget

A pod disruption budget specifies the minimum number of pods in a cluster that must be available simultaneously during a voluntary disruption, such as a maintenance window.

Procedure

  1. Edit the CR instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Edit the CR for your deployment.

         oc edit ActiveMQArtemis <CR instance name> -n <namespace>
    2. Using the OpenShift Container Platform web console:

      1. Log in to OpenShift Container Platform as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. In the left pane, click menu:Administration[Custom Resource Definitions].
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click the instance for your broker deployment.
      6. Click the YAML tab.

        Within the console, a YAML editor opens, enabling you to edit the CR instance.

  2. In the spec section of the CR, add a podDisruptionBudget element and specify the minimum number of pods in your deployment that must be available during a voluntary disruption. In the following example, a minimum of one pod must be available:

    spec:
      ...
      podDisruptionBudget:
        minAvailable: 1
      ...
  3. Save the CR.

Role-based access control (RBAC) is used to restrict access to the attributes and methods of MBeans. MBeans are the way the management API is exposed by AMQ Broker to support management operations.

Previously, you could restrict access to MBeans by setting the RBAC configuration in the ActiveMQArtemisSecurity custom resource (CR) and restarting the broker for the changes to take effect. Starting in 7.12, you can restrict access to MBeans in the ActiveMQArtemis CR and a broker restart is not required for the changes to take effect.

Procedure

  1. Edit the ActiveMQArtemis CR instance for your broker deployment.
  2. Add the following environment variable to configure the broker to use the RBAC configuration that you specify in the ActiveMQArtemis CR.

    spec:
      ..
      env:
      - name: JAVA_ARGS_APPEND
        value: "-Dhawtio.roles=* -Djavax.management.builder.initial=org.apache.activemq.artemis.core.server.management.ArtemisRbacMBeanServerBuilder"
      ..
  3. In the brokerProperties attribute, add the role based access control configuration for management operations.

    The format of the match addresses for management operations is:

    mops.<resource type>.<resource name>.<operation>

    For example, the following configuration grants a manager role view and edit permission to an activemq.management address. The asterisk (*) in the operation position grants access to all operations.

    spec:
      ..
      brokerProperties:
      - securityRoles."mops.address.activemq.management.*".manager.view=true
      - securityRoles."mops.address.activemq.management.*".manager.edit=true

    In the following example, the number sign (#) after the mops prefix grants the amq role view and edit permissions to all MBeans.

    spec:
      ..
      brokerProperties:
      - securityRoles."mops.#".amq.view=true
      - securityRoles."mops.#".amq.edit=true
      ..
  4. Use the resourceTemplates attribute to define an init container that runs a script to remove the default RBAC configuration in the /amq/init/config/amq-broker/etc/management.xml file in each broker container, as shown in the following example. You must remove the default RBAC configuration so the broker uses the new RBAC configuration that you created in the ActiveMQArtemis CR.

    spec:
      ..
      resourceTemplates:
      - selector:
          kind: "StatefulSet"
        patch:
          kind: "StatefulSet"
          spec:
            template:
              spec:
                initContainers:
                - name: "<BROKER_NAME>-container-init"
                  args:
                  - '-c'
                  - '/opt/amq/bin/launch.sh && /opt/amq-broker/script/default.sh; echo "Empty management.xml";echo "<management-context xmlns=\"http://activemq.apache.org/schema\" />" > /amq/init/config/amq-broker/etc/management.xml'

    Replace <BROKER_NAME> with the value of the metadata.name attribute in your CR instance.

  5. Save the CR.

An AMQ Broker deployment creates OpenShift resources, such as Deployments, Pods, StatefulSets, and Services, which are managed by the AMQ Broker Operator. You can customize these Operator-managed OpenShift resources.

Customizing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as:

  • Adding custom annotations that control how resources are treated by other services.
  • Modifying attributes that are not exposed in the broker custom resource.

You can use the resourceTemplates attribute to customize resources created by the AMQ Broker Operator. If you want to add an annotation or label to a resource, configure the resourceTemplates attribute to include the annotations or labels attribute. In the following example, the annotations attribute is used to add an annotation to all the services managed by the Operator.

spec:
  ..
  resourceTemplates:
   - selector:
       kind: "Service"
     annotations:
       name: "amq-operator-managed"
  ..
Note

The selector attribute determines which Operator-managed resources are customized. For example, a selector value of kind: "Service" customizes all service resources. If the selector attribute is empty, changes are applied to all Operator-managed resources.

If you want to customize items other than annotations or labels for resources, you must use the patch attribute with the resourceTemplates attribute. When you specify the patch attribute, the Operator uses a strategic merge to update resources.

Note

If you use the patch attribute, you must populate the selector attribute to identify specific resources to update.

In the following example, the patch attribute is used to change the default value of the minReadySeconds property in the StatefulSet resource.

spec:
  ..
  resourceTemplates:
  - selector:
      kind: "StatefulSet"
    patch:
      kind: "StatefulSet"
      spec:
        template:
          spec:
            minReadySeconds: 10
  ..

4.22. Registering plugins with AMQ Broker

You can extend the functionality of AMQ Broker by registering plugins in the brokerProperties attribute in the CR.

Procedure

  1. Edit the custom resource (CR) for your broker deployment.
  2. In the brokerProperties attribute, specify the class name of the plugin and include a comma-separated string of <key>=<value> pairs that define the properties for the plugin.

    In the following example, the LoggingActiveMQServerPlugin plugin, which is provided with AMQ Broker, is registered.

    spec:
      ...
      brokerProperties:
      - brokerPlugins.\"org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin.class\".init=LOG_CONNECTION_EVENTS=true,LOG_SESSION_EVENTS=true,LOG_CONSUMER_EVENTS=true
      ...
  3. Save the CR.

    After an instance of the plugin is created, the init method is passed a string that contains the <key>=<value> pairs, which are used to set properties for the plugin.

    Note

    If you create a custom plugin, ensure that the JAR files for the plugin class are in the Java classpath of the broker. For more information, see Section 4.4, “Adding third-party JAR files”.

If the broker custom resource (CR) is at risk of reaching the 1 MB size limit, you can move brokerProperties entries into Java properties files or JSON files. Store these files in an OpenShift secret that is referenced by the CR.

Individual Java properties files and JSON files are each limited to 1 MB in size, which is the size limit for secrets in OpenShift.

Procedure

  1. Create a file that contains the brokerProperties configuration that you want to apply to the broker. You can add the properties in text or JSON format.

    Text format

    In text format, you add each property in a separate line in the properties file. For example:

    addressConfigurations.usa-news.queueConfigs.usa-news.routingType=ANYCAST
    addressConfigurations.usa-news.queueConfigs.usa-news.purgeOnNoConsumers=true
    addressConfigurations.usa-news.queueConfigs.usa-news.maxConsumers=30
    JSON format

    JSON format is more compact than text format for writing properties. Therefore, you can store more properties in an individual file without exceeding the 1 MB size limit. The following is the JSON equivalent of the previous text format example:

    {
        "addressConfigurations": {
            "usa-news": {
                "routingTypes": "ANYCAST",
                "queueConfigs": {
                    "usa-news": {
                        "routingType": "ANYCAST",
                        "purgeOnNoConsumers": "true",
                        "maxConsumers": "30"
                    }
                }
            }
        }
    }
  2. Save the file with one of the following extensions depending on the format of the properties in the file:

    • .properties if the properties are in text format. For example, addresses.properties.
    • .json if the properties are in JSON format. For example, addresses.json.
    Note

    If you have multiple brokers in your deployment, you can add addresses to individual brokers by naming the file with a prefix of broker-<ordinal>. Replace <ordinal> with the ordinal number assigned to the specific broker pod by the StatefulSet. For example, a file name of broker-0.addresses.properties adds the addresses to the first pod, broker-1.addresses.properties to the second pod, and so on.

  3. Create a secret that contains the file you created. For example:

    oc create secret generic address-settings-bp --from-file=addresses.properties
    Note

    The secret name must have a suffix of -bp. When a secret has a -bp suffix, the broker finds .properties and .json files in the directory where the secret is mounted on the broker pod.

  4. Add a reference to the secret in the extraMounts attribute so the Operator mounts the properties files that are in the secret on each broker pod:

    deploymentPlan:
      ...
      extraMounts:
        secrets:
        - "address-settings-bp"
      ...

    The Operator mounts the file or files that are in the secret in a /amq/extra/secrets/<secret name> directory on each broker pod.

    At startup, the broker searches each mounted directory for files that have a .properties or .json extension, sorts the files alphabetically, and applies the configuration in the files one after another. Within a file, the broker applies the properties in the order in which they are listed.
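    To confirm that the broker can see the mounted files, you can list the contents of the mount directory on a broker pod. The pod name ex-aao-ss-0 in the following sketch is an example; substitute a pod name and secret name from your deployment.

    oc exec ex-aao-ss-0 -- ls /amq/extra/secrets/address-settings-bp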

You can configure a high availability (HA) leader-follower topology for two standalone brokers. Both brokers must be configured to persist messages to the same journal or JDBC database. Only the leader broker processes client requests at any given time.

A leader-follower configuration has separate deployments that each have a single broker instance. High availability is achieved by the brokers competing to acquire a lock for the database or the shared volume. The broker that acquires a lock becomes the leader broker, which serves client requests. The broker that is unable to acquire a lock becomes a follower. A follower is a passive broker that continuously tries to obtain a lock for the database or the shared volume. If the follower obtains a lock, it immediately becomes the leader and starts to serve clients.

Leader-follower deployments provide a faster mean time to repair (MTTR) for a node failure than that provided by OpenShift for a single deployment. In leader-follower deployments, the brokers can be on separate clusters to protect against a cluster failure. These clusters can be in different data centers to also make the broker service resilient to a data center outage, provided the storage used to persist messages is available across data centers.

You can configure leader-follower broker deployments that persist messages to a shared JDBC database.

Prerequisite

You have a container image that contains the JAR file for the JDBC database you want to use with AMQ Broker. For information about creating container images, see Creating images in the OpenShift documentation. In the configuration for each broker, you can specify an init container to copy the JAR file from the container image to a location that is available to the broker at runtime.
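The following sketch shows one way to build such an image with podman. The base image, the JAR file name (ojdbc11.jar), and the registry path are assumptions; use the driver JAR for your database and a registry that your OpenShift cluster can pull from.

cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY ojdbc11.jar /jars/ojdbc11.jar
EOF
podman build -t quay.io/<your-organization>/oracle-jdbc-driver:latest .
podman push quay.io/<your-organization>/oracle-jdbc-driver:latest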

Procedure

  1. Configure two ActiveMQArtemis custom resource instances to create separate broker deployments.

    In each custom resource, specify a unique name and ensure that the clustered and persistenceEnabled attributes are set to false. Set the size attribute to 1 to create a single broker in each deployment. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-a
    spec:
      deploymentPlan:
        size: 1
        clustered: false
        persistenceEnabled: false
    ---
    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-b
    spec:
      deploymentPlan:
        size: 1
        clustered: false
        persistenceEnabled: false
    Note

    If both broker deployments are on the same OpenShift cluster, ensure that the broker pods are provisioned on separate nodes to avoid an outage of both brokers if a node failure occurs. For more information about controlling the placement of pods on nodes, see Section 4.17, “Controlling placement of broker pods on OpenShift Container Platform nodes”.

  2. Add a new liveness probe to each CR.

    If you do not configure a liveness probe, a default probe is enabled to check the health of a broker. The default probe checks that the AMQ Management Console is reachable. In a leader-follower configuration, the AMQ Management Console is not reachable on the broker that is currently the follower, which causes the liveness probe to fail on that broker. Each time the liveness probe fails, the broker container is restarted, which puts the broker in a persistent restart loop. As a result, the follower broker enters a CrashLoopBackOff state and is not available to become the leader if the current leader fails.

    To prevent the default liveness probe from running, you must configure a new liveness probe that can run successfully when a broker is either a leader or a follower. The following is an example of a liveness probe that meets this requirement. In this example, the liveness probe checks that the command to run the broker was executed, which is indicated by the presence of the cli.lock lock file.

    spec:
      ..
      livenessProbe:
        exec:
          command:
          - test
          - -f
          - /home/jboss/amq-broker/lock/cli.lock
      ..

    For more information about configuring liveness probes, see Section 4.15.2, “Configuring liveness and readiness probes”.

  3. In each broker configuration, enable JDBC database persistence by using the brokerProperties attribute. For example:

    spec:
      ..
      brokerProperties:
      - storeConfiguration=DATABASE
      - storeConfiguration.jdbcDriverClassName=<class name>
      - storeConfiguration.jdbcConnectionUrl=jdbc:<Database URL>
      - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      - storeConfiguration.jdbcLockRenewPeriodMillis=2000
      - storeConfiguration.jdbcLockExpirationMillis=6000

    For more information about enabling JDBC database persistence, see Section 4.5.2, “Configuring database persistence”.

  4. In each broker configuration, configure the broker to load the JAR file required to connect to the JDBC database.

    • Use the resourceTemplates attribute to customize the StatefulSet resource for each broker. In the customization, use the patch attribute to specify an init container that copies the JAR file from the custom container image you prepared to the broker pod.
    • Use the env attribute to create an ARTEMIS_EXTRA_LIBS environment variable to extend the broker’s Java classpath to include the directory to which the JAR file for the JDBC database is copied. By extending the Java classpath, the broker can load the JAR file from the specified directory on the pod at runtime.

      spec:
        ..
        env:
        - name: ARTEMIS_EXTRA_LIBS
          value: '/amq/init/config/extra-libs'
        resourceTemplates:
        - selector:
            kind: StatefulSet
          patch:
            kind: StatefulSet
            spec:
              template:
                spec:
                  initContainers:
                  - name: jdbc-driver-init
                    image: <custom container image with JAR>
                    volumeMounts:
                    - name: amq-cfg-dir
                      mountPath: /amq/init/config
                    command:
                    - "bash"
                    - "-c"
                    - "mkdir -p /amq/init/config/extra-libs && cp <__JAR file_> /amq/init/config/extra-libs"

      For more information about customizing OpenShift resources created by the Operator, see Section 4.21, “Customizing OpenShift resources created by the Operator”.

  5. Save each custom resource.

    The following example shows the configuration for leader-follower broker deployments that use an Oracle database.

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-a
    spec:
      deploymentPlan:
        size: 1
        clustered: false
        persistenceEnabled: false
        livenessProbe:
          exec:
            command:
            - test
            - -f
            - /home/jboss/amq-broker/lock/cli.lock
      env:
        - name: ARTEMIS_EXTRA_LIBS
          value: '/amq/init/config/extra-libs'
      brokerProperties:
        - criticalAnalyzer=false
        - storeConfiguration=DATABASE
        - storeConfiguration.jdbcDriverClassName=oracle.jdbc.OracleDriver
        - storeConfiguration.jdbcConnectionUrl=jdbc:<Database URL>
        - storeConfiguration.jdbcLockRenewPeriodMillis=2000
        - storeConfiguration.jdbcLockExpirationMillis=6000
        - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      acceptors:
      - name: ext-acceptor
        protocols: CORE
        port: 61626
        expose: true
        sslEnabled: true
        sslSecret: ext-acceptor-ssl-secret
      console:
        expose: true
      resourceTemplates:
      - selector:
          kind: StatefulSet
        patch:
          kind: StatefulSet
          spec:
            template:
              spec:
                initContainers:
                - name: oracle-database-jdbc-driver-init
                  image: <custom container image with JAR>
                  volumeMounts:
                  - name: amq-cfg-dir
                    mountPath: /amq/init/config
                  command:
                  - "bash"
                  - "-c"
                  - "mkdir -p /amq/init/config/extra-libs && cp <JAR file> /amq/init/config/extra-libs"
    ---
    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-b
    spec:
      deploymentPlan:
        size: 1
        clustered: false
        persistenceEnabled: false
        livenessProbe:
          exec:
            command:
            - test
            - -f
            - /home/jboss/amq-broker/lock/cli.lock
      env:
        - name: ARTEMIS_EXTRA_LIBS
          value: '/amq/init/config/extra-libs'
      brokerProperties:
        - criticalAnalyzer=false
        - storeConfiguration=DATABASE
        - storeConfiguration.jdbcDriverClassName=oracle.jdbc.OracleDriver
        - storeConfiguration.jdbcConnectionUrl=jdbc:<Database URL>
        - storeConfiguration.jdbcLockRenewPeriodMillis=2000
        - storeConfiguration.jdbcLockExpirationMillis=6000
        - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      acceptors:
      - name: ext-acceptor
        protocols: CORE
        port: 61626
        expose: true
        sslEnabled: true
        sslSecret: ext-acceptor-ssl-secret
      console:
        expose: true
      resourceTemplates:
      - selector:
          kind: StatefulSet
        patch:
          kind: StatefulSet
          spec:
            template:
              spec:
                initContainers:
                - name: oracle-database-jdbc-driver-init
                  image: <custom container image with JAR>
                  volumeMounts:
                  - name: amq-cfg-dir
                    mountPath: /amq/init/config
                  command:
                  - "bash"
                  - "-c"
                  - "mkdir -p /amq/init/config/extra-libs && cp <JAR file> /amq/init/config/extra-libs"

You can configure leader-follower broker deployments that persist messages to a journal on a volume that is shared by both broker instances.

Prerequisite

You created a Persistent Volume Claim (PVC) to request a shared volume to store messaging data for both brokers. Ensure that the capacity you request is sufficient to store the messaging data on the shared volume. In the following PVC example, 2 gigabytes of storage capacity is requested.

oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-volume
  namespace: activemq
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
EOF
Note

After you create the PVC, the output of the oc get pvc command shows that the shared volume has a pending status until the brokers are deployed and bind to the volume. For example:

NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
shared-volume   Pending                                      gp3-csi        <unset>                 7s
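The shared volume requires a storage class that supports the ReadWriteMany access mode. If your default storage class does not support ReadWriteMany, add the storageClassName attribute to the PVC to select one that does. The storage class value in the following sketch is a placeholder; specify a storage class that is available in your cluster.

spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  storageClassName: <storage class that supports ReadWriteMany>
  resources:
    requests:
      storage: 2Gi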

Procedure

  1. Prepare and apply a YAML configuration to create leader-follower broker deployments.

    Ensure that you set the persistenceEnabled attribute to false so the Operator does not create the resources to attach a separate persistent volume to each broker pod. In a leader-follower configuration, both brokers must share the same persistent volume, which must be provisioned before you apply the YAML configuration. The Operator uses the extraVolumes and extraVolumeMounts attributes in the YAML to mount that persistent volume on both pods. In addition, the brokerProperties attribute in the YAML configures the broker to create the journal and other data files on that volume.

    For example:

    oc apply -f - <<EOF
    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-a
    spec:
      deploymentPlan:
        size: 1
        persistenceEnabled: false
        clustered: false
        extraVolumes:
        - name: extra-volume
          persistentVolumeClaim:
            claimName: shared-volume
        extraVolumeMounts:
        - name: extra-volume
          mountPath: /opt/amq-broker/data
        livenessProbe:
          exec:
            command:
            - test
            - -f
            - /home/jboss/amq-broker/lock/cli.lock
      brokerProperties:
      - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      - journalDirectory=/opt/amq-broker/data/journal
      - pagingDirectory=/opt/amq-broker/data/paging
      - bindingsDirectory=/opt/amq-broker/data/bindings
      - largeMessagesDirectory=/opt/amq-broker/data/largemessages
      acceptors:
      - name: ext-acceptor
        protocols: CORE
        port: 61626
        expose: true
        sslEnabled: true
        sslSecret: ext-acceptor-ssl-secret
    ---
    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: peer-broker-b
    spec:
      deploymentPlan:
        size: 1
        persistenceEnabled: false
        clustered: false
        extraVolumes:
        - name: extra-volume
          persistentVolumeClaim:
            claimName: shared-volume
        extraVolumeMounts:
        - name: extra-volume
          mountPath: /opt/amq-broker/data
        livenessProbe:
          exec:
            command:
            - test
            - -f
            - /home/jboss/amq-broker/lock/cli.lock
      brokerProperties:
      - HAPolicyConfiguration=SHARED_STORE_PRIMARY
      - journalDirectory=/opt/amq-broker/data/journal
      - pagingDirectory=/opt/amq-broker/data/paging
      - bindingsDirectory=/opt/amq-broker/data/bindings
      - largeMessagesDirectory=/opt/amq-broker/data/largemessages
      acceptors:
      - name: ext-acceptor
        protocols: CORE
        port: 61626
        expose: true
        sslEnabled: true
        sslSecret: ext-acceptor-ssl-secret
    EOF
    Note

    If both broker deployments are on the same OpenShift cluster, ensure that the broker pods are provisioned on separate nodes to avoid an outage of both brokers if a node failure occurs. For more information about controlling the placement of pods on nodes, see Section 4.17, “Controlling placement of broker pods on OpenShift Container Platform nodes”.

    The following table describes the YAML configuration in the example that is specific to a leader-follower deployment.

    Table 4.8. Leader-follower configuration

    metadata.name

    Specify a unique name for the broker in each deployment.

    size

    Specify a value of 1 to create a single broker in each deployment.

    persistenceEnabled

    Set to false so the Operator does not create the resources to attach a separate persistent volume to each broker pod. In a leader-follower configuration, both brokers must share the same persistent volume, which must be provisioned before you apply the leader-follower YAML configuration.

    clustered

    Set to false to disable clustering.

    extraVolumes

    Specify the details of the PVC you created to request a shared persistent volume for the brokers, which is a prerequisite to creating leader-follower deployments.

    name
    Use a string value to assign a name to the volume.
    claimName
    Ensure that this matches the value of the metadata.name attribute in the PVC you created.

    extraVolumeMounts

    Specify the details to mount the shared persistent volume on each broker pod.

    name
    Ensure that this matches the extraVolumes.name value.
    mountPath
    The path to which the shared volume is mounted on each broker pod.

    livenessProbe

    A default liveness probe is enabled to check the health of a broker. The default probe checks that the AMQ Management Console is reachable. In a leader-follower configuration, the AMQ Management Console is not reachable on the broker that is currently the follower, which causes the liveness probe to fail on that broker. Each time the liveness probe fails, the broker container is restarted, which puts the broker in a persistent restart loop. As a result, the follower broker enters a CrashLoopBackOff state and is not available to become the leader if the current leader fails.

    Configure a new liveness probe that can run successfully when a broker is either a leader or a follower, as shown in the example. In the example, the liveness probe checks that the command to run the broker was executed, which is indicated by the presence of the cli.lock lock file.

    For more information about configuring liveness probes, see Section 4.15.2, “Configuring liveness and readiness probes”.

    brokerProperties

    Specify the directories on the mounted persistent volume to store the journal and other messaging data files.

    HAPolicyConfiguration
    Set to SHARED_STORE_PRIMARY to ensure that each broker uses a file lock to prevent concurrent access to the shared store by both brokers.
    journalDirectory
    File system location to store the message journal.
    pagingDirectory
    File system location to store paged messages.
    bindingsDirectory
    File system location to store the bindings journal.
    largeMessagesDirectory
    File system location to store large messages.
  2. Use the oc get pods command to identify which broker is the leader and which is the follower. In the following example, peer-broker-a-ss-0 is the leader, which is indicated by the 1/1 entry in the READY column.

    NAME                 READY   STATUS             RESTARTS         AGE
    peer-broker-a-ss-0   1/1     Running            0                72m
    peer-broker-b-ss-0   0/1     Running            0                72m
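    To verify failover, you can delete the leader pod and watch the follower acquire the file lock and become the new leader. The pod names in the following sketch match the example deployment; the time the follower takes to report 1/1 in the READY column depends on how quickly it acquires the lock.

    oc delete pod peer-broker-a-ss-0
    oc get pods -w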

Mirroring is the process of copying data from a broker to one or more other brokers for disaster recovery. The source and target brokers in a mirror can be on separate OpenShift clusters in different data centers to protect against a data center outage.

Mirroring can also be used for data backup or to create a failover broker for use during maintenance windows. Messages that existed before a mirror is created are not mirrored.

Important

Ensure that you do not implement a mirror topology that creates a loop, which causes message duplication. For example, if you have a 3-broker deployment where broker 1 mirrors data to broker 2 and broker 2 mirrors data to broker 3, do not mirror data from broker 3 back to broker 1, which would create a loop. Similarly, do not create a topology of 3 or more brokers where each broker mirrors data to all of the other brokers.

Note

You can implement a dual mirror topology where both brokers are mirrors of each other.
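For example, in a dual mirror topology each broker lists the other as a mirror target in its brokerProperties. The following sketch assumes two brokers named broker-east and broker-west with AMQP acceptors that are reachable from each other; the connection names and URL placeholders are illustrative only.

# CR for broker-east
spec:
  ...
  brokerProperties:
  - AMQPConnections.west.uri=tcp://<connection URL for the broker-west AMQP acceptor>
  - AMQPConnections.west.connectionElements.mirror.type=MIRROR
  ...
# CR for broker-west
spec:
  ...
  brokerProperties:
  - AMQPConnections.east.uri=tcp://<connection URL for the broker-east AMQP acceptor>
  - AMQPConnections.east.connectionElements.mirror.type=MIRROR
  ...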

Procedure

  1. Configure two ActiveMQArtemis custom resource (CR) instances to create a source broker and a target broker for the mirrored data. Specify a unique name for each. For example:

    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: production-broker
      namespace: production
    spec:
      deploymentPlan:
        size: 1
    ---
    apiVersion: broker.amq.io/v1beta1
    kind: ActiveMQArtemis
    metadata:
      name: mirror-broker
      namespace: dr
    spec:
      deploymentPlan:
        size: 1
  2. In the CR of the target broker, add an acceptor for the mirror connection. For example:

    metadata:
      name: mirror-broker
      namespace: dr
    spec:
      ...
      acceptors:
      - expose: true
        name: amqp
        port: 5672
        protocols: amqp
      ...

    After you create the acceptor, a route is exposed for the acceptor in the following format:

    <broker name>-<acceptor name>-<ordinal>-svc-rte.<namespace>.<hostname>

    When you add the mirror configuration to the source broker later in this procedure, you use this route to create a connection to the target broker.

    The <ordinal> is the ordinal assigned to the broker pod by the StatefulSet. The first broker pod in a cluster is assigned an ordinal of 0. The second pod is assigned an ordinal of 1 and so on. The ordinal value of a pod is stored in a STATEFUL_SET_ORDINAL variable. You can use this variable instead of the ordinal value in the mirror connection details on the source broker. For example:

    <broker name>-<acceptor name>-${STATEFUL_SET_ORDINAL}-svc-rte.<namespace>.<hostname>

    By using the STATEFUL_SET_ORDINAL variable, you ensure that a source creates a mirror connection to a target that has the same ordinal if you scale up the number of brokers on the source and target clusters.

  3. If you want to send data securely over the mirror connection, configure Transport Layer Security (TLS) for the connection. Depending on your requirements, you can use various methods to generate SSL/TLS certificates. For example, you can use a trusted Certificate Authority (CA), the cert-manager Operator for OpenShift, or Secure Sockets Layer (SSL) tools.

    The following example provides a summary of the steps to configure mutual TLS authentication (mTLS) between the brokers by using SSL/TLS tools to manually generate self-signed certificates.

    1. Generate a self-signed SSL/TLS certificate for the target broker. For example:

      keytool -genkey -trustcacerts -alias broker -keyalg RSA -keystore broker.ks -keypass password -storepass password

    2. Export the public key of the SSL/TLS certificate you created to a file, so you can import the key into a truststore file for use on the source broker. For example:

      keytool -export -noprompt -alias broker -keystore broker.ks -file for_source_truststore -storepass password

    3. Import the public key of the SSL/TLS certificate that you exported into a truststore file for use on the source broker. For example:

      keytool -import -noprompt -trustcacerts -alias broker -keystore client.ts -file for_source_truststore -storepass password

    4. Generate a self-signed SSL/TLS certificate for the source broker. For example:

      keytool -genkey -trustcacerts -alias broker -keyalg RSA -keystore broker.ks -keypass password -storepass password

    5. Export the public key of the SSL/TLS certificate you created, so you can import the key into a truststore file for use on the target broker.

      keytool -export -noprompt -alias broker -keystore broker.ks -file for_target_truststore -storepass password

    6. Import the public key of the SSL/TLS certificate that you exported into a truststore file for use on the target broker. For example:

      keytool -import -noprompt -trustcacerts -alias broker -keystore client.ts -file for_target_truststore -storepass password

    7. Add the keystore and truststore files you created for the source broker to a secret in the namespace of the source broker. For example:

      oc create secret generic mirror --from-file=broker.ks=broker.ks --from-file=client.ts=client.ts --from-literal=keyStorePassword=password --from-literal=trustStorePassword=password

      Repeat this step to add the keystore and truststore files you created for the target broker to a secret in the namespace of the target broker.

    8. In the acceptor configured for the target broker, set the sslEnabled attribute to true and specify the name of the secret that you created in the namespace of the target broker. For example:

      metadata:
        name: mirror-broker
        namespace: dr
      spec:
        ...
        acceptors:
        - expose: true
          name: amqp
          port: 5672
          protocols: amqp
          sslEnabled: true
          sslSecret: mirror
        ...
    9. In the CR of the source broker, add a reference to the secret you created in the namespace of the source broker to the extraMounts attribute. This step is required so the Operator mounts the keystore and truststore files in the secret on each broker pod. For example:

      spec:
        ...
        deploymentPlan:
          extraMounts:
            secrets:
            - mirror
        ...

      The keystore and truststore files in the secret are mounted in a /amq/extra/secrets/<secret name> directory on a broker pod.

  4. In the CR of the source broker, under the brokerProperties attribute, configure the mirror connection details. For the connection URI, specify the route exposed for the acceptor that you created on the target broker. If you want to use SSL/TLS to secure the mirror connection, also include the following in the URI:

    • a port number of 443
    • sslEnabled=true to enable SSL/TLS
    • the paths and credentials for the keystore and truststore files

    For example:

    spec:
      ...
      brokerProperties:
      - AMQPConnections.datacenter1.uri=tcp://broker-dr-amqp-${STATEFUL_SET_ORDINAL}-svc-rte-dr.apps.lab.redhat.com:443?sslEnabled=true;trustStorePath=/amq/extra/secrets/mirror/client.ts;trustStorePassword=password;keyStorePath=/amq/extra/secrets/mirror/broker.ks;keyStorePassword=password
      - AMQPConnections.datacenter1.connectionElements.mirror.type=MIRROR
      ...
    Note

    If required, you can configure multiple mirror targets for the source broker. For example:

    spec:
      ...
      brokerProperties:
      - AMQPConnections.datacenter1.uri=tcp://primary-mirror-broker-amqp-${STATEFUL_SET_ORDINAL}-svc.dr.svc.cluster.local:61616
      - AMQPConnections.datacenter1.connectionElements.mirror.type=MIRROR
      - AMQPConnections.datacenter2.uri=tcp://backup-mirror-broker-amqp-${STATEFUL_SET_ORDINAL}-svc.dr.svc.cluster.local:61616
      - AMQPConnections.datacenter2.connectionElements.mirror.type=MIRROR
      ...
  5. In the CR of the source broker, configure additional mirror configuration properties as required. For example:

    - AMQPConnections.datacenter1.user=admin
    - AMQPConnections.datacenter1.password=admin
    - AMQPConnections.datacenter1.retryInterval=5000
    - AMQPConnections.datacenter1.connectionElements.mirror.messageAcknowledgements=true
    - AMQPConnections.datacenter1.connectionElements.mirror.queueCreation=true
    - AMQPConnections.datacenter1.connectionElements.mirror.queueRemoval=true
    - AMQPConnections.datacenter1.connectionElements.mirror.addressFilter=addresses
    Note

    You can use any alphanumeric string to name the AMQP connection. In the previous example, the AMQP connection name is datacenter1.

    AMQPConnections.<name>.user
    The name of the user on the target broker with the permissions to mirror the required events.
    AMQPConnections.<name>.password
    The password of the user on the target broker.
    AMQPConnections.<name>.retryInterval
    The interval, in milliseconds, between retry attempts to connect to the target broker.
    AMQPConnections.<name>.connectionElements.mirror.messageAcknowledgements
    Specifies whether message acknowledgments are mirrored. The default value is true.
    AMQPConnections.<name>.connectionElements.mirror.queueCreation
    Specifies whether queue or address creation events are mirrored. The default value is true.
    AMQPConnections.<name>.connectionElements.mirror.queueRemoval
    Specifies whether queue or address removal events are mirrored. The default value is true.
    AMQPConnections.<name>.connectionElements.mirror.addressFilter

    A filter that the source broker can use to include or exclude addresses for which events are mirrored. For example, you might want to exclude temporary queues from being mirrored.

    You specify the filter as a comma-separated list of addresses. If you want to specify a list of addresses to exclude, prefix each address with an exclamation mark (!). In the following example, events for addresses that start with us. and europe. are not mirrored.

    AMQPConnections.<name>.connectionElements.mirror.addressFilter=!us.,!europe.

    Note

    If you specify one or more addresses to include, events for all other addresses are not mirrored. If you specify one or more addresses to exclude, events for all other addresses are mirrored.

  6. In the status section of the CR for the source broker, verify that the status of the BrokerPropertiesApplied condition is true to confirm that all the properties you specified in the CR were applied. For more information, see Section 3.6, “Viewing status information for your broker deployment”.
  7. Check the logs of the source broker pod for a line similar to the following to confirm that the mirror connection was established.

    broker-prod-ss-0 broker-prod-container Connected on Server AMQP Connection dr on broker-dr-amqp-0-svc-rte-dr.lab.redhat.com:443 after 0 retries
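    You can also verify that messages are mirrored by producing test messages on the source broker and checking queue statistics on the target broker, for example with the artemis CLI inside the broker pods. The pod names, namespaces, credentials, broker URL, and the path to the artemis command in the following sketch are assumptions based on the example deployment; adjust them for your environment.

    oc exec production-broker-ss-0 -n production -- /home/jboss/amq-broker/bin/artemis producer --url tcp://localhost:61616 --user admin --password admin --destination queue://test --message-count 10
    oc exec mirror-broker-ss-0 -n dr -- /home/jboss/amq-broker/bin/artemis queue stat --url tcp://localhost:61616 --user admin --password admin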
