
Chapter 4. Configuring Operator-based broker deployments


4.1. How the Operator generates the broker configuration

Before you use Custom Resource (CR) instances to configure your broker deployment, you should understand how the Operator generates the broker configuration.

When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod.

The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image.

By default, the AMQ Broker Operator uses a built-in Init Container. The Init Container uses the main CR instance for your deployment to generate the configuration used by each broker application container.

If you have specified address settings in the CR, the Operator generates a default configuration and then merges or replaces that configuration with the configuration specified in the CR. This process is described in the section that follows.

4.1.1. How the Operator generates the address settings configuration

If you have included an address settings configuration in the main Custom Resource (CR) instance for your deployment, the Operator generates the address settings configuration for each broker as described below.

  1. The Operator runs the Init Container before the broker application container. The Init Container generates a default address settings configuration. The default address settings configuration is shown below.

    <address-settings>
        <!--
        if you define auto-create on certain queues, management has to be auto-create
        -->
        <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!--
            with -1 only the global-max-size is in use for limiting
            -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    
        <!-- default for catch all -->
        <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!--
            with -1 only the global-max-size is in use for limiting
            -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    </address-settings>
  2. If you have also specified an address settings configuration in your Custom Resource (CR) instance, the Init Container processes that configuration and converts it to XML.
  3. Based on the value of the applyRule property in the CR, the Init Container merges or replaces the default address settings configuration shown above with the configuration that you have specified in the CR. The result of this merge or replacement is the final address settings configuration that the broker will use.
  4. When the Init Container has finished generating the broker configuration (including address settings), the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. You can inspect the address settings configuration in the broker.xml configuration file. For a running broker, this file is located in the /home/jboss/amq-broker/etc directory.
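If you want to inspect the generated file directly, you can open a shell command in a running broker Pod. For example, the following command is a sketch; the Pod name ex-aao-ss-0 is illustrative and depends on your deployment name:

    $ oc exec ex-aao-ss-0 -- cat /home/jboss/amq-broker/etc/broker.xml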

Additional resources

4.1.2. Directory structure of a broker Pod

When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod.

The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image.

When generating the configuration for a broker instance, the Init Container uses files contained in a default installation directory. This installation directory is on a volume that the Operator mounts to the broker Pod and which the Init Container and broker container share. The path that the Init Container uses to mount the shared volume is defined in an environment variable called CONFIG_INSTANCE_DIR. The default value of CONFIG_INSTANCE_DIR is /amq/init/config. In the documentation, this directory is referred to as <install_dir>.

Note

You cannot change the value of the CONFIG_INSTANCE_DIR environment variable.

By default, the installation directory has the following sub-directories:

Sub-directory | Contents
<install_dir>/bin | Binaries and scripts needed to run the broker.
<install_dir>/etc | Configuration files.
<install_dir>/data | The broker journal.
<install_dir>/lib | JARs and libraries needed to run the broker.
<install_dir>/log | Broker log files.
<install_dir>/tmp | Temporary web application files.

When the Init Container has finished generating the broker configuration, the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. When the broker Pod is initialized and running, the broker configuration is located in the /home/jboss/amq-broker directory (and subdirectories) of the broker.
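For example, you can list this directory in a running broker Pod to confirm the structure described above. The Pod name shown is illustrative:

    $ oc exec ex-aao-ss-0 -- ls /home/jboss/amq-broker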

Additional resources

4.2. Configuring addresses and queues for Operator-based broker deployments

For an Operator-based broker deployment, you use two separate Custom Resource (CR) instances to configure addresses and queues and their associated settings.

  • To create addresses and queues on your brokers, you deploy a CR instance based on the address Custom Resource Definition (CRD).

    • If you used the OpenShift command-line interface (CLI) to install the Operator, the address CRD is the broker_activemqartemisaddress_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted.
    • If you used OperatorHub to install the Operator, the address CRD is the ActiveMQArtemisAddress CRD listed under Administration → Custom Resource Definitions in the OpenShift Container Platform web console.
  • To configure address and queue settings that you then match to specific addresses, you include configuration in the main Custom Resource (CR) instance used to create your broker deployment.

    • If you used the OpenShift CLI to install the Operator, the main broker CRD is the broker_activemqartemis_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted.
    • If you used OperatorHub to install the Operator, the main broker CRD is the ActiveMQArtemis CRD listed under Administration → Custom Resource Definitions in the OpenShift Container Platform web console.
    Note

    To configure address settings for an Operator-based deployment, you must be using at least the latest version of the Operator for AMQ Broker 7.7 (that is, version 0.17). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.8, see Chapter 6, Upgrading an Operator-based broker deployment.

    In general, the address and queue settings that you can configure for a broker deployment on OpenShift Container Platform are fully equivalent to those of standalone broker deployments on Linux or Windows. However, you should be aware of some differences in how those settings are configured. Those differences are described in the following sub-section.

4.2.1. Differences in configuration of address and queue settings between OpenShift and standalone broker deployments

  • To configure address and queue settings for broker deployments on OpenShift Container Platform, you add configuration to an addressSettings section of the main Custom Resource (CR) instance for the broker deployment. This contrasts with standalone deployments on Linux or Windows, for which you add configuration to an address-settings element in the broker.xml configuration file.
  • The format used for the names of configuration items differs between OpenShift Container Platform and standalone broker deployments. For OpenShift Container Platform deployments, configuration item names are in camel case, for example, defaultQueueRoutingType. By contrast, configuration item names for standalone deployments are in lower case and use a dash (-) separator, for example, default-queue-routing-type.

    The following table shows some further examples of this naming difference, followed by a short example that shows the same setting in both formats.

    Configuration item for standalone broker deployment | Configuration item for OpenShift broker deployment
    address-full-policy | addressFullPolicy
    auto-create-queues | autoCreateQueues
    default-queue-routing-type | defaultQueueRoutingType
    last-value-queue | lastValueQueue
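    For example, the max-delivery-attempts setting from a standalone broker.xml file corresponds to the maxDeliveryAttempts property in a CR. The following sketch shows both forms for a single, illustrative address named myAddress:

        # Standalone deployment (broker.xml):
        #   <address-setting match="myAddress">
        #       <max-delivery-attempts>5</max-delivery-attempts>
        #   </address-setting>
        #
        # OpenShift deployment (main broker CR):
        addressSettings:
            addressSetting:
             -  match: myAddress
                maxDeliveryAttempts: 5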

Additional resources

4.2.2. Creating addresses and queues for an Operator-based broker deployment

The following procedure shows how to use a Custom Resource (CR) instance to add an address and associated queue to an Operator-based broker deployment.

Note

To create multiple addresses and/or queues in your broker deployment, you need to create separate CR files and deploy them individually, specifying new address and/or queue names in each case. In addition, the name attribute of each CR instance must be unique.
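For example, the following sketch shows the content of two such CR files. All of the names are illustrative; the point is that each CR instance has its own metadata name and defines its own address and queue:

    # First CR file
    apiVersion: broker.amq.io/v2alpha2
    kind: ActiveMQArtemisAddress
    metadata:
        name: myAddressDeployment0
    spec:
        addressName: myAddress0
        queueName: myQueue0
        routingType: anycast

    # Second CR file, deployed separately, with a unique name
    apiVersion: broker.amq.io/v2alpha2
    kind: ActiveMQArtemisAddress
    metadata:
        name: myAddressDeployment1
    spec:
        addressName: myAddress1
        queueName: myQueue1
        routingType: anycast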

Prerequisites

Procedure

  1. Start configuring a Custom Resource (CR) instance to define addresses and queues for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the address CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemisAddress CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemisAddress.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the spec section of the CR, add lines to define an address, queue, and routing type. For example:

    apiVersion: broker.amq.io/v2alpha2
    kind: ActiveMQArtemisAddress
    metadata:
        name: myAddressDeployment0
        namespace: myProject
    spec:
        ...
        addressName: myAddress0
        queueName: myQueue0
        routingType: anycast
        ...

    The preceding configuration defines an address named myAddress0 with a queue named myQueue0 and an anycast routing type.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project for the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/address_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
  4. (Optional) To delete an address and queue previously added to your deployment using a CR instance, use the following command:

    $ oc delete -f <path/to/address_custom_resource_instance>.yaml
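You can also list the address CR instances that currently exist in the project, for example to verify a creation or deletion. The following command assumes the default plural resource name for the address CRD:

    $ oc get activemqartemisaddresses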

4.2.3. Matching address settings to configured addresses in an Operator-based broker deployment

If delivery of a message to a client is unsuccessful, you might not want the broker to make ongoing attempts to deliver the message. To prevent infinite delivery attempts, you can define a dead letter address and an associated dead letter queue. After a specified number of delivery attempts, the broker removes an undelivered message from its original queue and sends the message to the configured dead letter address. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages.

The following example shows how to configure a dead letter address and queue for an Operator-based broker deployment. The example demonstrates how to:

  • Use the addressSettings section of the main broker Custom Resource (CR) instance to configure address settings.
  • Match those address settings to addresses in your broker deployment.

Prerequisites

Procedure

  1. Start configuring a CR instance to add a dead letter address and queue to receive undelivered messages for each broker in the deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment.
      2. Start a new CR instance based on the address CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemisAddress CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemisAddress.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

  2. In the spec section of the CR, add lines to specify a dead letter address and queue to receive undelivered messages. For example:

    apiVersion: broker.amq.io/v2alpha2
    kind: ActiveMQArtemisAddress
    metadata:
        name: ex-aaoaddress
    spec:
        ...
        addressName: myDeadLetterAddress
        queueName: myDeadLetterQueue
        routingType: anycast
        ...

    The preceding configuration defines a dead letter address named myDeadLetterAddress with a dead letter queue named myDeadLetterQueue and an anycast routing type.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  3. Deploy the address CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project for the broker deployment.

        $ oc project <project_name>
      3. Create the address CR.

        $ oc create -f <path/to/address_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
  4. Start configuring a Custom Resource (CR) instance for a broker deployment.

    1. From a sample CR file:

      1. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Start a new CR instance based on the main broker CRD. In the left pane, click Administration → Custom Resource Definitions.
      2. Click the ActiveMQArtemis CRD.
      3. Click the Instances tab.
      4. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true

    Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder. This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, “How the Operator chooses container images”.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  5. In the spec section of the CR, add a new addressSettings section that contains a single addressSetting section, as shown below.

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
        addressSettings:
            addressSetting:
  6. Add a single instance of the match property to the addressSetting block. Specify an address-matching expression. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
        addressSettings:
            addressSetting:
             -  match: myAddress
    match
    Specifies the address, or set of addresses, to which the broker applies the configuration that follows. In this example, the value of the match property corresponds to a single address called myAddress.
  7. Add properties related to undelivered messages and specify values. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
        addressSettings:
            addressSetting:
             -  match: myAddress
                deadLetterAddress: myDeadLetterAddress
                maxDeliveryAttempts: 5
    deadLetterAddress
    Address to which the broker sends undelivered messages.
    maxDeliveryAttempts

    Maximum number of delivery attempts that a broker makes before moving a message to the configured dead letter address.

    In the preceding example, if the broker makes five unsuccessful attempts to deliver a message to the myAddress address, the broker moves the message to the specified dead letter address, myDeadLetterAddress.

  8. (Optional) Apply similar configuration to another address or set of addresses. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
        addressSettings:
            addressSetting:
             -  match: myAddress
                deadLetterAddress: myDeadLetterAddress
                maxDeliveryAttempts: 5
             -  match: 'myOtherAddresses*'
                deadLetterAddress: myDeadLetterAddress
                maxDeliveryAttempts: 3

    In this example, the value of the second match property includes an asterisk wildcard character. The wildcard character means that the preceding configuration is applied to any address that begins with the string myOtherAddresses.

    Note

    If you use a wildcard expression as a value for the match property, you must enclose the value in single quotation marks, for example, 'myOtherAddresses*'.

  9. At the beginning of the addressSettings section, add the applyRule property and specify a value. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
        addressSettings:
            applyRule: merge_all
            addressSetting:
             -  match: myAddress
                deadLetterAddress: myDeadLetterAddress
                maxDeliveryAttempts: 5
             -  match: 'myOtherAddresses*'
                deadLetterAddress: myDeadLetterAddress
                maxDeliveryAttempts: 3

    The applyRule property specifies how the Operator applies the configuration that you add to the CR for each matching address or set of addresses. The values that you can specify are:

    merge_all
    • For address settings specified in both the CR and the default configuration that match the same address or set of addresses:

      • Replace any property values specified in the default configuration with those specified in the CR.
      • Keep any property values that are specified uniquely in the CR or the default configuration. Include each of these in the final, merged configuration.
    • For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration.
    merge_replace
    • For address settings specified in both the CR and the default configuration that match the same address or set of addresses, include the settings specified in the CR in the final, merged configuration. Do not include any properties specified in the default configuration, even if these are not specified in the CR.
    • For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration.
    replace_all
    Replace all address settings specified in the default configuration with those specified in the CR. The final, merged configuration corresponds exactly to that specified in the CR.
    Note

    If you do not explicitly include the applyRule property in your CR, the Operator uses a default value of merge_all.

  10. Deploy the broker CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Create the CR instance.

        $ oc create -f <path/to/broker_custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
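After the broker Pods are running with the new configuration, you can optionally verify the result of the merge or replacement by inspecting the generated broker.xml file in a broker Pod. For example, the following sketch shows the address-setting element for myAddress; the Pod name ex-aao-ss-0 is illustrative:

    $ oc exec ex-aao-ss-0 -- grep -A 4 'match="myAddress"' /home/jboss/amq-broker/etc/broker.xml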

Additional resources

  • To learn about all of the configuration options for addresses, queues, and address settings for OpenShift Container Platform broker deployments, see Section 11.1, “Custom Resource configuration reference”.
  • If you installed the AMQ Broker Operator using the OpenShift command-line interface (CLI), the installation archive that you downloaded and extracted contains some additional examples of configuring address settings. In the deploy/examples folder of the installation archive, see:

    • artemis-basic-address-settings-deployment.yaml
    • artemis-merge-replace-address-settings-deployment.yaml
    • artemis-replace-address-settings-deployment.yaml
  • For comprehensive information about configuring addresses, queues, and associated address settings for standalone broker deployments, see Addresses, Queues, and Topics in Configuring AMQ Broker. You can use this information to create equivalent configurations for broker deployments on OpenShift Container Platform.
  • For more information about Init Containers in OpenShift Container Platform, see the OpenShift Container Platform documentation.

4.3. Configuring broker storage requirements

To use persistent storage in an Operator-based broker deployment, you set persistenceEnabled to true in the Custom Resource (CR) instance used to create the deployment. If you do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator using a Persistent Volume Claim (PVC). If you want to create a cluster of two brokers with persistent storage, for example, then you need to have two PVs available. By default, each broker in your deployment requires storage of 2 GiB. However, you can configure the CR for your broker deployment to specify the size of PVC required by each broker.

Important
  • To configure the size of the PVC required by the brokers in an Operator-based deployment, you must be using at least the latest version of the Operator for AMQ Broker 7.7 (that is, version 0.17). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.8, see Chapter 6, Upgrading an Operator-based broker deployment.
  • You must add the configuration for broker storage size to the main CR for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running.

4.3.1. Configuring broker storage size

The following procedure shows how to configure the Custom Resource (CR) instance for your broker deployment to specify the size of the Persistent Volume Claim (PVC) required by each broker for persistent message storage.

Important

You must add the configuration for broker storage size to the main CR for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running.

Prerequisites

  • You must be using at least the latest version of the Operator for AMQ Broker 7.7 (that is, version 0.17). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.8, see Chapter 6, Upgrading an Operator-based broker deployment.
  • You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, “Deploying a basic broker instance”.
  • You must have already provisioned Persistent Volumes (PVs) and made these available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage, you need to have two PVs available.

    For more information about provisioning persistent storage, see the OpenShift Container Platform documentation.

Procedure

  1. Start configuring a Custom Resource (CR) instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true

    Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder. This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, “How the Operator chooses container images”.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  2. To specify broker storage requirements, in the deploymentPlan section of the CR, add a storage section. Add a size property and specify a value. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
            storage:
                size: 4Gi
    storage.size
    Size, in bytes, of the Persistent Volume Claim (PVC) that each broker Pod requires for persistent storage. This property applies only when persistenceEnabled is set to true. The value that you specify must include a unit. Supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi).
  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
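After you deploy the CR, you can optionally verify that a Persistent Volume Claim of the requested size was created and bound for each broker Pod. For example:

    $ oc get pvc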

4.4. Configuring resource limits and requests for Operator-based broker deployments

When you create an Operator-based broker deployment, the broker Pods in the deployment run in a StatefulSet on a node in your OpenShift cluster. You can configure the Custom Resource (CR) instance for the deployment to specify the host-node compute resources used by the broker container that runs in each Pod. By specifying limit and request values for CPU and memory (RAM), you can ensure satisfactory performance of the broker Pods.

Important
  • To configure resource limits and requests for the brokers in an Operator-based deployment, you must be using at least the latest version of the Operator for AMQ Broker 7.7 (that is, version 0.17). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.8, see Chapter 6, Upgrading an Operator-based broker deployment.
  • You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running.
  • It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment.
  • The Operator runs a type of container called an Init Container when initializing each broker Pod. Any resource limits and requests that you configure for each broker container also apply to each Init Container. For more information about the use of Init Containers in broker deployments, see Section 4.1, “How the Operator generates the broker configuration”.

You can specify the following limit and request values:

CPU limit
For each broker container running in a Pod, this value is the maximum amount of host-node CPU that the container can consume. If a broker container attempts to exceed the specified CPU limit, OpenShift throttles the container. This ensures that containers have consistent performance, regardless of the number of Pods running on a node.
Memory limit
For each broker container running in a Pod, this value is the maximum amount of host-node memory that the container can consume. If a broker container attempts to exceed the specified memory limit, OpenShift terminates the container. The broker Pod restarts.
CPU request

For each broker container running in a Pod, this value is the amount of host-node CPU that the container requests. The OpenShift scheduler considers the CPU request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources.

The CPU request value is the minimum amount of CPU that the broker container requires to run. However, if there is no contention for CPU on the node, the container can use all available CPU. If you have specified a CPU limit, the container cannot exceed that amount of CPU usage. If there is CPU contention on the node, CPU request values provide a way for OpenShift to weigh CPU usage across all containers.

Memory request

For each broker container running in a Pod, this value is the amount of host-node memory that the container requests. The OpenShift scheduler considers the memory request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources.

The memory request value is the minimum amount of memory that the broker container requires to run. However, the container can consume as much available memory as possible. If you have specified a memory limit, the broker container cannot exceed that amount of memory usage.

CPU is measured in units called millicores. Each node in an OpenShift cluster inspects the operating system to determine the number of CPU cores on the node. Then, the node multiplies that value by 1000 to express the total capacity. For example, if a node has two cores, the CPU capacity of the node is expressed as 2000m. Therefore, if you want to use one-tenth of a single core, you specify a value of 100m.

Memory is measured in bytes. You can specify the value using byte notation (E, P, T, G, M, K) or the binary equivalents (Ei, Pi, Ti, Gi, Mi, Ki). The value that you specify must include a unit.

4.4.1. Configuring broker resource limits and requests

The following example shows how to configure the main Custom Resource (CR) instance for your broker deployment to set limits and requests for CPU and memory for each broker container that runs in a Pod in the deployment.

Important
  • You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running.
  • It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment.

Prerequisites

Procedure

  1. Start configuring a Custom Resource (CR) instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true

    Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder. This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, “How the Operator chooses container images”.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  2. In the deploymentPlan section of the CR, add a resources section. Add limits and requests sub-sections. In each sub-section, add a cpu and memory property and specify values. For example:

    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
            resources:
                limits:
                    cpu: "500m"
                    memory: "1024M"
                requests:
                    cpu: "250m"
                    memory: "512M"
    limits.cpu
    Each broker container running in a Pod in the deployment cannot exceed this amount of host-node CPU usage.
    limits.memory
    Each broker container running in a Pod in the deployment cannot exceed this amount of host-node memory usage.
    requests.cpu
    Each broker container running in a Pod in the deployment requests this amount of host-node CPU. This value is the minimum amount of CPU required for the broker container to run.
    requests.memory
    Each broker container running in a Pod in the deployment requests this amount of host-node memory. This value is the minimum amount of memory required for the broker container to run.
  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
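After you deploy the CR, you can optionally confirm the limits and requests applied to a broker Pod by describing the Pod. The Pod name shown is illustrative:

    $ oc describe pod ex-aao-ss-0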

4.5. Specifying a custom Init Container image

As described in Section 4.1, “How the Operator generates the broker configuration”, the AMQ Broker Operator uses a default, built-in Init Container to generate the broker configuration. To generate the configuration, the Init Container uses the main Custom Resource (CR) instance for your deployment. The only items that you can specify in the CR are those that are exposed in the main broker Custom Resource Definition (CRD).

However, there might be a case where you need to include configuration that is not exposed in the CRD. In this case, you can specify a custom Init Container in your main CR instance. The custom Init Container can modify or add to the configuration that the Operator has already created. For example, you might use a custom Init Container to modify the broker logging settings, or to include extra runtime dependencies (that is, .jar files) in the broker installation directory.

When you build a custom Init Container image, you must follow these important guidelines, which are illustrated by the example after this list:

  • In the build script (for example, a Docker Dockerfile or Podman Containerfile) that you create for the custom image, the FROM instruction must specify the latest version of the AMQ Broker Operator built-in Init Container as the base image. In your script, include the following line:

    FROM registry.redhat.io/amq7/amq-broker-init-rhel7:0.2-13
  • The custom image must include a script called post-config.sh that you include in a directory called /amq/scripts. The post-config.sh script is where you can modify or add to the initial configuration that the Operator generates. When you specify a custom Init Container, the Operator runs the post-config.sh script after it uses your CR instance to generate a configuration, but before it starts the broker application container.
  • As described in Section 4.1.2, “Directory structure of a broker Pod”, the path to the installation directory used by the Init Container is defined in an environment variable called CONFIG_INSTANCE_DIR. The post-config.sh script should use this environment variable name when referencing the installation directory (for example, ${CONFIG_INSTANCE_DIR}/lib) and not the actual value of this variable (for example, /amq/init/config/lib).
  • If you want to include additional resources (for example, .xml or .jar files) in your custom broker configuration, you must ensure that these are included in the custom image and accessible to the post-config.sh script.
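The following sketch illustrates these guidelines by adding an extra runtime dependency to the broker installation directory. The file name extra-library.jar and its staging location are illustrative assumptions; the base image, the /amq/scripts/post-config.sh location, and the CONFIG_INSTANCE_DIR variable come from the guidelines above.

    # Containerfile (or Dockerfile) for the custom Init Container image
    FROM registry.redhat.io/amq7/amq-broker-init-rhel7:0.2-13
    # Script that the Operator runs after it generates the initial configuration
    COPY post-config.sh /amq/scripts/post-config.sh
    # Extra resource that post-config.sh copies into the broker installation (illustrative)
    COPY extra-library.jar /amq/extra/extra-library.jar

    # post-config.sh
    #!/bin/bash
    # Reference the installation directory through CONFIG_INSTANCE_DIR, not its literal value
    cp /amq/extra/extra-library.jar "${CONFIG_INSTANCE_DIR}/lib/"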

The following procedure describes how to specify a custom Init Container image.

Prerequisites

Procedure

  1. Start configuring a Custom Resource (CR) instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment.

        oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true

    Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder. This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, “How the Operator chooses container images”.

    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  2. In the deploymentPlan section of the CR, add the initImage property.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            initImage:
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
  3. Set the value of the initImage property to the URL of your custom Init Container image.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.8.5
        deploymentPlan:
            size: 1
            image: placeholder
            initImage: <custom_init_container_image_url>
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true
    initImage
    Specifies the full URL for your custom Init Container image, which you must have added to a repository in a container registry.
  4. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.

Additional resources

4.6. Configuring Operator-based broker deployments for client connections

4.6.1. Configuring acceptors

To enable client connections to broker Pods in your OpenShift deployment, you define acceptors for your deployment. Acceptors define how a broker Pod accepts connections. You define acceptors in the main Custom Resource (CR) used for your broker deployment. When you create an acceptor, you specify information such as the messaging protocols to enable on the acceptor, and the port on the broker Pod to use for these protocols.

The following procedure shows how to define a new acceptor in the CR for your broker deployment.

Prerequisites

Procedure

  1. In the deploy/crs directory of the Operator archive that you downloaded and extracted during your initial installation, open the broker_activemqartemis_cr.yaml Custom Resource (CR) file.
  2. In the acceptors element, add a named acceptor. Add the protocols and port parameters. Set values to specify the messaging protocols to be used by the acceptor and the port on each broker Pod to expose for those protocols. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
    ...

    The configured acceptor exposes port 5672 to AMQP clients. The full set of values that you can specify for the protocols parameter is shown in the table.

    Protocol | Value
    Core Protocol | core
    AMQP | amqp
    OpenWire | openwire
    MQTT | mqtt
    STOMP | stomp
    All supported protocols | all

    Note
    • For each broker Pod in your deployment, the Operator also creates a default acceptor that uses port 61616. This default acceptor is required for broker clustering and has Core Protocol enabled.
    • By default, the AMQ Broker management console uses port 8161 on the broker Pod. Each broker Pod in your deployment has a dedicated Service that provides access to the console. For more information, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment.
  3. To use another protocol on the same acceptor, modify the protocols parameter. Specify a comma-separated list of protocols. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
    ...

    The configured acceptor now exposes port 5672 to AMQP and OpenWire clients.

  4. To specify the number of concurrent client connections that the acceptor allows, add the connectionsAllowed parameter and set a value. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        connectionsAllowed: 5
    ...
  5. By default, an acceptor is exposed only to clients in the same OpenShift cluster as the broker deployment. To also expose the acceptor to clients outside OpenShift, add the expose parameter and set the value to true.

    In addition, to enable secure connections to the acceptor from clients outside OpenShift, add the sslEnabled parameter and set the value to true.

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
        ...
    ...

    When you enable SSL (that is, Secure Sockets Layer) security on an acceptor (or connector), you can add related configuration, such as the items in the following list. An example acceptor that uses these parameters is shown after the list.

    • The secret name used to store authentication credentials in your OpenShift cluster. A secret is required when you enable SSL on the acceptor. For more information on generating this secret, see Section 4.6.2, “Securing broker-client connections”.
    • The Transport Layer Security (TLS) protocols to use for secure network communication. TLS is an updated, more secure version of SSL. You specify the TLS protocols in the enabledProtocols parameter.
    • Whether the acceptor uses two-way TLS, also known as mutual authentication, between the broker and the client. You specify this by setting the value of the needClientAuth parameter to true.
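    For example, the following sketch extends the my-acceptor configuration with these parameters. The secret name and the TLS protocol version shown are illustrative:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
        sslSecret: my-tls-secret
        enabledProtocols: TLSv1.2
        needClientAuth: true
    ...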

Additional resources

4.6.2. Securing broker-client connections

If you have enabled security on your acceptor or connector (that is, by setting sslEnabled to true), you must configure Transport Layer Security (TLS) to allow certificate-based authentication between the broker and clients. TLS is an updated, more secure version of SSL. There are two primary TLS configurations:

One-way TLS
Only the broker presents a certificate. The certificate is used by the client to authenticate the broker. This is the most common configuration.
Two-way TLS
Both the broker and the client present certificates. This is sometimes called mutual authentication.

The sections that follow describe how to configure the broker certificate for host name verification, one-way TLS, and two-way TLS.

For both one-way and two-way TLS, you complete the configuration by generating a secret that stores the credentials required for a successful TLS handshake between the broker and the client. This is the secret name that you must specify in the sslSecret parameter of your secured acceptor or connector. The secret must contain a Base64-encoded broker key store (both one-way and two-way TLS), a Base64-encoded broker trust store (two-way TLS only), and the corresponding passwords for these files, also Base64-encoded. The one-way and two-way TLS configuration procedures show how to generate this secret.

Note

If you do not explicitly specify a secret name in the sslSecret parameter of a secured acceptor or connector, the acceptor or connector assumes a default secret name. The default secret name uses the format <CustomResourceName>-<AcceptorName>-secret or <CustomResourceName>-<ConnectorName>-secret. For example, my-broker-deployment-my-acceptor-secret.

Even if the acceptor or connector assumes a default secret name, you must still generate this secret yourself. It is not created automatically.

4.6.2.1. Configuring a broker certificate for host name verification

Note

This section describes some requirements for the broker certificate that you must generate when configuring one-way or two-way TLS.

When a client tries to connect to a broker Pod in your deployment, the verifyHost option in the client connection URL determines whether the client compares the Common Name (CN) of the broker’s certificate to its host name, to verify that they match. The client performs this verification if you specify verifyHost=true or similar in the client connection URL.

You might omit this verification in rare cases where you have no concerns about the security of the connection, for example, if the brokers are deployed on an OpenShift cluster in an isolated network. Otherwise, for a secure connection, it is advisable for a client to perform this verification. In this case, correct configuration of the broker key store certificate is essential to ensure successful client connections.

In general, when a client is using host verification, the CN that you specify when generating the broker certificate must match the full host name for the Route on the broker Pod that the client is connecting to. For example, if you have a deployment with a single broker Pod, the CN might look like the following:

CN=my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain

To ensure that the CN can resolve to any broker Pod in a deployment with multiple brokers, you can specify an asterisk (*) wildcard character in place of the ordinal of the broker Pod. For example:

CN=my-broker-deployment-*-svc-rte-my-openshift-project.my-openshift-domain

The CN shown in the preceding example successfully resolves to any broker Pod in the my-broker-deployment deployment.

In addition, the Subject Alternative Name (SAN) that you specify when generating the broker certificate must individually list all broker Pods in the deployment, as a comma-separated list. For example:

"SAN=DNS:my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain,DNS:my-broker-deployment-1-svc-rte-my-openshift-project.my-openshift-domain,..."

4.6.2.2. Configuring one-way TLS

The procedure in this section shows how to configure one-way Transport Layer Security (TLS) to secure a broker-client connection.

In one-way TLS, only the broker presents a certificate. This certificate is used by the client to authenticate the broker.

Prerequisites

Procedure

  1. Generate a self-signed certificate for the broker key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks
  2. Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem
  3. On the client, create a client trust store that imports the broker certificate.

    $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem
  4. Log in to OpenShift Container Platform as an administrator. For example:

    $ oc login -u system:admin
  5. Switch to the project that contains your broker deployment. For example:

    $ oc project my-openshift-project
  6. Create a secret to store the TLS credentials. For example:

    $ oc create secret generic my-tls-secret \
    --from-file=broker.ks=~/broker.ks \
    --from-file=client.ts=~/broker.ks \
    --from-literal=keyStorePassword=<password> \
    --from-literal=trustStorePassword=<password>
    Note

    When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts. For one-way TLS between the broker and a client, a trust store is not actually required. However, to successfully generate the secret, you need to specify some valid store file as a value for client.ts. The preceding step provides a "dummy" value for client.ts by reusing the previously-generated broker key store file. This is sufficient to generate a secret with all of the credentials required for one-way TLS.

  7. Link the secret to the service account that you created when installing the Operator. For example:

    $ oc secrets link sa/amq-broker-operator secret/my-tls-secret
  8. Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        sslEnabled: true
        sslSecret: my-tls-secret
        expose: true
        connectionsAllowed: 5
    ...
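
After you apply the updated CR, you can optionally confirm that the secret exists and that a Route was created for the exposed acceptor. The following commands are one way to check; the secret name is the example value used in this procedure.

$ oc get secret my-tls-secret
$ oc get routes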

4.6.2.3. Configuring two-way TLS

The procedure in this section shows how to configure two-way Transport Layer Security (TLS) to secure a broker-client connection.

In two-way TLS, both the broker and the client present certificates. The broker and the client use these certificates to authenticate each other in a process sometimes called mutual authentication.

Prerequisites

Procedure

  1. Generate a self-signed certificate for the broker key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks
  2. Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -rfc -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem
  3. On the client, create a client trust store that imports the broker certificate.

    $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem
  4. On the client, generate a self-signed certificate for the client key store.

    $ keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks
  5. On the client, export the certificate from the client key store, so that it can be shared with the broker. Export the certificate in the Base64-encoded .pem format. For example:

    $ keytool -export -rfc -alias broker -keystore ~/client.ks -file ~/client_cert.pem
  6. Create a broker trust store that imports the client certificate.

    $ keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem
  7. Log in to OpenShift Container Platform as an administrator. For example:

    $ oc login -u system:admin
  8. Switch to the project that contains your broker deployment. For example:

    $ oc project my-openshift-project
  9. Create a secret to store the TLS credentials. For example:

    $ oc create secret generic my-tls-secret \
    --from-file=broker.ks=~/broker.ks \
    --from-file=client.ts=~/broker.ts \
    --from-literal=keyStorePassword=<password> \
    --from-literal=trustStorePassword=<password>
    Note

    When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts. For two-way TLS between the broker and a client, you must generate a secret that includes the broker trust store, because this holds the client certificate. Therefore, in the preceding step, the value that you specify for the client.ts key is actually the broker trust store file.

  10. Link the secret to the service account that you created when installing the Operator. For example:

    $ oc secrets link sa/amq-broker-operator secret/my-tls-secret
  11. Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp,openwire
        port: 5672
        sslEnabled: true
        sslSecret: my-tls-secret
        expose: true
        connectionsAllowed: 5
    ...
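
If client connections subsequently fail with TLS handshake errors, a useful first check is to confirm that each trust store contains the expected certificate, that is, that the broker trust store contains the client certificate and the client trust store contains the broker certificate. For example:

$ keytool -list -keystore ~/broker.ts
$ keytool -list -keystore ~/client.ts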

4.6.3. Networking Services in your broker deployments

On the Networking pane of the OpenShift Container Platform web console for your broker deployment, there are two running Services: a headless Service and a ping Service. The default name of the headless Service uses the format <Custom Resource name>-hdls-svc, for example, my-broker-deployment-hdls-svc. The default name of the ping Service uses the format <Custom Resource name>-ping-svc, for example, my-broker-deployment-ping-svc.

The headless Service provides access to ports 8161 and 61616 on each broker Pod. Port 8161 is used by the broker management console, and port 61616 is used for broker clustering. You can also use the headless Service to connect to a broker Pod from an internal client (that is, a client inside the same OpenShift cluster as the broker deployment).

The ping Service is used by the brokers for discovery, and enables brokers to form a cluster within the OpenShift environment. Internally, this Service exposes port 8888.
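
You can also list these Services from the command line by running oc get services in the project that contains your broker deployment. The headless Service appears with a CLUSTER-IP of None, which is how Kubernetes represents headless Services.

$ oc get services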

Additional resources

4.6.4. Connecting to the broker from internal and external clients

The examples in this section show how to connect to the broker from internal clients (that is, clients in the same OpenShift cluster as the broker deployment) and external clients (that is, clients outside the OpenShift cluster).

4.6.4.1. Connecting to the broker from internal clients

An internal client can connect to the broker Pod using the headless Service that is running for the broker deployment.

To connect to a broker Pod using the headless Service, specify an address in the format <Protocol>://<PodName>.<HeadlessServiceName>.<ProjectName>.svc.cluster.local. For example:

tcp://my-broker-deployment-0.my-broker-deployment-hdls-svc.my-openshift-project.svc.cluster.local

OpenShift DNS successfully resolves addresses in this format because the StatefulSets created by Operator-based broker deployments provide stable Pod names.
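
For example, if the AMQ Broker CLI is available to an internal client, you can quickly test the address by sending messages to the broker over port 61616. This is a sketch only; it assumes the admin credentials configured for your deployment.

$ /opt/amq-broker/bin/artemis producer --url tcp://my-broker-deployment-0.my-broker-deployment-hdls-svc.my-openshift-project.svc.cluster.local:61616 --user admin --password admin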

Additional resources

4.6.4.2. Connecting to the broker from external clients

When you expose an acceptor to external clients (that is, by setting the value of the expose parameter to true), a dedicated Service and Route are automatically created for each broker Pod in the deployment. To see the Routes configured on a given broker Pod, select the Pod in the OpenShift Container Platform web console and click the Routes tab.

An external client can connect to the broker by specifying the full host name of the Route created for the broker Pod. You can use a basic curl command to test external access to this full host name. For example:

$ curl https://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain

The full host name for the Route must resolve to the node that’s hosting the OpenShift router. The OpenShift router uses the host name to determine where to send the traffic inside the OpenShift internal network.

By default, the OpenShift router listens to port 80 for non-secured (that is, non-SSL) traffic and port 443 for secured (that is, SSL-encrypted) traffic. For an HTTP connection, the router automatically directs traffic to port 443 if you specify a secure connection URL (that is, https), or to port 80 if you specify a non-secure connection URL (that is, http).

For non-HTTP connections:

  • Clients must explicitly specify the port number (for example, port 443) as part of the connection URL.
  • For one-way TLS, the client must specify the path to its trust store and the corresponding password, as part of the connection URL.
  • For two-way TLS, the client must also specify the path to its key store and the corresponding password, as part of the connection URL.

Some example client connection URLs for supported messaging protocols are shown below.

External Core client, using one-way TLS

tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true \
&trustStorePath=~/client.ts&trustStorePassword=<password>

Note

The useTopologyForLoadBalancing key is explicitly set to false in the connection URL because an external Core client cannot use topology information returned by the broker. If this key is set to true or you do not specify a value, it results in a DEBUG log message.

External Core client, using two-way TLS

tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true \
&keyStorePath=~/client.ks&keyStorePassword=<password> \
&trustStorePath=~/client.ts&trustStorePassword=<password>

External OpenWire client, using one-way TLS

ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443

# Also, specify the following JVM flags
-Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword=<password>

External OpenWire client, using two-way TLS

ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443

# Also, specify the following JVM flags
-Djavax.net.ssl.keyStore=~/client.ks -Djavax.net.ssl.keyStorePassword=<password> \
-Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword=<password>

External AMQP client, using one-way TLS

amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true \
&transport.trustStoreLocation=~/client.ts&transport.trustStorePassword=<password>

External AMQP client, using two-way TLS

amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true \
&transport.keyStoreLocation=~/client.ks&transport.keyStorePassword=<password> \
&transport.trustStoreLocation=~/client.ts&transport.trustStorePassword=<password>

4.6.4.3. Connecting to the broker using a NodePort

As an alternative to using a Route, an OpenShift administrator can configure a NodePort to connect to a broker Pod from a client outside OpenShift. The NodePort should map to one of the protocol-specific ports specified by the acceptors configured for the broker.

By default, NodePorts are in the range 30000 to 32767, which means that a NodePort typically does not match the intended port on the broker Pod.

To connect from a client outside OpenShift to the broker via a NodePort, you specify a URL in the format <Protocol>://<OCPNodeIP>:<NodePortNumber>.
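
As a rough sketch only, a NodePort Service for the example deployment might look like the following. The Service name and the selector label are assumptions for illustration; check the labels that the Operator applies to your broker Pods with oc get pods --show-labels and adjust the selector to match.

apiVersion: v1
kind: Service
metadata:
  name: my-broker-nodeport   # hypothetical Service name
spec:
  type: NodePort
  selector:
    ActiveMQArtemis: my-broker-deployment   # assumed Pod label; verify on your deployment
  ports:
  - protocol: TCP
    port: 61616       # protocol-specific port defined by the broker acceptor
    targetPort: 61616
    nodePort: 30617   # must be within the NodePort range (30000 to 32767 by default)

A client outside OpenShift could then connect with a URL such as tcp://<OCPNodeIP>:30617.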

Additional resources

4.7. Configuring large message handling for AMQP messages

Clients might send large AMQP messages that can exceed the size of the broker’s internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, the broker stores the messages in a dedicated directory used for storing large message files.

For a broker deployment on OpenShift Container Platform, the large messages directory is /opt/<custom-resource-name>/data/large-messages on the Persistent Volume (PV) used by the broker for message storage. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory.

Important
  • To configure large message handling for AMQP messages, you must be using at least version 0.17 of the Operator (the latest version of the Operator for AMQ Broker 7.7). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.8, see Chapter 6, Upgrading an Operator-based broker deployment.
  • For Operator-based broker deployments in AMQ Broker 7.8, large message handling is available only for the AMQP protocol.

4.7.1. Configuring AMQP acceptors for large message handling

The following procedure shows how to configure an acceptor to handle an AMQP message larger than a specified size as a large message.

Prerequisites

Procedure

  1. Open the Custom Resource (CR) instance in which you previously defined an AMQP acceptor.

    1. Using the OpenShift command-line interface:

      $ oc edit -f <path/to/custom-resource-instance>.yaml
    2. Using the OpenShift Container Platform web console:

      1. In the left navigation menu, click Administration → Custom Resource Definitions.
      2. Click the ActiveMQArtemis CRD.
      3. Click the Instances tab.
      4. Locate the CR instance that corresponds to your project namespace.

    A previously-configured AMQP acceptor might resemble the following:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
    ...
  2. Specify the minimum size, in bytes, of an AMQP message that the broker handles as a large message. For example:

    spec:
    ...
      acceptors:
      - name: my-acceptor
        protocols: amqp
        port: 5672
        connectionsAllowed: 5
        expose: true
        sslEnabled: true
        amqpMinLargeMessageSize: 204800
        ...
    ...

    In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize, if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message.

    The broker stores the message in the large messages directory (/opt/<custom-resource-name>/data/large-messages, by default) on the persistent volume (PV) used by the broker for message storage.

    If you do not explicitly specify a value for the amqpMinLargeMessageSize property, the broker uses a default value of 102400 (that is, 100 kilobytes).

    If you set amqpMinLargeMessageSize to a value of -1, large message handling for AMQP messages is disabled.
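
After your AMQP client sends a message larger than the configured minimum, you can confirm that the broker stored the message as a file by listing the large messages directory on the broker Pod. For example, replacing <broker-pod-name> and <custom-resource-name> with the values from your deployment:

$ oc rsh <broker-pod-name> ls /opt/<custom-resource-name>/data/large-messages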

4.8. High availability and message migration

4.8.1. High availability

The term high availability refers to a system that can remain operational even when part of that system fails or is shut down. For AMQ Broker on OpenShift Container Platform, this means ensuring the integrity and availability of messaging data if a broker Pod fails, or shuts down due to intentional scaledown of your deployment.

To allow high availability for AMQ Broker on OpenShift Container Platform, you run multiple broker Pods in a broker cluster. Each broker Pod writes its message data to an available Persistent Volume (PV) that you have claimed for use with a Persistent Volume Claim (PVC). If a broker Pod fails or is shut down, the message data stored in the PV is migrated to another available broker Pod in the broker cluster. The other broker Pod stores the message data in its own PV.

Note

Message migration is available only for deployments based on the AMQ Broker Operator. Deployments based on application templates do not have a message migration capability.

The following figure shows a StatefulSet-based broker deployment. In this case, the two broker Pods in the broker cluster are still running.

(Figure: StatefulSet-based broker deployment with two running broker Pods)

When a broker Pod shuts down, the AMQ Broker Operator automatically starts a scaledown controller that performs the migration of messages to another broker Pod that is still running in the broker cluster. This message migration process is also known as Pod draining. The section that follows describes message migration.

4.8.2. Message migration

Message migration is how you ensure the integrity of messaging data when a broker in a clustered deployment shuts down due to failure or intentional scaledown of the deployment. Also known as Pod draining, this process refers to the removal and redistribution of messages from a broker Pod that has shut down.

Note
  • Message migration is available only for deployments based on the AMQ Broker Operator. Deployments based on application templates do not have a message migration capability.
  • The scaledown controller that performs message migration can operate only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects.
  • To use message migration, you must have a minimum of two brokers in your deployment. A broker deployment with two or more brokers is clustered by default.

For an Operator-based broker deployment, you enable message migration by setting messageMigration to true in the main broker Custom Resource for your deployment.
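
For example, in the main broker CR, the relevant part of the deploymentPlan section might look like the following sketch:

spec:
  deploymentPlan:
    size: 2
    persistenceEnabled: true
    messageMigration: true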

The message migration process follows these steps:

  1. When a broker Pod in the deployment shuts down due to failure or intentional scaledown of the deployment, the Operator automatically starts a scaledown controller to prepare for message migration. The scaledown controller runs in the same OpenShift project as the broker cluster.
  2. The scaledown controller registers itself and listens for Kubernetes events that are related to Persistent Volume Claims (PVCs) in the project.
  3. To check for Persistent Volumes (PVs) that have been orphaned, the scaledown controller looks at the ordinal on the volume claim. The controller compares the ordinal on the volume claim to that of the broker Pods that are still running in the StatefulSet (that is, the broker cluster) in the project.

    If the ordinal on the volume claim is higher than the ordinal on any of the broker Pods still running in the broker cluster, the scaledown controller determines that the broker Pod at that ordinal has been shut down and that messaging data must be migrated to another broker Pod.

  4. The scaledown controller starts a drainer Pod. The drainer Pod runs the broker and executes the message migration. That is, the drainer Pod identifies an alternative broker Pod to which the orphaned messages can be migrated.

    Note

    There must be at least one broker Pod still running in your deployment for message migration to occur.

The following figure illustrates how the scaledown controller (also known as a drain controller) migrates messages to a running broker Pod.

(Figure: the scaledown controller migrating messages to a running broker Pod)

After the messages are successfully migrated to an operational broker Pod, the drainer Pod shuts down and the scaledown controller removes the PVC for the orphaned PV. The PV is returned to a "Released" state.

Note

If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which messaging data can be migrated. However, if you scale a deployment down to zero and then back up to a size that is smaller than the original deployment, drainer Pods are started for the brokers that remain shut down.

Additional resources

4.8.3. Migrating messages upon scaledown

To migrate messages upon scaledown of your broker deployment, use the main broker Custom Resource (CR) to enable message migration. The AMQ Broker Operator automatically runs a dedicated scaledown controller to execute message migration when you scale down a clustered broker deployment.

With message migration enabled, the scaledown controller within the Operator detects shutdown of a broker Pod and starts a drainer Pod to execute message migration. The drainer Pod connects to one of the other live broker Pods in the cluster and migrates messages to that live broker Pod. After migration is complete, the scaledown controller shuts down.

Note
  • A scaledown controller operates only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects.
  • If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which the messaging data can be migrated. However, if you scale a deployment down to zero brokers and then back up to only some of the brokers that were in the original deployment, drainer Pods are started for the brokers that remain shut down.

The following example procedure shows the behavior of the scaledown controller.

Prerequisites

Procedure

  1. In the deploy/crs directory of the Operator repository that you originally downloaded and extracted, open the main broker CR, broker_activemqartemis_cr.yaml.
  2. In the main broker CR, set messageMigration and persistenceEnabled to true.

    These settings mean that when you later scale down the size of your clustered broker deployment, the Operator automatically starts a scaledown controller and migrates messages to a broker Pod that is still running.

  3. In your existing broker deployment, verify which Pods are running.

    $ oc get pods

    You see output that looks like the following.

    activemq-artemis-operator-8566d9bf58-9g25l   1/1   Running   0   3m38s
    ex-aao-ss-0                                  1/1   Running   0   112s
    ex-aao-ss-1                                  1/1   Running   0   8s

    The preceding output shows that there are three Pods running: one for the broker Operator itself, and a separate Pod for each broker in the deployment.

  4. Log in to each Pod and send some messages to each broker.

    1. Supposing that Pod ex-aao-ss-0 has a cluster IP address of 172.17.0.6, run the following command:

      $ /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin
    2. Supposing that Pod ex-aao-ss-1 has a cluster IP address of 172.17.0.7, run the following command:

      $ /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin

      The preceding commands create a queue called TEST on each broker and add 1000 messages to each queue.

  5. Scale the cluster down from two brokers to one.

    1. Open the main broker CR, broker_activemqartemis_cr.yaml.
    2. In the CR, set deploymentPlan.size to 1.
    3. At the command line, apply the change:

      $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml

      You see that the Pod ex-aao-ss-1 starts to shut down. The scaledown controller starts a new drainer Pod of the same name. This drainer Pod also shuts down after it migrates all messages from broker Pod ex-aao-ss-1 to the other broker Pod in the cluster, ex-aao-ss-0.

  6. When the drainer Pod is shut down, check the message count on the TEST queue of broker Pod ex-aao-ss-0. You see that the number of messages in the queue is 2000, indicating that the drainer Pod successfully migrated 1000 messages from the broker Pod that shut down.
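
    One way to check the message count from the command line is the artemis queue stat command, run against the remaining broker. The following sketch reuses the credentials and the cluster IP address from the earlier steps:

    $ /opt/amq-broker/bin/artemis queue stat --url tcp://172.17.0.6:61616 --user admin --password admin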