Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator

3.1. Prerequisites

3.2. Installing the Operator using the CLI

Note

Each Operator release requires that you download the latest AMQ Broker 7.9.4 Operator Installation and Example Files as described below.

The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Operator for AMQ Broker 7.9 in a given OpenShift project. In subsequent procedures, you use this Operator to deploy some broker instances.

3.2.1. Getting the Operator code

This procedure shows how to access and prepare the code you need to install the latest version of the Operator for AMQ Broker 7.9.

Procedure

  1. In your web browser, navigate to the Software Downloads page for AMQ Broker 7.9.4 releases.
  2. Ensure that the value of the Version drop-down list is set to 7.9.4 and the Releases tab is selected.
  3. Next to AMQ Broker 7.9.4 Operator Installation and Example Files, click Download.

    Download of the amq-broker-operator-7.9.4-ocp-install-examples.zip compressed archive automatically begins.

  4. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator.

    $ mkdir -p ~/broker/operator
    $ mv amq-broker-operator-7.9.4-ocp-install-examples.zip ~/broker/operator
  5. In your chosen installation directory, extract the contents of the archive. For example:

    $ cd ~/broker/operator
    $ unzip amq-broker-operator-7.9.4-ocp-install-examples.zip
  6. Switch to the directory that was created when you extracted the archive. For example:

    $ cd amq-broker-operator-7.9.4-ocp-install-examples
  7. Log in to OpenShift Container Platform as a cluster administrator. For example:

    $ oc login -u system:admin
  8. Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one.

    1. Create a new project:

      $ oc new-project <project_name>
    2. Or, switch to an existing project:

      $ oc project <project_name>
  9. Specify a service account to use with the Operator.

    1. In the deploy directory of the Operator archive that you extracted, open the service_account.yaml file.
    2. Ensure that the kind element is set to ServiceAccount.
    3. In the metadata section, assign a custom name to the service account, or use the default name. The default name is amq-broker-operator.
    4. Create the service account in your project.

      $ oc create -f deploy/service_account.yaml
  10. Specify a role name for the Operator.

    1. Open the role.yaml file. This file specifies the resources that the Operator can use and modify.
    2. Ensure that the kind element is set to Role.
    3. In the metadata section, assign a custom name to the role, or use the default name. The default name is amq-broker-operator.
    4. Create the role in your project.

      $ oc create -f deploy/role.yaml
  11. Specify a role binding for the Operator. The role binding binds the previously-created service account to the Operator role, based on the names you specified.

    1. Open the role_binding.yaml file. Ensure that the name values for ServiceAccount and Role match those specified in the service_account.yaml and role.yaml files. For example:

      metadata:
        name: amq-broker-operator
      subjects:
        - kind: ServiceAccount
          name: amq-broker-operator
      roleRef:
        kind: Role
        name: amq-broker-operator
    2. Create the role binding in your project.

      $ oc create -f deploy/role_binding.yaml
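
For reference, the following is a minimal sketch of how the ServiceAccount and RoleBinding manifests fit together when you keep the default name amq-broker-operator. It only illustrates how the names must match; the service_account.yaml, role.yaml, and role_binding.yaml files in the deploy directory of the archive are the authoritative versions.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: amq-broker-operator      # must match the name referenced in the role binding
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: amq-broker-operator
    subjects:
      - kind: ServiceAccount
        name: amq-broker-operator    # binds the service account above
    roleRef:
      kind: Role
      name: amq-broker-operator      # the Role created from role.yaml
      apiGroup: rbac.authorization.k8s.io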

In the procedure that follows, you deploy the Operator in your project.

3.2.2. Deploying the Operator using the CLI

The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Broker 7.9 in your OpenShift project.

Prerequisites

  • You must have already prepared your OpenShift project for the Operator deployment. See Section 3.2.1, “Getting the Operator code”.
  • Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication.
  • If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that they are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your Custom Resource), you need to have two PVs available. By default, each broker instance requires storage of 2 GiB.

    If you specify persistenceEnabled=false in your Custom Resource, the deployed brokers use ephemeral storage. Ephemeral storage means that every time you restart the broker Pods, any existing data is lost.

    For more information about provisioning persistent storage, see the OpenShift Container Platform documentation.
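
    If you need to provision PVs manually, the following is a minimal sketch of a PersistentVolume that satisfies the default 2 GiB requirement for a single broker instance. The name and the hostPath backend are hypothetical; adapt them to the storage types available in your cluster.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: broker-pv-0                      # hypothetical name
      spec:
        capacity:
          storage: 2Gi                         # matches the default per-broker requirement
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        hostPath:
          path: /data/broker-pv-0              # hypothetical local path; use your own storage backend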

Procedure

  1. In the OpenShift command-line interface (CLI), log in to OpenShift as a cluster administrator. For example:

    $ oc login -u system:admin
  2. Switch to the project that you previously prepared for the Operator deployment. For example:

    $ oc project <project_name>
  3. Switch to the directory that was created when you previously extracted the Operator installation archive. For example:

    $ cd ~/broker/operator/amq-broker-operator-7.9.4-ocp-install-examples
  4. Deploy the CRDs that are included with the Operator. You must install the CRDs in your OpenShift cluster before deploying and starting the Operator.

    1. Deploy the main broker CRD.

      $ oc create -f deploy/crds/broker_activemqartemis_crd.yaml
    2. Deploy the address CRD.

      $ oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml
    3. Deploy the scaledown controller CRD.

      $ oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml
  5. Link the pull secret associated with the account used for authentication in the Red Hat Ecosystem Catalog with the default, deployer, and builder service accounts for your OpenShift project.

    $ oc secrets link --for=pull default <secret_name>
    $ oc secrets link --for=pull deployer <secret_name>
    $ oc secrets link --for=pull builder <secret_name>
  6. In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Ensure that the value of the spec.containers.image property corresponds to version 7.9.4-opr-3 of the Operator, as shown below.

    spec:
        template:
            spec:
                containers:
                    #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.9
                    image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:4045170b583f76cdfbe123fd794ed4d175de0c2a76bdb7bf8762b3e35f0eb5b8
    Note

    In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign (#) symbol, denotes that the SHA value corresponds to a specific container image tag.

  7. Optionally, edit the WATCH_NAMESPACE section of the operator.yaml file to specify which namespaces the Operator watches.

    • To deploy the Operator to watch the active namespace, do not edit the section:

      - name: WATCH_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    • To deploy the Operator to watch all namespaces:

      - name: WATCH_NAMESPACE
        value: '*'
    • To deploy the Operator to watch multiple namespaces, for example namespace1 and namespace2:

      - name: WATCH_NAMESPACE
        value: 'namespace1,namespace2'
    Note

    If you previously deployed brokers using an earlier version of the Operator, and you want to deploy the Operator to watch multiple namespaces, see Before you upgrade.

  8. Deploy the Operator.

    $ oc create -f deploy/operator.yaml

    In your OpenShift project, the Operator starts in a new Pod.

    In the OpenShift Container Platform web console, the information on the Events tab of the Operator Pod confirms that OpenShift has deployed the Operator image that you specified, has assigned a new container to a node in your OpenShift cluster, and has started the new container.

    In addition, if you click the Logs tab within the Pod, the output should include lines resembling the following:

    ...
    {"level":"info","ts":1553619035.8302743,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"activemqartemisaddress-controller"}
    {"level":"info","ts":1553619035.830541,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"activemqartemis-controller"}
    {"level":"info","ts":1553619035.9306898,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"activemqartemisaddress-controller","worker count":1}
    {"level":"info","ts":1553619035.9311671,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"activemqartemis-controller","worker count":1}

    The preceding output confirms that the newly-deployed Operator is communicating with Kubernetes, that the controllers for the broker and addressing are running, and that these controllers have started some workers.
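
    If you prefer to verify the deployment from the CLI instead of the web console, you can check the Operator Pod and its logs directly. The following is a minimal sketch; the Pod name shown is hypothetical, and the deployment name assumes the default amq-broker-operator:

    $ oc get pods
    NAME                                   READY   STATUS    RESTARTS   AGE
    amq-broker-operator-54d996c5f-7zlvx    1/1     Running   0          2m

    $ oc logs deployment/amq-broker-operator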

Note

Deploy only a single instance of the AMQ Broker Operator in a given OpenShift project. Do not set the spec.replicas property of your Operator deployment to a value greater than 1, and do not deploy the Operator more than once in the same project.

3.3. Installing the Operator using OperatorHub

3.3.1. Overview of the Operator Lifecycle Manager

In OpenShift Container Platform 4.5 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way.

The OLM runs by default in OpenShift Container Platform 4.5 and later, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators using the OLM. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments.

When you have deployed the Operator, you can use Custom Resource (CR) instances to create broker deployments such as standalone and clustered brokers.

3.3.2. Deploying the Operator from OperatorHub

This procedure shows how to use OperatorHub to deploy the latest version of the Operator for AMQ Broker to a specified OpenShift project.

Important

Deploying the Operator using OperatorHub requires cluster administrator privileges.

Prerequisites

  • The Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator must be available in OperatorHub.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the left navigation menu, click Operators → OperatorHub.
  3. On the Project drop-down menu at the top of the OperatorHub page, select the project in which you want to deploy the Operator.
  4. On the OperatorHub page, use the Filter by keyword box to find the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator.

    Note

    In OperatorHub, you might find more than one Operator that includes AMQ Broker in its name. Ensure that you click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. When you click this Operator, review the information pane that opens. For AMQ Broker 7.9, the latest minor version tag of this Operator is 7.9.4-opr-3.

  5. Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. On the dialog box that appears, click Install.
  6. On the Install Operator page:

    1. Under Update Channel, specify the channel used to track and receive updates for the Operator by selecting 7.x from the following radio buttons:

      • 7.x - This channel will update to 7.10 when available.
      • 7.8.x - This is the Long Term Support (LTS) channel.
    2. Under Installation Mode, choose which namespaces the Operator watches:

      • A specific namespace on the cluster - The Operator is installed in that namespace and only monitors that namespace for CR changes.
      • All namespaces - The Operator monitors all namespaces for CR changes.
      Note

      If you previously deployed brokers using an earlier version of the Operator, and you want to deploy the Operator to watch multiple namespaces, see Before you upgrade.

  7. From the Installed Namespace drop-down menu, select the project in which you want to install the Operator.
  8. Under Approval Strategy, ensure that the radio button entitled Automatic is selected. This option specifies that updates to the Operator do not require manual approval for installation to take place.
  9. Click Install.

When the Operator installation is complete, the Installed Operators page opens. You should see that the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator is installed in the project namespace that you specified.

3.4. Creating Operator-based broker deployments

3.4.1. Deploying a basic broker instance

The following procedure shows how to use a Custom Resource (CR) instance to create a basic broker deployment.

When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR instance to deploy a basic broker in your project.

Prerequisites

  • You must have already installed the AMQ Broker Operator, either using the CLI (see Section 3.2, “Installing the Operator using the CLI”) or using OperatorHub (see Section 3.3, “Installing the Operator using OperatorHub”).

Procedure

  1. Start configuring a Custom Resource (CR) instance for the broker deployment.

    1. Using the OpenShift command-line interface:

      1. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment.

        $ oc login -u <user> -p <password> --server=<host:port>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
    2. Using the OpenShift Container Platform web console:

      1. Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment.
      2. Start a new CR instance based on the main broker CRD. In the left pane, click Administration → Custom Resource Definitions.
      3. Click the ActiveMQArtemis CRD.
      4. Click the Instances tab.
      5. Click Create ActiveMQArtemis.

        Within the console, a YAML editor opens, enabling you to configure a CR instance.

    For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file.

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.9.4
        deploymentPlan:
            size: 1
            image: placeholder
            requireLogin: false
            persistenceEnabled: true
            journalType: nio
            messageMigration: true

    Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder. This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, “How the Operator chooses container images”.

    Note

    The broker_activemqartemis_cr.yaml sample CR uses a naming convention of ex-aao. This naming convention denotes that the CR is an example resource for the AMQ Broker Operator. AMQ Broker is based on the ActiveMQ Artemis project. When you deploy this sample CR, the resulting StatefulSet uses the name ex-aao-ss. Furthermore, broker Pods in the deployment are directly based on the StatefulSet name, for example, ex-aao-ss-0, ex-aao-ss-1, and so on. The application name in the CR appears in the deployment as a label on the StatefulSet. You might use this label in a Pod selector, for example.

  2. The size property specifies the number of brokers to deploy. A value of 2 or greater specifies a clustered broker deployment. However, to deploy a single broker instance, ensure that the value is set to 1.
  3. Deploy the CR instance.

    1. Using the OpenShift command-line interface:

      1. Save the CR file.
      2. Switch to the project in which you are creating the broker deployment.

        $ oc project <project_name>
      3. Create the CR instance.

        $ oc create -f <path/to/custom_resource_instance>.yaml
    2. Using the OpenShift web console:

      1. When you have finished configuring the CR, click Create.
  4. In the OpenShift Container Platform web console, click Workloads → StatefulSets. You see a new StatefulSet called ex-aao-ss.

    1. Click the ex-aao-ss StatefulSet. You see that there is one Pod, corresponding to the single broker that you defined in the CR.
    2. Within the StatefulSet, click the Pods tab. Click the ex-aao-ss Pod. On the Events tab of the running Pod, you see that the broker container has started. The Logs tab shows that the broker itself is running.
  5. To test that the broker is running normally, access a shell on the broker Pod to send some test messages.

    1. Using the OpenShift Container Platform web console:

      1. Click Workloads → Pods.
      2. Click the ex-aao-ss Pod.
      3. Click the Terminal tab.
    2. Using the OpenShift command-line interface:

      1. Get the Pod names and internal IP addresses for your project.

        $ oc get pods -o wide
        
        NAME                          STATUS   IP
        amq-broker-operator-54d996c   Running  10.129.2.14
        ex-aao-ss-0                   Running  10.129.2.15
      2. Access the shell for the broker Pod.

        $ oc rsh ex-aao-ss-0
  6. From the shell, use the artemis command to send some test messages. Specify the internal IP address of the broker Pod in the URL. For example:

    sh-4.2$ ./amq-broker/bin/artemis producer --url tcp://10.129.2.15:61616 --destination queue://demoQueue

    The preceding command automatically creates a queue called demoQueue on the broker and sends a default quantity of 1000 messages to the queue.

    You should see output that resembles the following:

    Connection brokerURL = tcp://10.129.2.15:61616
    Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time ...
    
    Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 3 s
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 3492 milli seconds
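
    Optionally, to verify that the messages arrived, you can consume them from the same shell by using the artemis consumer command. This is an illustrative follow-up that assumes the same broker Pod IP address and queue name:

    sh-4.2$ ./amq-broker/bin/artemis consumer --url tcp://10.129.2.15:61616 --destination queue://demoQueue

    The consumer receives the messages from demoQueue and reports how many it has consumed.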

3.4.2. Deploying clustered brokers

If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing.

The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on demand load balancing, meaning that brokers will forward messages only to other brokers that have matching consumers.

Prerequisites

  • A basic broker deployment that you created as described in Section 3.4.1, “Deploying a basic broker instance”.

Procedure

  1. Open the CR file that you used for your basic broker deployment.
  2. For a clustered deployment, ensure that the value of deploymentPlan.size is 2 or greater. For example:

    apiVersion: broker.amq.io/v2alpha4
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
        version: 7.9.4
        deploymentPlan:
            size: 4
            image: placeholder
            ...
    Note

    In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment.

  3. Save the modified CR file.
  4. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you previously created your basic broker deployment.

    $ oc login -u <user> -p <password> --server=<host:port>
  5. Switch to the project in which you previously created your basic broker deployment.

    $ oc project <project_name>
  6. At the command line, apply the change:

    $ oc apply -f <path/to/custom_resource_instance>.yaml

    In the OpenShift Container Platform web console, additional broker Pods start in your project, according to the number specified in your CR. By default, the brokers running in the project are clustered.

  7. Open the Logs tab of each Pod. The logs show that OpenShift has established a cluster connection bridge on each broker. Specifically, the log output includes a line like the following:

    targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88

3.4.3. Applying Custom Resource changes to running broker deployments

The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments:

  • You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size. An illustrative command sequence is shown after this list.
  • The value of the deploymentPlan.size attribute in your CR overrides any change you make to the size of your broker deployment via the oc scale command. For example, suppose you use oc scale to change the size of a deployment from three brokers to two, but the value of deploymentPlan.size in your CR is still 3. In this case, OpenShift initially scales the deployment down to two brokers. However, when the scaledown operation is complete, the Operator restores the deployment to three brokers, as specified in the CR.
  • As described in Section 3.2.2, “Deploying the Operator using the CLI”, if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision Persistent Volumes (PVs) for the AMQ Broker Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, AMQ Broker Operator does not release Persistent Volume Claims (PVCs) for any broker Pods that are still in the deployment when you remove it. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Release a persistent volume in the OpenShift documentation.
  • In AMQ Broker 7.9, if you want to configure the following items, you must add the appropriate configuration to the main CR instance before deploying the CR for the first time.

  • During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one. Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker.
  • All CR changes – apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console – cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time.
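
The following is an illustrative command sequence for changing the persistenceEnabled attribute as described above. It assumes the sample CR file broker_activemqartemis_cr.yaml; adapt the file name to your own deployment.

    # 1. Scale the deployment down to zero brokers: set deploymentPlan.size to 0 in the CR file, then apply it.
    $ oc apply -f broker_activemqartemis_cr.yaml

    # 2. Delete the existing CR.
    $ oc delete -f broker_activemqartemis_cr.yaml

    # 3. Edit persistenceEnabled in the CR file, set deploymentPlan.size back to the required value, and redeploy.
    $ oc create -f broker_activemqartemis_cr.yaml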