
Chapter 7. Template-based broker deployment examples


Prerequisites

  • These procedures assume an OpenShift Container Platform instance similar to that created in OpenShift Container Platform Getting Started.
  • In the AMQ Broker application templates, the values of the AMQ_USER, AMQ_PASSWORD, AMQ_CLUSTER_USER, AMQ_CLUSTER_PASSWORD, AMQ_TRUSTSTORE_PASSWORD, and AMQ_KEYSTORE_PASSWORD environment variables are stored in a secret. To learn more about using and modifying these environment variables when you deploy a template in any of the tutorials that follow, see About sensitive credentials.
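
For example, to inspect or update these stored values after you deploy a template, you can work with the underlying secret directly. This is a minimal sketch using the oc CLI; the secret name varies by template, so list the secrets in your project first:

    $ oc get secrets                         # find the secret that the template created
    $ oc get secret <secret-name> -o yaml    # view the stored (base64-encoded) values
    $ oc edit secret <secret-name>           # modify the stored credentials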

The following procedures show how to use application templates to create various broker deployments.

7.1. Deploying a basic broker with SSL

Deploy a basic broker that is ephemeral and supports SSL.

7.1.1. Deploying the image and template

Procedure

  1. Navigate to the OpenShift web console and log in.
  2. Select the amq-demo project space.
  3. Click Add to Project > Browse Catalog to list all of the default image streams and templates.
  4. Use the Filter search bar to limit the list to those that match amq. You might need to click See all to show the desired application template.
  5. Select the amq-broker-77-ssl template, which is labeled Red Hat AMQ Broker 7.7 (Ephemeral, with SSL).
  6. Set the following values in the configuration and click Create.

    Table 7.1. Example template

    Environment variable    | Display Name          | Value                            | Description
    AMQ_PROTOCOL            | AMQ Protocols         | openwire,amqp,stomp,mqtt,hornetq | The protocols to be accepted by the broker
    AMQ_QUEUES              | Queues                | demoQueue                        | Creates an anycast queue called demoQueue
    AMQ_ADDRESSES           | Addresses             | demoTopic                        | Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type.
    AMQ_USER                | AMQ Username          | amq-demo-user                    | The username the client uses
    AMQ_PASSWORD            | AMQ Password          | password                         | The password the client uses with the username
    AMQ_TRUSTSTORE          | Trust Store Filename  | broker.ts                        | The SSL truststore file name
    AMQ_TRUSTSTORE_PASSWORD | Truststore Password   | password                         | The password used when creating the Truststore
    AMQ_KEYSTORE            | AMQ Keystore Filename | broker.ks                        | The SSL keystore file name
    AMQ_KEYSTORE_PASSWORD   | AMQ Keystore Password | password                         | The password used when creating the Keystore
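
If you prefer the command line, you can deploy the same template with equivalent parameters by using oc new-app. This is a minimal sketch with the values above; oc new-app searches your current project and the shared openshift namespace for the named template:

    $ oc new-app --template=amq-broker-77-ssl \
        -p AMQ_PROTOCOL=openwire,amqp,stomp,mqtt,hornetq \
        -p AMQ_QUEUES=demoQueue \
        -p AMQ_ADDRESSES=demoTopic \
        -p AMQ_USER=amq-demo-user \
        -p AMQ_PASSWORD=password \
        -p AMQ_TRUSTSTORE=broker.ts \
        -p AMQ_TRUSTSTORE_PASSWORD=password \
        -p AMQ_KEYSTORE=broker.ks \
        -p AMQ_KEYSTORE_PASSWORD=password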

7.1.2. Deploying the application

After creating the application, deploy it to create a Pod and start the broker.

Procedure

  1. Click Deployments in the OpenShift Container Platform web console.
  2. Click the broker-amq deployment.
  3. Click Deploy to deploy the application.
  4. Click the broker Pod and then click the Logs tab to verify the state of the broker.

    If the broker logs have not loaded, and the Pod status shows ErrImagePull or ImagePullBackOff, your deployment configuration was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your deployment configuration to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the broker. To do this, complete steps similar to those in Deploying and starting the broker application.
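
For example, a minimal sketch of this check-and-fix flow with the oc CLI, assuming the deployment configuration is named broker-amq as in this example:

    $ oc get pods                          # look for ErrImagePull or ImagePullBackOff
    $ oc describe pod <broker-pod-name>    # the Events section shows the failing image reference
    $ oc edit dc/broker-amq                # correct the image name and reference your image pull secret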

7.1.3. Creating a Route

Create a Route for the broker so that clients outside of OpenShift Container Platform can connect using SSL. By default, the secured broker protocols are available through the 61617/TCP port. In addition, there are SSL and non-SSL ports exposed on the broker Pod for each messaging protocol that the broker supports. However, external clients cannot connect directly to these ports on the broker. Instead, external clients connect to OpenShift via the OpenShift router, which determines how to forward traffic to the appropriate port on the broker Pod.

Note

If you scale your deployment up to multiple brokers in a cluster, you must manually create a Service and a Route for each broker, and then use each Service-and-Route combination to direct a given client to a given broker, or broker list. For an example of configuring multiple Services and Routes to connect clustered brokers to their own instances of the AMQ Broker management console, see Creating Routes for the AMQ Broker management console.

Prerequisites

  • Before creating an SSL Route, you should understand how external clients use this Route to connect to the broker. For more information, see Creating an SSL Route.

Procedure

  1. Click Services > broker-amq-tcp-ssl.
  2. Click Actions > Create a route.
  3. To display the TLS parameters, select the Secure route check box.
  4. From the TLS Termination drop-down menu, choose Passthrough. This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it.
  5. To view the Route, click Routes. For example:

    https://broker-amq-tcp-amq-demo.router.default.svc.cluster.local

This hostname will be used by external clients to connect to the broker using SSL with SNI.
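
To check the TLS handshake before configuring a client, you can connect through the Route with a tool that sends the hostname via SNI. A minimal sketch using openssl, assuming the router's standard TLS port of 443:

    $ openssl s_client -connect broker-amq-tcp-amq-demo.router.default.svc.cluster.local:443 \
        -servername broker-amq-tcp-amq-demo.router.default.svc.cluster.local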

Additional resources

  • For more information about creating SSL Routes, see Creating an SSL Route.
  • For more information on Routes in the OpenShift Container Platform, see Routes.

7.2. Deploying a basic broker with persistence and SSL

Deploy a persistent broker that supports SSL. When a broker needs persistence, the broker is deployed as a StatefulSet and stores messaging data on a persistent volume associated with the broker Pod via a persistent volume claim. When a broker Pod is created, it uses storage that remains in the event that you shut down the Pod, or if the Pod shuts down unexpectedly. This configuration means that messages are not lost, as they would be with a standard deployment.
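
Under the hood, the StatefulSet requests this storage through a volume claim template. The following is an illustrative sketch of such a section, not the exact content of the AMQ Broker template; the claim name is hypothetical:

    volumeClaimTemplates:
    - metadata:
        name: broker-amq-pvol
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi    # corresponds to the VOLUME_CAPACITY template parameter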

7.2.1. Deploying the image and template

Procedure

  1. Navigate to the OpenShift web console and log in.
  2. Select the amq-demo project space.
  3. Click Add to Project > Browse Catalog to list all of the default image streams and templates.
  4. Use the Filter search bar to limit the list to those that match amq. You might need to click See all to show the desired application template.
  5. Select the amq-broker-77-persistence-ssl template, which is labeled Red Hat AMQ Broker 7.7 (Persistence, with SSL).
  6. Set the following values in the configuration and click Create.

    Table 7.2. Example template

    Environment variable    | Display Name          | Value                            | Description
    AMQ_PROTOCOL            | AMQ Protocols         | openwire,amqp,stomp,mqtt,hornetq | The protocols to be accepted by the broker
    AMQ_QUEUES              | Queues                | demoQueue                        | Creates an anycast queue called demoQueue
    AMQ_ADDRESSES           | Addresses             | demoTopic                        | Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type.
    VOLUME_CAPACITY         | AMQ Volume Size       | 1Gi                              | The persistent volume size created for the journal
    AMQ_USER                | AMQ Username          | amq-demo-user                    | The username the client uses
    AMQ_PASSWORD            | AMQ Password          | password                         | The password the client uses with the username
    AMQ_TRUSTSTORE          | Trust Store Filename  | broker.ts                        | The SSL truststore file name
    AMQ_TRUSTSTORE_PASSWORD | Truststore Password   | password                         | The password used when creating the Truststore
    AMQ_KEYSTORE            | AMQ Keystore Filename | broker.ks                        | The SSL keystore file name
    AMQ_KEYSTORE_PASSWORD   | AMQ Keystore Password | password                         | The password used when creating the Keystore

7.2.2. Deploying the application

After creating the application, deploy it to create a Pod and start the broker.

Procedure

  1. Click Stateful Sets in the OpenShift Container Platform web console.
  2. Click the broker-amq deployment.
  3. Click Deploy to deploy the application.
  4. Click the broker Pod and then click the Logs tab to verify the state of the broker. You should see the queue created via the template.

    If the broker logs have not loaded, and the Pod status shows ErrImagePull or ImagePullBackOff, your configuration was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your deployment configuration to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the broker. To do this, complete steps similar to those in Deploying and starting the broker application.

  5. Click the Terminal tab to access a shell where you can use the CLI to send some messages.

    sh-4.2$ ./broker/bin/artemis producer --destination queue://demoQueue
    Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time ...
    
    Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 4 s
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 4584 milli seconds
    
    sh-4.2$ ./broker/bin/artemis consumer  --destination queue://demoQueue
    Consumer:: filter = null
    Consumer ActiveMQQueue[demoQueue], thread=0 wait until 1000 messages are consumed
    Received 1000
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 1000 messages
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished

    Alternatively, use the OpenShift client to access the shell using the Pod name, as shown in the following example.

    // Get the Pod names and internal IP Addresses
    oc get pods -o wide
    
    // Access a broker Pod by name
    oc rsh <broker-pod-name>
  6. Now scale down the broker using the oc command.

    $ oc scale statefulset broker-amq --replicas=0
    statefulset "broker-amq" scaled

    You can use the console to check that the Pod count is 0.

  7. Now scale the broker back up to 1.

    $ oc scale statefulset broker-amq --replicas=1
    statefulset "broker-amq" scaled
  8. Consume the messages again by using the terminal. For example:

    sh-4.2$ broker/bin/artemis consumer --destination queue://demoQueue
    Consumer:: filter = null
    Consumer ActiveMQQueue[demoQueue], thread=0 wait until 1000 messages are consumed
    Received 1000
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 1000 messages
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished

Additional resources

  • For more information on managing stateful applications, see StatefulSets (external).

7.2.3. Creating a Route

Create a Route for the broker so that clients outside of OpenShift Container Platform can connect using SSL. By default, the broker protocols are available through the 61617/TCP port.

Note

If you scale your deployment up to multiple brokers in a cluster, you must manually create a Service and a Route for each broker, and then use each Service-and-Route combination to direct a given client to a given broker, or broker list. For an example of configuring multiple Services and Routes to connect clustered brokers to their own instances of the AMQ Broker management console, see Creating Routes for the AMQ Broker management console.

Prerequisites

  • Before creating an SSL Route, you should understand how external clients use this Route to connect to the broker. For more information, see Creating an SSL Route.

Procedure

  1. Click Services > broker-amq-tcp-ssl.
  2. Click Actions > Create a route.
  3. To display the TLS parameters, select the Secure route check box.
  4. From the TLS Termination drop-down menu, choose Passthrough. This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it.
  5. To view the Route, click Routes. For example:

    https://broker-amq-tcp-amq-demo.router.default.svc.cluster.local

This hostname will be used by external clients to connect to the broker using SSL with SNI.

Additional resources

  • For more information on Routes in the OpenShift Container Platform, see Routes.

7.3. Deploying a set of clustered brokers

Deploy a clustered set of brokers where each broker runs in its own Pod.

7.3.1. Distributing messages

Message distribution is configured to use ON_DEMAND. This means that when messages arrive at a clustered broker, the messages are distributed in a round-robin fashion to any broker that has consumers.

This message distribution policy safeguards against messages getting stuck on a specific broker while a consumer, connected either directly or through the OpenShift router, is connected to a different broker.

The redistribution delay is zero by default. If a message is on a queue that has no consumers, it will be redistributed to another broker.

Note

When redistribution is enabled, messages can be delivered out of order.
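
For reference, this behavior maps to the redistribution-delay address setting in the broker's broker.xml. A minimal, illustrative sketch of the relevant element:

    <address-settings>
       <address-setting match="#">
          <redistribution-delay>0</redistribution-delay>
       </address-setting>
    </address-settings>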

7.3.2. Deploying the image and template

Procedure

  1. Navigate to the OpenShift web console and log in.
  2. Select the amq-demo project space.
  3. Click Add to Project > Browse Catalog to list all of the default image streams and templates.
  4. Use the Filter search bar to limit the list to those that match amq. Click See all to show the desired application template.
  5. Select the amq-broker-77-persistence-clustered template, which is labeled Red Hat AMQ Broker 7.7 (no SSL, clustered).
  6. Set the following values in the configuration and click Create.

    Table 7.3. Example template

    Environment variable    | Display Name     | Value                            | Description
    AMQ_PROTOCOL            | AMQ Protocols    | openwire,amqp,stomp,mqtt,hornetq | The protocols to be accepted by the broker
    AMQ_QUEUES              | Queues           | demoQueue                        | Creates an anycast queue called demoQueue
    AMQ_ADDRESSES           | Addresses        | demoTopic                        | Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type.
    VOLUME_CAPACITY         | AMQ Volume Size  | 1Gi                              | The persistent volume size created for the journal
    AMQ_CLUSTERED           | Clustered        | true                             | This needs to be true to ensure the brokers cluster
    AMQ_CLUSTER_USER        | cluster user     | generated                        | The username the brokers use to connect with each other
    AMQ_CLUSTER_PASSWORD    | cluster password | generated                        | The password the brokers use to connect with each other
    AMQ_USER                | AMQ Username     | amq-demo-user                    | The username the client uses
    AMQ_PASSWORD            | AMQ Password     | password                         | The password the client uses with the username
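
As with the earlier templates, you can deploy the clustered template from the command line by using oc new-app; a minimal sketch with the values above:

    $ oc new-app --template=amq-broker-77-persistence-clustered \
        -p AMQ_PROTOCOL=openwire,amqp,stomp,mqtt,hornetq \
        -p AMQ_QUEUES=demoQueue \
        -p AMQ_ADDRESSES=demoTopic \
        -p VOLUME_CAPACITY=1Gi \
        -p AMQ_CLUSTERED=true \
        -p AMQ_USER=amq-demo-user \
        -p AMQ_PASSWORD=password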

7.3.3. Deploying the application

After creating the application, deploy it to create a Pod and start the broker.

Procedure

  1. Click Stateful Sets in the OpenShift Container Platform web console.
  2. Click the broker-amq deployment.
  3. Click Deploy to deploy the application.

    Note

    The default number of replicas for a clustered template is 0. You should not see any Pods.

  4. Scale up the Pods to three to create a cluster of brokers.

    $ oc scale statefulset broker-amq --replicas=3
    statefulset "broker-amq" scaled
  5. Check that there are three Pods running.

    $ oc get pods
    NAME           READY     STATUS    RESTARTS   AGE
    broker-amq-0   1/1       Running   0          33m
    broker-amq-1   1/1       Running   0          33m
    broker-amq-2   1/1       Running   0          29m
  6. If the Pod status shows ErrImagePull or ImagePullBackOff, your deployment was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your Stateful Set to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the brokers. To do this, complete steps similar to those in Deploying and starting the broker application.
  7. Verify that the brokers have clustered with the new Pod by checking the logs.

    $ oc logs broker-amq-2

    This shows the logs of the new broker and an entry for a clustered bridge created between the brokers:

    2018-08-29 07:43:55,779 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@1b0e9e9d [name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1b0e9e9d [name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@806813022[nodeUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-108, address=, server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]] is connected

7.3.4. Creating Routes for the AMQ Broker management console

The clustering templates do not expose the AMQ Broker management console by default. This is because the OpenShift proxy performs load balancing across each broker in the cluster, so it would not be possible to control which broker's console you connect to at a given time.

The following example procedure shows how to configure each broker in the cluster to connect to its own management console instance. You do this by creating a dedicated Service-and-Route combination for each broker Pod in the cluster.

Procedure

  1. Create a regular Service for each Pod in the cluster, using a StatefulSet selector to select between Pods. To do this, deploy a Service template, in .yaml format, that looks like the following:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        description: 'Service for the management console of broker pod XXXX'
      labels:
        app: application2
        application: application2
        template: amq-broker-77-persistence-clustered
      name: amq2-amq-console-XXXX
      namespace: amq75-p-c-ssl-2
    spec:
      ports:
        - name: console-jolokia
          port: 8161
          protocol: TCP
          targetPort: 8161
      selector:
        deploymentConfig: application2-amq
        statefulset.kubernetes.io/pod-name: application2-amq-XXXX
      type: ClusterIP

    In the preceding template, replace XXXX with the ordinal value of the broker Pod you want to associate with the Service. For example, to associate the Service with the first Pod in the cluster, set XXXX to 0. To associate the Service with the second Pod, set XXXX to 1, and so on.

    Save and deploy an instance of the template for each broker Pod in your cluster. For a scripted way to perform the XXXX substitution across all Pods, see the sketch after this procedure.

    Note

    In the example template shown above, the selector uses the Kubernetes-defined Pod name.

  2. Create a Route for each broker Pod, so that the AMQ Broker management console can connect to the Pod.

    Click Routes > Create Route.

    The Edit Route page opens.

    1. In the Services drop-down menu, select the previously created broker Service that you want to associate the Route with, for example, amq2-amq-console-0.
    2. Set Target Port to 8161, to enable access for the AMQ Broker management console.
    3. To display the TLS parameters, select the Secure route check box.

      1. From the TLS Termination drop-down menu, choose Passthrough.

        This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it.

    4. Click Create.

      When you create a Route associated with one of the broker Pods, the resulting .yaml file includes lines that look like the following:

      spec:
        host: amq2-amq-console-0-amq75-p-c-2.apps-ocp311.example.com
        port:
          targetPort: console-jolokia
        tls:
          termination: passthrough
        to:
          kind: Service
          name: amq2-amq-console-0
          weight: 100
        wildcardPolicy: None
  3. To access the management console for a specific broker instance, copy the host URL shown above to a web browser.
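
To avoid editing the Service template by hand for every broker, you can script the XXXX substitution described in step 1. A minimal sketch, assuming the template is saved locally as console-service.yaml and the cluster has three brokers:

    $ for i in 0 1 2; do
          sed "s/XXXX/$i/g" console-service.yaml | oc create -f -
      done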

7.4. Deploying a set of clustered SSL brokers

Deploy a clustered set of brokers, where each broker runs in its own Pod and the broker is configured to accept connections using SSL.

7.4.1. Distributing messages

Message distribution is configured to use ON_DEMAND. This means that when messages arrive at a clustered broker, the messages are distributed in a round-robin fashion to any broker that has consumers.

This message distribution policy safeguards against messages getting stuck on a specific broker while a consumer, connected either directly or through the OpenShift router, is connected to a different broker.

The redistribution delay is non-zero by default. If a message is on a queue that has no consumers, it will be redistributed to another broker.

Note

When redistribution is enabled, messages can be delivered out of order.

7.4.2. Deploying the image and template

Procedure

  1. Navigate to the OpenShift web console and log in.
  2. Select the amq-demo project space.
  3. Click Add to Project > Browse Catalog to list all of the default image streams and templates.
  4. Use the Filter search bar to limit the list to those that match amq. Click See all to show the desired application template.
  5. Select the amq-broker-77-persistence-clustered-ssl template, which is labeled Red Hat AMQ Broker 7.7 (SSL, clustered).
  6. Set the following values in the configuration and click Create.

    Table 7.4. Example template

    Environment variable    | Display Name          | Value                            | Description
    AMQ_PROTOCOL            | AMQ Protocols         | openwire,amqp,stomp,mqtt,hornetq | The protocols to be accepted by the broker
    AMQ_QUEUES              | Queues                | demoQueue                        | Creates an anycast queue called demoQueue
    AMQ_ADDRESSES           | Addresses             | demoTopic                        | Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type.
    VOLUME_CAPACITY         | AMQ Volume Size       | 1Gi                              | The persistent volume size created for the journal
    AMQ_CLUSTERED           | Clustered             | true                             | This needs to be true to ensure the brokers cluster
    AMQ_CLUSTER_USER        | cluster user          | generated                        | The username the brokers use to connect with each other
    AMQ_CLUSTER_PASSWORD    | cluster password      | generated                        | The password the brokers use to connect with each other
    AMQ_USER                | AMQ Username          | amq-demo-user                    | The username the client uses
    AMQ_PASSWORD            | AMQ Password          | password                         | The password the client uses with the username
    AMQ_TRUSTSTORE          | Trust Store Filename  | broker.ts                        | The SSL truststore file name
    AMQ_TRUSTSTORE_PASSWORD | Truststore Password   | password                         | The password used when creating the Truststore
    AMQ_KEYSTORE            | AMQ Keystore Filename | broker.ks                        | The SSL keystore file name
    AMQ_KEYSTORE_PASSWORD   | AMQ Keystore Password | password                         | The password used when creating the Keystore

7.4.3. Deploying the application

After creating the application, deploy it to create a Pod and start the broker.

Procedure

  1. Click Stateful Sets in the OpenShift Container Platform web console.
  2. Click the broker-amq deployment.
  3. Click Deploy to deploy the application.

    Note

    The default number of replicas for a clustered template is 0, so you will not see any Pods.

  4. Scale up the Pods to three to create a cluster of brokers.

    $ oc scale statefulset broker-amq --replicas=3
    statefulset "broker-amq" scaled
  5. Check that there are three Pods running.

    $ oc get pods
    NAME           READY     STATUS    RESTARTS   AGE
    broker-amq-0   1/1       Running   0          33m
    broker-amq-1   1/1       Running   0          33m
    broker-amq-2   1/1       Running   0          29m
  6. If the Pod status shows ErrImagePull or ImagePullBackOff, your deployment was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your Stateful Set to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the brokers. To do this, complete steps similar to those in Deploying and starting the broker application.
  7. Verify that the brokers have clustered with the new Pod by checking the logs.

    $ oc logs broker-amq-2

    This shows the logs of the new broker, including an entry for a clustered bridge created between the brokers, for example:

    2018-08-29 07:43:55,779 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@1b0e9e9d [name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1b0e9e9d [name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@806813022[nodeUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-108, address=, server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]] is connected

7.5. Deploying a broker with custom configuration

Deploy a broker with custom configuration. Although the templates provide the necessary functionality, you can customize the broker configuration if needed.

7.5.1. Deploying the image and template

Procedure

  1. Navigate to the OpenShift web console and log in.
  2. Select the amq-demo project space.
  3. Click Add to Project > Browse catalog to list all of the default image streams and templates.
  4. Use the Filter search bar to limit results to those that match amq. Click See all to show the desired application template.
  5. Select the amq-broker-77-custom template, which is labeled Red Hat AMQ Broker 7.7 (Ephemeral, no SSL).
  6. In the configuration, update broker.xml with the custom configuration you would like to use, as sketched after this procedure. Click Create.

    Note

    Use a text editor to create the broker’s XML configuration. Then, cut and paste the configuration details into the broker.xml field.

    Note

    OpenShift Container Platform does not use a ConfigMap object to store the custom configuration that you specify in the broker.xml field, as is common for many applications deployed on this platform. Instead, OpenShift temporarily stores the specified configuration in an environment variable, before transferring the configuration to a standalone file when the broker container starts.
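
For reference, the following is a minimal, illustrative sketch of a custom broker.xml that you might paste into the field; the address setting shown is an arbitrary example, not a required value:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration xmlns="urn:activemq">
       <core xmlns="urn:activemq:core">
          <address-settings>
             <!-- apply a size limit to all addresses -->
             <address-setting match="#">
                <max-size-bytes>100MB</max-size-bytes>
             </address-setting>
          </address-settings>
       </core>
    </configuration>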

7.5.2. Deploying the application

After creating the application, deploy it to create a Pod and start the broker.

Procedure

  1. Click Deployments in the OpenShift Container Platform web console.
  2. Click the broker-amq deployment.
  3. Click Deploy to deploy the application.

7.6. Basic SSL client example

Implement a client that sends and receives messages from a broker configured to use SSL, using the Qpid JMS client.

7.6.1. Configuring the client

Create a sample client that can be updated to connect to the SSL broker. The following procedure builds upon AMQ JMS Examples.

Procedure

  1. Add an entry into your /etc/hosts file to map the route name onto the IP address of the OpenShift cluster:

    10.0.0.1 broker-amq-tcp-amq-demo.router.default.svc.cluster.local
  2. Update the jndi.properties configuration file to use the route, truststore and keystore created previously, for example:

    connectionfactory.myFactoryLookup = amqps://broker-amq-tcp-amq-demo.router.default.svc.cluster.local:8443?transport.keyStoreLocation=<keystore-path>/client.ks&transport.keyStorePassword=password&transport.trustStoreLocation=<truststore-path>/client.ts&transport.trustStorePassword=password&transport.verifyHost=false
  3. Update the jndi.properties configuration file to use the queue created earlier.

    queue.myDestinationLookup = demoQueue
  4. Execute the sender client to send a text message.
  5. Execute the receiver client to receive the text message. You should see:

    Received message: Message Text!
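
For context, the sender and receiver follow the pattern of the AMQ JMS examples. The following is a minimal sketch of a sender (the class name is hypothetical); it resolves the connection factory and queue through the jndi.properties lookup names configured above:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class SslSender {
        public static void main(String[] args) throws Exception {
            Context context = new InitialContext(); // reads jndi.properties from the classpath
            ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
            Queue queue = (Queue) context.lookup("myDestinationLookup");

            Connection connection = factory.createConnection("amq-demo-user", "password");
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("Message Text!"));
            } finally {
                connection.close();
            }
        }
    }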

7.7. External clients using sub-domains example

Expose a clustered set of brokers through routes that use sub-domains, and connect to the brokers using the core JMS client. This enables clients to connect to a set of brokers which are configured using the amq-broker-77-persistence-clustered-ssl template.

7.7.1. Exposing the brokers

Configure the brokers so that the cluster of brokers is externally available and can be connected to directly, bypassing the OpenShift router. This is done by creating a route that exposes each Pod using its own hostname.

Procedure

  1. Choose Import YAML/JSON from the Add to Project drop-down menu.
  2. Enter the following and click Create.

    apiVersion: v1
    kind: Route
    metadata:
      labels:
        app: broker-amq
        application: broker-amq
      name: tcp-ssl
    spec:
      port:
        targetPort: ow-multi-ssl
      tls:
        termination: passthrough
      to:
        kind: Service
        name: broker-amq-headless
        weight: 100
      wildcardPolicy: Subdomain
      host: star.broker-ssl-amq-headless.amq-demo.svc
    Note

    The important configuration here is the wildcard policy of Subdomain. This allows each broker to be accessible through its own hostname.

7.7.2. Connecting the clients

Create a sample client that can be updated to connect to the SSL broker. The steps in this procedure build upon the AMQ JMS Examples.

Procedure

  1. Add entries into the /etc/hosts file to map the route name onto the actual IP addresses of the brokers:

    10.0.0.1 broker-amq-0.broker-ssl-amq-headless.amq-demo.svc broker-amq-1.broker-ssl-amq-headless.amq-demo.svc broker-amq-2.broker-ssl-amq-headless.amq-demo.svc
  2. Update the jndi.properties configuration file to use the route, truststore, and keystore created previously, for example:

    connectionfactory.myFactoryLookup = amqps://broker-amq-0.broker-ssl-amq-headless.amq-demo.svc:443?transport.keyStoreLocation=/home/ataylor/projects/jboss-amq-7-broker-openshift-image/client.ks&transport.keyStorePassword=password&transport.trustStoreLocation=/home/ataylor/projects/jboss-amq-7-broker-openshift-image/client.ts&transport.trustStorePassword=password&transport.verifyHost=false
  3. Update the jndi.properties configuration file to use the queue created earlier.

    queue.myDestinationLookup = demoQueue
  4. Execute the sender client code to send a text message.
  5. Execute the receiver client code to receive the text message. You should see:

    Received message: Message Text!

7.8. External clients using port binding example

Expose a clustered set of brokers through a NodePort and connect to them using the core JMS client. This approach enables clients that do not support SNI or SSL to connect. It is used with clusters configured using the amq-broker-77-persistence-clustered template.

7.8.1. Exposing the brokers

Configure the brokers so that the cluster of brokers is externally available and can be connected to directly, bypassing the OpenShift router. This is done by creating a service that uses a NodePort to load balance across the cluster.

Procedure

  1. Choose Import YAML/JSON from the Add to Project drop-down menu.
  2. Enter the following and click Create.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        description: The broker's OpenWire port.
        service.alpha.openshift.io/dependencies: >-
          [{"name": "broker-amq-amqp", "kind": "Service"},{"name":
          "broker-amq-mqtt", "kind": "Service"},{"name": "broker-amq-stomp", "kind":
          "Service"}]
      labels:
        application: broker
        template: amq-broker-77-statefulset-clustered
      name: broker-external-tcp
      namespace: amq-demo
    spec:
      externalTrafficPolicy: Cluster
      ports:
        - nodePort: 30001
          port: 61616
          protocol: TCP
          targetPort: 61616
      selector:
        deploymentConfig: broker-amq
      sessionAffinity: None
      type: NodePort
    Note

    The NodePort configuration is important. The nodePort value is the port on which clients access the brokers, and the Service type is NodePort.
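
To find the connection details a client needs, you can read the nodePort from the Service and list node addresses; a minimal sketch:

    $ oc get service broker-external-tcp -o jsonpath='{.spec.ports[0].nodePort}'   # 30001 in this example
    $ oc get nodes -o wide                                                         # shows node IP addresses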

7.8.2. Connecting the clients

Create consumers that are round-robinned across the brokers in the cluster by using the AMQ Broker CLI.

Procedure

  1. In a terminal, create a consumer and attach it to the IP address where OpenShift is running.

    artemis consumer --url tcp://<IP_ADDRESS>:30001 --message-count 100 --destination queue://demoQueue
  2. Repeat step 1 twice to start another two consumers.

    Note

    You should now have three consumers load balanced across the three brokers.

  3. Create a producer to send messages.

    artemis producer --url tcp://<IP_ADDRESS>:30001 --message-count 300 --destination queue://demoQueue
  4. Verify each consumer receives messages.

    Consumer:: filter = null
    Consumer ActiveMQQueue[demoQueue], thread=0 wait until 100 messages are consumed
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 100 messages
    Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished