Red Hat AMQ 6
As of February 2025, Red Hat is no longer supporting Red Hat AMQ 6. If you are using AMQ 6, please upgrade: see Migrating to AMQ 7.

Chapter 4. Get Started
4.1. Using the A-MQ for OpenShift Image Streams and Application Templates
Red Hat JBoss A-MQ images were automatically created during the installation of OpenShift along with the other default image streams and templates.
4.2. Deployment Considerations for the A-MQ for OpenShift Image
4.2.1. Service Accounts
The A-MQ for OpenShift image requires a service account for deployments. Service accounts are API objects that exist within each project. Three service accounts are created automatically in every project: builder, deployer, and default.
- builder: This service account is used by build pods. It has system:image-builder role which allows pushing images to any image stream in the project using the internal Docker registry.
- deployer: This service account is used by deployment pods. It has system:deployer role which allows viewing and modifying replication controllers and pods in the project.
- default: This service account is used to run all other pods unless you specify a different service account.
4.2.2. Creating the Service Account
Service accounts are API objects that exist within each project and can be created or deleted like any other API object. For multiple node deployments, the service account must have the view role enabled so that it can discover and manage the various pods in the cluster. In addition, you will need to configure SSL to enable connections to A-MQ from outside of the OpenShift instance. Two discovery protocols can be used to discover AMQ mesh endpoints: a DNS-based discovery protocol, which uses the OpenShift DNS service, and a Kubernetes-based discovery protocol, which uses the Kubernetes REST API. To use the Kubernetes-based discovery protocol, create a new service account and grant the view role to the newly created service account.
Create the service account:
$ echo '{"kind": "ServiceAccount", "apiVersion": "v1", "metadata": {"name": "<service-account-name>"}}' | oc create -f -

OpenShift 3.2 users can use the following command to create the service account:
$ oc create serviceaccount <service-account-name>

Add the view role to the service account:
$ oc policy add-role-to-user view system:serviceaccount:<project-name>:<service-account-name>

Edit the deployment configuration to run the A-MQ pod with the newly created service account:
$ oc edit dc/<deployment_config>

Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use.
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
4.2.3. Configuring SSL
For a minimal SSL configuration to allow for connections outside of OpenShift, A-MQ requires a broker keyStore, a client keyStore, and a client trustStore that includes the broker keyStore. The broker keyStore is also used to create a secret for the A-MQ for OpenShift image, which is added to the service account.
The following example commands use keytool, a package included with the Java Development Kit, to generate the necessary certificates and stores:
Generate a self-signed certificate for the broker keyStore:
$ keytool -genkey -alias broker -keyalg RSA -keystore broker.ks

Export the certificate so that it can be shared with clients:
$ keytool -export -alias broker -keystore broker.ks -file broker_cert

Generate a self-signed certificate for the client keyStore:
$ keytool -genkey -alias client -keyalg RSA -keystore client.ks

Create a client trustStore that imports the broker certificate:
$ keytool -import -alias broker -keystore client.ts -file broker_cert
4.2.4. Generating the A-MQ Secret
The broker keyStore can then be used to generate a secret for the namespace, which is also added to the service account so that the applications can be authorized:
$ oc secrets new <secret-name> <broker-keystore> <broker-truststore>
$ oc secrets add sa/<service-account-name> secret/<secret-name>
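As a rough declarative equivalent of the commands above, the same secret could be sketched as an object definition. This is an illustrative sketch only: the key names and the base64 payloads are placeholders, not values defined by this guide.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
data:
  # Keys become file names when the secret is mounted into the broker pod;
  # values must be the base64-encoded contents of the stores created with keytool.
  broker.ks: <base64-encoded broker keyStore>
  broker.ts: <base64-encoded client trustStore>
```

Creating the object with oc create -f and then linking it to the service account has the same effect as the oc secrets commands shown above.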
4.2.5. Creating a Route
After the A-MQ for OpenShift image has been deployed, an SSL route needs to be created for the A-MQ transport protocol port to allow connections to A-MQ outside of OpenShift.
In addition, selecting Passthrough for TLS Termination relays all communication to the A-MQ broker without the OpenShift router decrypting and resending it. Only SSL routes can be exposed because the OpenShift router requires SNI to send traffic to the correct service. See Secured Routes for more information.
The default ports for the various A-MQ transport protocols are:
61616/TCP (OpenWire)
61617/TCP (OpenWire+SSL)
5672/TCP (AMQP)
5671/TCP (AMQP+SSL)
1883/TCP (MQTT)
8883/TCP (MQTT+SSL)
61613/TCP (STOMP)
61612/TCP (STOMP+SSL)
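Putting the two points above together, a passthrough route for the OpenWire+SSL port (61617) could be sketched as the following object. The route and service names here are hypothetical placeholders; your deployment's service name will come from the template you used.

```yaml
apiVersion: v1
kind: Route
metadata:
  name: amq-ssl-route            # hypothetical route name
spec:
  to:
    kind: Service
    name: broker-amq-tcp-ssl     # hypothetical service exposing port 61617
  port:
    targetPort: 61617            # OpenWire+SSL
  tls:
    termination: passthrough     # router forwards TLS traffic without decrypting it
```

With passthrough termination the broker's own certificate (from the keyStore configured earlier) is what clients see, which is why SNI support in the client is required for the router to select the correct service.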
4.2.6. Scaling Up and Persistent Storage Partitioning
There are two methods for deploying A-MQ with persistent storage: single-node and multi-node partitioning. Single-node partitioning stores the A-MQ logs and the kahadb store directory, including the message queue data, in the storage volume. Multi-node partitioning creates additional, independent split-n directories to store the message queue data for each broker, where n is an incremental integer. This directory assignment is not altered if a broker pod is updated, goes down unexpectedly, or is redeployed. When the broker pod is operational again, it reconnects to the associated split directory and continues as before. If a new broker pod is added, a corresponding split-n directory is created for that broker.
To enable a multi-node configuration, set the AMQ_SPLIT parameter to true. This causes the server to create an independent split-n directory for each instance within the Persistent Volume, which each instance then uses as its data store. This is now the default setting in all persistent templates.
Due to the different storage methods of single-node and multi-node partitioning, changing a deployment from single-node to multi-node results in the application losing all previously stored messages. This is also true if changing a deployment from multi-node to single-node, as the storage paths will not match.
Similarly, if a Rolling Strategy is implemented, the maxSurge parameter must be set to 0%; otherwise the new broker creates a new partition and is unable to connect to the stored messages.
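The two settings described above can be sketched as a deployment configuration fragment. This is an illustrative sketch, not a template excerpt: the container name is a placeholder, and only the fields relevant to multi-node partitioning are shown.

```yaml
spec:
  strategy:
    type: Rolling
    rollingParams:
      maxSurge: 0%               # no extra pod during rollout, so the new pod
                                 # reuses an existing split-n directory
  template:
    spec:
      containers:
      - name: broker-amq         # hypothetical container name
        env:
        - name: AMQ_SPLIT
          value: "true"          # one independent split-n directory per broker instance
```

With maxSurge at 0%, OpenShift scales an old pod down before scaling a new one up, so the replacement pod attaches to the split directory the old pod released rather than creating a fresh, empty partition.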
In multi-node partitioning, OpenShift routes new connections to the broker pod with the least amount of connections. Once this connection has been made, messages from that client are sent to the same broker every time, even if the client is run multiple times. This is because the OpenShift router is set to route requests from a client with the same IP to the same pod.
You can see which broker pod is connected to which split directory by viewing the logs for the pod, or by connecting to the broker console. In the ActiveMQ tab of the console, the PersistenceAdapter shows the KahaDBPersistenceAdapter, which includes the split directory as part of its name.
4.2.7. Scaling Down and Message Migration
When A-MQ is deployed using a multi-node configuration it is possible for messages to be left in the kahadb store directory of a terminating pod should the cluster be scaled down. In order to prevent messages from remaining within the kahadb store of the terminating pod until the cluster next scales up, each A-MQ persistent template creates a second deployment containing a drainer pod which is responsible for managing the migration of messages. The drainer pod will scan each independent split-n directory within the A-MQ persistent volume, identify data stores associated with those pods which are terminating, and execute an application to migrate the remaining messages from those pods to other active members of the cluster.
Only messages sent through Message Queues will be migrated to other instances of the cluster when scaling down. Messages sent via topics will remain in storage until the cluster scales back up. Support for migrating Virtual Topics will be introduced in a future release.
4.2.8. Customizing A-MQ Configuration Files for Deployment
If using a template from an alternate repository, A-MQ configuration files such as user.properties can be included. When the image is downloaded for deployment, these files are copied to the <amq-home>/amq/conf/ directory on the broker; they are then committed to the container and pushed to the registry.
If using this method, it is important that the placeholders in the configuration files (such as ##### AUTHENTICATION #####) are not removed as these placeholders are necessary for building the A-MQ for OpenShift image.
4.2.9. Configuring Client Connections
Clients for the A-MQ for OpenShift image must specify the OpenShift router port (443) when setting the broker URL for SSL connections. Otherwise, A-MQ attempts to use the default SSL port (61617). Including the failover protocol in the URL preserves the client connection in case the pod is restarted or upgraded, or there is a disruption on the router.
...
factory.setBrokerURL("failover://ssl://<route-to-broker-pod>:443");
...
4.3. Upgrading the Image Repository
On your master host(s), ensure you are logged into the CLI as a cluster administrator or user that has project administrator access to the global openshift project. For example:
$ oc login -u system:admin
Then, run the following command to update the core A-MQ OpenShift image stream in the openshift project:
$ oc -n openshift import-image jboss-amq-62
Depending on the deployment configuration, OpenShift deletes one of the broker pods and starts a new, upgraded pod. The new pod connects to the same persistent storage so that no messages are lost in the process. Once the upgraded pod is running, the process is repeated for the next pod until all of the pods have been upgraded.
If a Rolling Strategy has been configured, OpenShift deletes and recreates pods based on the rolling update settings. A new pod will only connect to the same persistent storage if the maxSurge parameter is set to 0%; otherwise the new pod creates a new partition and is not able to connect to the stored messages in the previous partition.
4.4. Binary Builds
To deploy existing applications on OpenShift, you can use the binary source capability.
The following example uses the helloworld-mdb quickstart to deploy an A-MQ 6.2-based broker together with a JBoss EAP 6.4 messaging application, using JMS 1.1.
4.4.1. Prerequisite
Create a new project:
$ oc new-project amq-bin-demo

Create a service account to be used for the A-MQ broker deployment:
$ oc create serviceaccount eap-service-account
serviceaccount "eap-service-account" created

Grant the view role to the service account. This enables the service account to view all resources in the namespace, which is necessary for managing the cluster.

$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account
role "view" added: "system:serviceaccount:amq-bin-demo:eap-service-account"
4.4.2. Deploy the A-MQ 6.2 Broker
Identify the image stream for the A-MQ broker:
$ oc get is -n openshift | grep amq | cut -d ' ' -f 1
jboss-amq-62

Deploy the broker, specifying the following:
- Application name and image stream,
- User name and password for standard broker user,
- A-MQ protocols to configure,
- Names of the A-MQ queues and topics (separated by commas),
- The discovery agent type to use for discovering mesh endpoints,
- Name of the service used for mesh creation,
- The namespace in which the service resides, and
- The A-MQ storage usage limit.
Modify the eap-app-amq deployment config to run the pods under the eap-service-account service account created above:

$ oc patch dc/eap-app-amq --type=json \
    -p '[{"op": "add", "path": "/spec/template/spec/serviceAccountName", "value": "eap-service-account"}]'
"eap-app-amq" patched
4.4.3. Deploy Binary Build of EAP 6.4 Messaging Application
Clone the source code:
$ git clone -b 6.4.x https://github.com/jboss-developer/jboss-eap-quickstarts.git

Configure the Red Hat JBoss Middleware Maven repository.
Build the helloworld-mdb application:

$ cd jboss-eap-quickstarts/helloworld-mdb

Prepare the directory structure on the local file system.
Application archives in the deployments/ subdirectory of the main binary build directory are copied directly to the standard deployments folder of the image being built on OpenShift. For the application to deploy, the directory hierarchy containing the web application data must be correctly structured.
Create a main directory for the binary build on the local file system and a deployments/ subdirectory within it. Copy the previously built WAR archive for the helloworld-mdb quickstart to the deployments/ subdirectory:
$ ls
pom.xml  README.html  README.md  src  target
$ mkdir -p amq-binary-demo/deployments
$ cp target/jboss-helloworld-mdb.war amq-binary-demo/deployments/

Note: The location of the standard deployments directory depends on the underlying base image that was used to deploy the application. See the following table:
Table 4.1. Standard Location of the Deployments Directory

Name of the Underlying Base Image(s)    Standard Location of the Deployments Directory
EAP for OpenShift 6.4 and 7.0           $JBOSS_HOME/standalone/deployments
Java S2I for OpenShift                  /deployments
JWS for OpenShift                       $JWS_HOME/webapps
Identify the image stream for the EAP 6.4 image:
$ oc get is -n openshift | grep eap64 | cut -d ' ' -f 1
jboss-eap64-openshift

Create a new binary build, specifying the image stream and the application name:
Start the binary build. Instruct the oc executable to use the main directory created in a previous step as the directory containing the binary input for the OpenShift build.

Create a new OpenShift application based on the build, specifying the following:
- Application name,
- A-MQ service prefix mapping,
- JNDI name for connection factory used by applications to connect to the A-MQ broker,
- User name and password for standard broker user,
- A-MQ protocols to configure, and
- Names of the A-MQ queues and topics (separated by commas).
Expose the service as a route:
$ oc get svc -o name
service/eap-app
service/eap-app-amq
$ oc get route
No resources found.
$ oc expose svc/eap-app
route "eap-app" exposed
$ oc get route
NAME      HOST/PORT                                    PATH      SERVICES   PORT       TERMINATION   WILDCARD
eap-app   eap-app-amq-bin-demo.openshift.example.com             eap-app    8080-tcp                 None

Access the application.
Access the EAP 6.4 messaging application in your browser using the URL http://eap-app-amq-bin-demo.openshift.example.com/jboss-helloworld-mdb/.
Check the log of the EAP 6.4 pod to see the result of message processing.
$ oc get pods
NAME                  READY     STATUS      RESTARTS   AGE
eap-app-1-build       0/1       Completed   0          15m
eap-app-1-f8w3r       1/1       Running     0          9m
eap-app-amq-2-8q1r6   1/1       Running     0          2h