Chapter 8. Message migration when scaling down pods
Message migration, which is enabled by use of the scaledown controller, is currently a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. For more information about Technology Preview features at Red Hat, see Technology Preview Support Scope.
When you use a persistence template to deploy a broker pod, the pod is managed by a StatefulSet and has its own external file system, which remains intact even if the pod stops or restarts. However, if broker pods are scaled down and not restarted, data such as messages, destinations, and transactions is no longer available to clients.
Message migration addresses this issue of unavailable data. You enable it by applying the scaledown controller image, which monitors each broker pod. If a broker pod is scaled down or stopped, the scaledown controller recovers the messages by migrating the pod's contents to another broker running in the cluster.
If broker pods are scaled down to 0 (zero), message migration does not occur, since there is no running broker pod to which the message data can be migrated.
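For example, assuming a broker StatefulSet named broker-amq (a hypothetical name; substitute the name from your own deployment), a scale-down to a nonzero replica count lets the controller migrate messages, whereas a scale-down to zero does not:

```shell
# Hypothetical StatefulSet name; substitute your deployment's name.
# Scaling from 3 replicas to 2 removes one broker pod; the scaledown
# controller migrates that pod's messages to a remaining broker.
oc scale statefulset broker-amq --replicas=2

# Avoid scaling to 0: with no broker pod left running, there is no
# destination to which the message data can be migrated.
# oc scale statefulset broker-amq --replicas=0
```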
8.1. Installing the scaledown controller
AMQ Broker on OCP message migration capabilities are packaged in the scaledown controller container image. This section describes how to enable the broker message migration capabilities with OpenShift Container Platform image streams and application templates.
Procedure
At the command line, log in to OpenShift as a cluster administrator (or as a user that has project administrator access to the global openshift project), for example:

$ oc login -u system:admin

Note: Using the openshift project makes the image files that you install later in this procedure globally available to all projects in your OpenShift cluster.

As an alternative to using the openshift project (for example, if a cluster administrator is unavailable), you can log in to a specific OpenShift project to which you have administrator access and in which you want to configure a scaledown controller, for example:

$ oc login -u <USERNAME>
$ oc project <PROJECT_NAME>

Logging in to a specific project means that the image files that you install later in this procedure are available only in that project's namespace.
Run the following commands to install the scaledown controller image files and template:
$ oc replace --force -f \
https://raw.githubusercontent.com/jboss-container-images/jboss-amq-7-broker-openshift-image/73-7.3.0.GA/amq-broker-7-scaledown-controller-image-streams.yaml

$ oc replace --force -f \
https://raw.githubusercontent.com/jboss-container-images/jboss-amq-7-broker-openshift-image/73-7.3.0.GA/templates/amq-broker-73-persistence-clustered-controller.yaml

$ oc import-image amq-broker-72-scaledown-controller-openshift:1.0
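To verify the installation, you can list the resources the commands created. The resource names below are assumed to match the imported file names, and the namespace is assumed to be openshift; adjust both if you installed into a different project:

```shell
# Confirm the scaledown controller image stream was imported.
oc get imagestream amq-broker-72-scaledown-controller-openshift -n openshift

# Confirm the clustered persistence template with the controller is present
# (template name assumed to match its file name).
oc get template amq-broker-73-persistence-clustered-controller -n openshift
```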
8.2. Using the scaledown controller
To deploy the scaledown controller to migrate messages and drain pods, run the StatefulSet scaledown controller at the broker pod namespace level. The StatefulSet scaledown controller must be deployed in the same namespace as the stateful applications (in this case, broker pods). It operates only on StatefulSets in that namespace.
You do not need cluster-level privileges to complete this procedure, because the StatefulSet scaledown controller runs at the namespace level.
Prerequisites
- An understanding of how Kubernetes StatefulSets are defined and processed.
Procedure
- Configure the AMQ Broker on OCP StatefulSet scaledown controller template in your namespace.
Configure the scaledown controller template in your StatefulSet definition. The following code example represents the drainer pod definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
  annotations:
    statefulsets.kubernetes.io/drainer-pod-template: |
      {
        "metadata": {
          "labels": {
            "app": "datastore-drainer"
          }
        },
        "spec": {
          "containers": [
            {
              "name": "drainer",
              "image": "my-drain-container",
              "volumeMounts": [
                {
                  "name": "data",
                  "mountPath": "/var/data"
                }
              ]
            }
          ]
        }
      }
spec:
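With the annotation in place, you trigger migration simply by scaling the StatefulSet down; the controller creates a drainer pod from the template above to process the data left behind by the removed pod. A minimal sketch, assuming the StatefulSet shown in the example:

```shell
# Scale the annotated StatefulSet down by one replica. The scaledown
# controller detects the orphaned data volume and starts a drainer pod
# (built from the drainer-pod-template annotation) to drain it.
oc scale statefulset my-statefulset --replicas=1
```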