Chapter 7. Deploying your JBoss EAP application on the OpenShift Container Platform


EAP operator is a JBoss EAP-specific controller that extends the OpenShift API. You can use the EAP operator to create, configure, manage, and seamlessly upgrade instances of complex stateful applications.

The EAP operator manages multiple JBoss EAP Java application instances across the cluster. It also ensures safe transaction recovery in your application cluster by verifying all transactions are completed before scaling down the replicas and marking a pod as clean for termination. The EAP operator uses StatefulSet for the appropriate handling of Jakarta Enterprise Beans remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted.

You must install the EAP operator using OperatorHub, which can be used by OpenShift cluster administrators to discover, install, and upgrade operators.

In OpenShift Container Platform 4, you can use the Operator Lifecycle Manager (OLM) to install, update, and manage the lifecycle of all operators and their associated services running across multiple clusters.

The OLM runs by default in OpenShift Container Platform 4. It aids cluster administrators in installing, upgrading, and granting access to operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install operators, as well as grant specific projects access to use the catalog of operators available on the cluster.

For more information about operators and the OLM, see the OpenShift documentation.

As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform web console. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster.

Here are a few points you must be aware of before installing the EAP operator using the web console:

  • Installation Mode: Choose All namespaces on the cluster (default) to have the operator installed on all namespaces or choose individual namespaces, if available, to install the operator only on selected namespaces.
  • Update Channel: If the EAP operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
  • Approval Strategy: You can choose automatic or manual updates. If you choose automatic updates for the EAP operator, when a new version of the operator is available, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of EAP operator. If you choose manual updates, when a newer version of the operator is available, the OLM creates an update request. You must then manually approve the update request to have the operator updated to the new version.
Note

The following procedure might change in accordance with the modifications in the OpenShift Container Platform web console. For the latest and most accurate procedure, see the Installing from the OperatorHub using the web console section in the latest version of the Working with Operators in OpenShift Container Platform guide.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → OperatorHub.
  2. Scroll down or type EAP into the Filter by keyword box to find the EAP operator.
  3. Select JBoss EAP operator and click Install.
  4. On the Create Operator Subscription page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
      • A specific namespace on the cluster installs the operator in a specific, single namespace that you choose. The operator is made available for use only in this single namespace.
    2. Select an Update Channel.
    3. Select Automatic or Manual approval strategy, as described earlier.
  5. Click Subscribe to make the EAP operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a manual approval strategy, the subscription’s upgrade status remains Upgrading until you review and approve its install plan. After you approve the install plan on the Install Plan page, the subscription upgrade status moves to Up to date.
    2. If you selected an automatic approval strategy, the upgrade status moves to Up to date without intervention.
  6. After the subscription’s upgrade status is Up to date, select Operators → Installed Operators to verify that the EAP ClusterServiceVersion (CSV) shows up and its Status changes to InstallSucceeded in the relevant namespace.

    Note

    For the All namespaces… installation mode, the status displayed is InstallSucceeded in the openshift-operators namespace. In other namespaces, the status displayed is Copied. If the Status field does not change to InstallSucceeded, check the logs of any pods reporting issues on the Workloads → Pods page in the openshift-operators project (or in the other relevant namespace, if you selected the A specific namespace… installation mode) to troubleshoot further.

7.1.2. Installing EAP operator using the CLI

As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform CLI. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster.

When installing the EAP operator from the OperatorHub using the CLI, use the oc command to create a Subscription object.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the oc tool in your local system.

Procedure

  1. View the list of operators available to the cluster from the OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace | grep eap
    NAME        CATALOG               AGE
    ...
    eap         Red Hat Operators     43d
    ...
  2. Create a Subscription object YAML file (for example, eap-operator-sub.yaml) to subscribe a namespace to your EAP operator. The following is an example Subscription object YAML file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: eap
      namespace: openshift-operators
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: eap 1
      source: redhat-operators 2
      sourceNamespace: openshift-marketplace

    1
    Name of the operator to subscribe to.
    2
    The EAP operator is provided by the redhat-operators CatalogSource.

    For information about channels and approval strategy, see the web console version of this procedure.

  3. Create the Subscription object from the YAML file:

    $ oc apply -f eap-operator-sub.yaml
    $ oc get csv -n openshift-operators
    NAME                  DISPLAY     VERSION   REPLACES   PHASE
    eap-operator.v1.0.0   JBoss EAP   1.0.0                Succeeded

    The EAP operator is successfully installed. At this point, the OLM is aware of the EAP operator. A ClusterServiceVersion (CSV) for the operator appears in the target namespace, and the APIs provided by the EAP operator are available for creation.

7.1.3. Deploying a Java application on OpenShift using the EAP operator

The EAP operator helps automate Java application deployment on OpenShift. For information about the EAP operator APIs, see EAP Operator: API Information.

Prerequisites

  • You have installed EAP operator. For more information about installing the EAP operator, see Installing EAP operator using the web console and Installing EAP operator using the CLI.
  • You have built a Docker image of the user application using the JBoss EAP for OpenShift Source-to-Image (S2I) build image.
  • You have created a Secret object, if your application’s CustomResourceDefinition (CRD) file references one. For more information about creating a new Secret object, see Creating a Secret.
  • You have created a ConfigMap, if your application’s CRD file references one. For information about creating a ConfigMap, see Creating a ConfigMap.
  • You have created a ConfigMap from the standalone.xml file, if you choose to do so. For information about creating a ConfigMap from the standalone.xml file, see Creating a ConfigMap from a standalone.xml File.
Note

Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 8.1.

Procedure

  1. Open your web browser and log on to OperatorHub.
  2. Select the Project or namespace you want to use for your Java application.
  3. Navigate to Installed Operators and select JBoss EAP operator.
  4. On the Overview tab, click the Create Instance link.
  5. Specify the application image details.

    The application image specifies the Docker image that contains the Java application. The image must be built using the JBoss EAP for OpenShift Source-to-Image (S2I) build image. If the applicationImage field corresponds to an imagestreamtag, any change to the image triggers an automatic upgrade of the application.

    You can provide any of the following references of the JBoss EAP for OpenShift application image (a snippet showing the tag form follows this list):

    • The name of the image: mycomp/myapp
    • A tag: mycomp/myapp:1.0
    • A digest: mycomp/myapp@sha256:0af38bc38be93116b6a1d86a9c78bd14cd527121970899d719baf78e5dc7bfd2
    • An imagestreamtag: my-app:latest
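
    For example, the following sketch shows the tag form set in the applicationImage field of the custom resource (the image name is a placeholder):

    spec:
      applicationImage: mycomp/myapp:1.0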
  6. Specify the size of the application. For example:

    spec:
      replicas: 2
  7. Configure the application environment using the env spec. Environment variables can come directly from values, such as POSTGRESQL_SERVICE_HOST, or from Secret objects, such as POSTGRESQL_USER. For example:

    spec:
      env:
      - name: POSTGRESQL_SERVICE_HOST
        value: postgresql
      - name: POSTGRESQL_SERVICE_PORT
        value: '5432'
      - name: POSTGRESQL_DATABASE
        valueFrom:
          secretKeyRef:
            key: database-name
            name: postgresql
      - name: POSTGRESQL_USER
        valueFrom:
          secretKeyRef:
            key: database-user
            name: postgresql
      - name: POSTGRESQL_PASSWORD
        valueFrom:
          secretKeyRef:
            key: database-password
            name: postgresql
  8. Complete the following optional configurations that are relevant to your application deployment:

    • Specify the storage requirements for the server data directory. For more information, see Configuring Persistent Storage for Applications.
    • Specify the name of the Secret you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example:

      spec:
        secrets:
          - my-secret

      The Secret is mounted at /etc/secrets/<secret name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The Secret is mounted as a volume inside the pod. The following example demonstrates commands that you can use to find key values:

      $ ls /etc/secrets/my-secret/
      my-key  my-password
      $ cat /etc/secrets/my-secret/my-key
      devuser
      $ cat /etc/secrets/my-secret/my-password
      my-very-secure-password
      Note

      Modifying a Secret object might lead to project inconsistencies. Instead of modifying an existing Secret object, Red Hat recommends creating a new object with the same content as the old one. You can then update the content as required and change the reference in the operator custom resource (CR) from old to new. This is considered a new CR update, and the pods are reloaded.

    • Specify the name of the ConfigMap you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example:

      spec:
        configMaps:
        - my-config

      The ConfigMap is mounted at /etc/configmaps/<configmap name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The ConfigMap is mounted as a volume inside the pod. To find the key values:

      $ ls /etc/configmaps/my-config/
      key1 key2
      $ cat /etc/configmaps/my-config/key1
      value1
      $ cat /etc/configmaps/my-config/key2
      value2
      Note

      Modifying a ConfigMap might lead to project inconsistencies. Instead of modifying an existing ConfigMap, Red Hat recommends creating a new ConfigMap with the same content as the old one. You can then update the content as required and change the reference in the operator custom resource (CR) from old to new. This is considered a new CR update, and the pods are reloaded.

    • If you choose to have your own standalone ConfigMap, provide the name of the ConfigMap as well as the key for the standalone.xml file:

      spec:
        standaloneConfigMap:
          name: clusterbench-config-map
          key: standalone.xml
      Note

      Creating a ConfigMap from the standalone.xml file is not supported in JBoss EAP 8.1.

    • If you want to disable the default HTTP route creation in OpenShift, set disableHTTPRoute to true:

      spec:
        disableHTTPRoute: true
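
Taken together, the fields from the preceding steps form a single WildFlyServer custom resource. The following is a minimal sketch (the metadata name and image are placeholders; the apiVersion and kind match the autoscaling example later in this chapter):

apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: my-app
spec:
  applicationImage: 'mycomp/myapp:1.0'
  replicas: 2
  env:
  - name: POSTGRESQL_SERVICE_HOST
    value: postgresql
  secrets:
    - my-secret
  configMaps:
  - my-config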

7.1.3.1. Creating a secret

If your application’s CustomResourceDefinition (CRD) file references a Secret, you must create the Secret before deploying your application on OpenShift using the EAP operator.

Procedure

  • To create a Secret:

    $ oc create secret generic my-secret --from-literal=my-key=devuser --from-literal=my-password='my-very-secure-password'
    secret/my-secret created

7.1.3.2. Creating a ConfigMap

If your application’s CustomResourceDefinition (CRD) file references a ConfigMap in the spec.ConfigMaps field, you must create the ConfigMap before deploying your application on OpenShift using the EAP operator.

Procedure

  • To create a ConfigMap:

    $ oc create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2
    configmap/my-config created

7.1.3.3. Creating a ConfigMap from the standalone.xml file

You can create your own JBoss EAP standalone configuration instead of using the one in the application image that comes from JBoss EAP for OpenShift Source-to-Image (S2I). The standalone.xml file must be put in a ConfigMap that is accessible by the operator.

Note

Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 8.1.

Procedure

  • To create a ConfigMap from the standalone.xml file:

    $ oc create configmap clusterbench-config-map --from-file examples/clustering/config/standalone.xml
    configmap/clusterbench-config-map created

7.1.3.4. Configuring persistent storage for applications

If your application requires persistent storage for some data, such as transaction or messaging logs, that must persist across pod restarts, configure the storage spec. If the storage spec is empty, an EmptyDir volume is used by each pod of the application. However, this volume does not persist after its corresponding pod is stopped.

Procedure

  • Specify volumeClaimTemplate to configure the resource requirements for storing the JBoss EAP standalone data directory. The name of the template is derived from the name of the JBoss EAP resource. The corresponding volume is mounted in ReadWriteOnce access mode.

    spec:
      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 3Gi

    The persistent volume that meets this storage requirement is mounted on the /eap/standalone/data directory.

7.1.4. Viewing metrics of an application using the EAP operator

You can view the metrics of an application deployed on OpenShift using the EAP operator.

When your cluster administrator enables metrics monitoring in your project, the EAP operator automatically displays the metrics on the OpenShift console.

Prerequisites

  • Your cluster administrator has enabled metrics monitoring in your project.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Monitoring → Metrics.
  2. On the Metrics screen, type the name of your application in the text box to select your application. The metrics for your application appear on the screen.

7.1.5. Uninstalling EAP operator using web console

You can delete, or uninstall, the EAP operator from your cluster by deleting the subscription to remove it from the subscribed namespace. You can also remove the EAP operator’s ClusterServiceVersion (CSV) and deployment.

Note

To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator.

You can uninstall the EAP operator using the web console.

Warning

If you decide to delete the entire wildflyserver definition (oc delete wildflyserver <deployment_name>), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked.

Procedure

  1. From the Operators → Installed Operators page, select JBoss EAP.
  2. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.
  3. When prompted by the Remove Operator Subscription window, optionally select the Also completely remove the Operator from the selected namespace check box if you want all components related to the installation to be removed. This removes the CSV, which in turn removes the pods, deployments, custom resource definitions (CRDs), and custom resources (CRs) associated with the operator.
  4. Click Remove. The EAP operator stops running and no longer receives updates.

7.1.6. Uninstalling EAP operator using the CLI

You can delete, or uninstall, the EAP operator from your cluster by deleting the subscription to remove it from the subscribed namespace. You can also remove the EAP operator’s ClusterServiceVersion (CSV) and deployment.

Note

To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator.

You can uninstall the EAP operator using the command line.

When using the command line, you uninstall the operator by deleting the subscription and CSV from the target namespace.

Warning

If you decide to delete the entire wildflyserver definition (oc delete wildflyserver <deployment_name>), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked.

Procedure

  1. Check the current version of the EAP operator subscription in the currentCSV field:

    $ oc get subscription eap-operator -n openshift-operators -o yaml | grep currentCSV
      currentCSV: eap-operator.v1.0.0
  2. Delete the EAP operator’s subscription:

    $ oc delete subscription eap-operator -n openshift-operators
    subscription.operators.coreos.com "eap-operator" deleted
  3. Delete the CSV for the EAP operator in the target namespace using the currentCSV value from the previous step:

    $ oc delete clusterserviceversion eap-operator.v1.0.0 -n openshift-operators
    clusterserviceversion.operators.coreos.com "eap-operator.v1.0.0" deleted
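
Optionally, confirm the removal by listing the cluster service versions again, reusing the command from the installation procedure:

    $ oc get csv -n openshift-operators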

7.1.7. EAP operator for safe transaction recovery

The JBoss EAP operator ensures data consistency before terminating your application cluster. To do this, the operator verifies that all transactions are completed before scaling down the replicas and marking a pod as clean for termination.

This means that if you want to remove the deployment safely without data inconsistencies, you must first scale down the number of pods to 0, wait until all pods are terminated, and only then delete the wildflyserver instance.
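
For example, a safe removal can follow this sequence (a sketch combining commands shown elsewhere in this section):

$ oc patch wildflyserver <name> -p '[{"op":"replace", "path":"/spec/replicas", "value":0}]' --type json
$ oc get pod -w
$ oc delete wildflyserver <name>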

Warning

If you decide to delete the entire wildflyserver definition (oc delete wildflyserver <deployment_name>), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked.

When the scaledown process begins, the pod state (oc get pod <pod_name>) is still marked as Running, because the pod must complete all the unfinished transactions, including the remote enterprise bean calls that target it.

If you want to monitor the state of the scaledown process, observe the status of the wildflyserver instance. For more information, see Monitoring the Scaledown Process. For information about pod statuses during scaledown, see Pod Status During Scaledown.

7.1.7.1. StatefulSets for stable network host names

The EAP operator that manages the wildflyserver creates a StatefulSet as an underlying object that manages the JBoss EAP pods.

A StatefulSet is the workload API object that manages stateful applications. It manages the deployment and scaling of a set of pods, and provides guarantees about the ordering and uniqueness of these pods.

The StatefulSet ensures that the pods in a cluster are named in a predefined order. It also ensures that pod termination follows the same order. For example, suppose pod-1 has a transaction with a heuristic outcome and is therefore in the SCALING_DOWN_RECOVERY_DIRTY state. Even if pod-0 is in the SCALING_DOWN_CLEAN state, it is not terminated before pod-1. Until pod-1 is clean and terminated, pod-0 remains in the SCALING_DOWN_CLEAN state. However, a pod in the SCALING_DOWN_CLEAN state does not receive any new requests and is practically idle.

Note

Decreasing the replica size of the StatefulSet or deleting the pod itself has no effect and such changes are reverted.

7.1.7.2. Monitoring the scaledown process

If you want to monitor the state of the scaledown process, you must observe the status of the wildflyserver instance. For more information about the different pod statuses during scaledown, see Pod Status During Scaledown.

Procedure

  • To observe the state of the scaledown process:

    oc describe wildflyserver <name>
    • The WildFlyServer.Status.Scalingdown Pods and WildFlyServer.Status.Replicas fields show the overall state of the active and non-active pods (an illustrative status sketch follows this list).
    • The Scalingdown Pods field shows the number of pods that are about to be terminated when all their unfinished transactions are complete.
    • The WildFlyServer.Status.Replicas field shows the current number of running pods.
    • The WildFlyServer.Spec.Replicas field shows the number of pods in the ACTIVE state.
    • If no pods are in the scaledown process, the numbers of pods in the WildFlyServer.Status.Replicas and WildFlyServer.Spec.Replicas fields are equal.
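
For orientation, the relevant part of the wildflyserver status might look like the following sketch (the field names follow the WildFlyServer custom resource; the pod name and counts are illustrative):

status:
  pods:
  - name: eap-server-1
    state: SCALING_DOWN_RECOVERY_DIRTY
  replicas: 2
  scalingdownPods: 1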
7.1.7.2.1. Pod status during scaledown

The following table describes the different pod statuses during scaledown:

Table 7.1. Pod status description

ACTIVE

The pod is active and processing requests.

SCALING_DOWN_RECOVERY_INVESTIGATION

The pod is about to be scaled down. The scale-down process is under investigation about the state of transactions in JBoss EAP.

SCALING_DOWN_RECOVERY_DIRTY

JBoss EAP contains some incomplete transactions. The pod cannot be terminated until they are cleaned. The transaction recovery process runs periodically in JBoss EAP, and the pod waits until the transactions are completed.

SCALING_DOWN_CLEAN

The pod has been processed by the transaction scaledown processing and is marked as clean for removal from the cluster.

7.1.7.3. Recovering transactions manually

When the outcome of a transaction is unknown, automatic transaction recovery is impossible. You must then manually recover your transactions.

Prerequisites

  • The status of your pod is stuck at SCALING_DOWN_RECOVERY_DIRTY.

Procedure

  1. Access your JBoss EAP instance using the management CLI.
  2. Resolve all the heuristic transaction records in the transaction object store. For more information, see Recovering Heuristic Outcomes in the Managing Transactions on JBoss EAP guide.
  3. Remove all records from the enterprise bean client recovery folder.

    1. Remove all files from the pod enterprise bean client recovery directory:

      $ oc exec <podname> -- rm -rf $JBOSS_HOME/standalone/data/ejb-xa-recovery
  4. The status of your pod changes to SCALING_DOWN_CLEAN and the pod is terminated.

7.1.7.4. Configuring a JDBC object store for transaction logs

In cases where the system does not provide a file system to store transaction logs, use the JBoss EAP S2I image to configure the JDBC object store.

Important

S2I environment variables are not usable when JBoss EAP is deployed as a bootable JAR. In this case, you must create a Galleon layer or configure a CLI script to make the necessary configuration changes.

The JDBC object store can be set up with the environment variable TX_DATABASE_PREFIX_MAPPING. This variable has the same structure as DB_SERVICE_PREFIX_MAPPING.

Prerequisites

  • You have created a datasource based on the values of the environment variables.
  • You have ensured that consistent data reads and writes are possible between the database and the transaction manager communicating over the JDBC object store. For more information, see Configuring JDBC data sources.

Procedure

  • Set up and configure the JDBC object store through the S2I environment variable.

    Example

# Narayana JDBC objectstore configuration via s2i env variables
- name: TX_DATABASE_PREFIX_MAPPING
  value: 'PostgresJdbcObjectStore-postgresql=PG_OBJECTSTORE'
- name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_HOST
  value: 'postgresql'
- name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_PORT
  value: '5432'
- name: PG_OBJECTSTORE_JNDI
  value: 'java:jboss/datasources/PostgresJdbc'
- name: PG_OBJECTSTORE_DRIVER
  value: 'postgresql'
- name: PG_OBJECTSTORE_DATABASE
  value: 'sampledb'
- name: PG_OBJECTSTORE_USERNAME
  value: 'admin'
- name: PG_OBJECTSTORE_PASSWORD
  value: 'admin'

Verification

  • You can verify both the datasource configuration and the transaction subsystem configuration by checking the standalone.xml configuration file: oc rsh <podname> cat /opt/server/standalone/configuration/standalone.xml.

    Expected output:

    <datasource jta="false" jndi-name="java:jboss/datasources/PostgresJdbcObjectStore" pool-name="postgresjdbcobjectstore_postgresqlObjectStorePool"
        enabled="true" use-java-context="true" statistics-enabled="${wildfly.datasources.statistics-enabled:${wildfly.statistics-enabled:false}}">
        <connection-url>jdbc:postgresql://postgresql:5432/sampledb</connection-url>
        <driver>postgresql</driver>
        <security>
            <user-name>admin</user-name>
            <password>admin</password>
        </security>
    </datasource>
    
    <!-- under subsystem urn:jboss:domain:transactions -->
    <jdbc-store datasource-jndi-name="java:jboss/datasources/PostgresJdbcObjectStore">
         <!-- the pod name was named transactions-xa-0 -->
        <action table-prefix="ostransactionsxa0"/>
        <communication table-prefix="ostransactionsxa0"/>
        <state table-prefix="ostransactionsxa0"/>
    </jdbc-store>

7.1.7.5. Transaction recovery during scaledown

When you deploy applications that use transactions in a JBoss EAP application server, it is crucial to understand what happens during a cluster scaledown. Decreasing the number of active JBoss EAP replicas can leave in-doubt or heuristic transactions that need to be completed (or, in the heuristic case, manually solved). This situation is a consequence of the XA standard, in which transactions declared as prepared promise to complete successfully. XA transactions can also complete with a heuristic outcome, which then needs to be manually solved. Shutting down pods that are managing such in-doubt or heuristic transactions can lead to data inconsistencies, data loss, or data locks.

The JBoss EAP operator provides scaledown functionality to ensure all transactions finish before it reduces the number of replicas. This functionality verifies that all transactions in a pod are completed or solved, and only then does the operator mark the pod as clean for termination.

For more information, see WildFly Operator User Guide.

Important
  • Directly decreasing the replica size of the StatefulSet, or deleting the pod itself, has no effect. Such changes are reverted automatically.
  • Deleting the entire JBoss EAP server definition (oc delete wildflyserver <deployment_name>) does not initiate a transaction recovery process. The pod terminates regardless of unfinished transactions. To remove the deployment safely without data inconsistencies, first scale down the number of pods to zero, wait until all pods terminate, and then delete the JBoss EAP instance.
  • Ensure that you enable the Narayana recovery listener in the JBoss EAP transaction subsystem. Without it, the scaledown transaction recovery processing is skipped for that particular JBoss EAP pod (a CLI sketch follows this list).
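
For reference, the recovery listener can be enabled with a management CLI command along the following lines (a sketch; verify the attribute name against your JBoss EAP version):

/subsystem=transactions:write-attribute(name=recovery-listener, value=true)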

Procedure

  1. To decrease the replica size in your JBoss EAP application server, do one of the following:

    • Patch the replica size:

      oc patch wildflyserver <name> -p '[{"op":"replace", "path":"/spec/replicas", "value":0}]' --type json
    • Manually edit the replica size:

      oc edit wildflyserver <name>

7.1.7.6. Scaledown process

When the scaledown process begins, you will notice that the pod state (oc get pod <pod_name>) still shows as Running. In this state, the operator allows the pod to complete all unfinished transactions, including remote enterprise bean calls targeting it. To observe the scaledown process, you can monitor the status of the JBoss EAP instance. Use oc describe wildflyserver <name> to see the pod statuses.
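
For example, you can watch both views with the commands already shown in this section:

oc get pod <pod_name> -w
oc describe wildflyserver <name>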


ACTIVE

The pod actively processes requests.

SCALING_DOWN_RECOVERY_INVESTIGATION

The pod is under investigation to find out if there are transactions that did not complete their lifecycle successfully.

SCALING_DOWN_RECOVERY_PROCESSING

There are in-doubt transactions in the log store. The pod cannot be terminated until these transactions are either completed or cleaned.

SCALING_DOWN_RECOVERY_HEURISTICS

There are heuristic transactions in the log store. The pod cannot be terminated until these transactions are either manually solved or cleaned.

SCALING_DOWN_CLEAN

The pod has completed the transaction scaledown process and is clean for removal from the cluster.

If you want to disable transaction recovery during scaledown, you can set the WildFlyServerSpec.DeactivateTransactionRecovery property to true (by default, it is set to false). When you enable DeactivateTransactionRecovery, in-doubt and heuristic transactions are not finalized or reported, which can lead to data inconsistency or loss when you use distributed transactions.
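
In the custom resource, this corresponds to a spec field along the following lines (a sketch, assuming the property is exposed as deactivateTransactionRecovery, matching the camel-case naming of the other spec fields):

spec:
  deactivateTransactionRecovery: true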

Heuristic Transactions

The outcome of XA transactions can be commit, roll-back, or heuristic. The heuristic outcome represents the acknowledgment that some participants in the distributed transaction did not complete according to the outcome decided in the first phase of the two-phase commit protocol, which is used to complete XA transactions. As a consequence, heuristic transactions require manual intervention to enforce the correct outcome, that is, the outcome the transaction coordinator communicated to all participants during the first phase.

If a JBoss EAP pod is handling a heuristic transaction, that pod is labeled as SCALING_DOWN_RECOVERY_HEURISTICS. The administrator must manually connect to the specific JBoss EAP pod (using jboss-cli) and resolve the heuristic transaction. Once all such records are solved or removed from the transaction object store, the operator labels the pod as SCALING_DOWN_CLEAN, and the pod is terminated.
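
For orientation, heuristic records can typically be inspected and removed with management CLI operations along these lines (a sketch; see Managing Transactions on JBoss EAP for the authoritative procedure):

/subsystem=transactions/log-store=log-store:probe
ls /subsystem=transactions/log-store=log-store/transactions
/subsystem=transactions/log-store=log-store/transactions=<transaction_id>:delete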

StatefulSet Behavior
The StatefulSet ensures stable network hostnames, which depend on the ordering of pods. Pods are named in a defined order, requiring the termination of pod-1 before pod-0. If pod-1 is in SCALING_DOWN_RECOVERY_HEURISTICS and pod-0 is in SCALING_DOWN_CLEAN, pod-0 will linger in its state until pod-1 is terminated. Even if the pod is in SCALING_DOWN_CLEAN, it does not receive new requests and remains idle.

7.1.8. Automatically scaling an application with the horizontal pod autoscaler

With the EAP operator, you can use a horizontal pod autoscaler (HPA) to automatically increase or decrease the scale of an EAP application based on metrics collected from the pods that belong to that EAP application.

Note

Using HPA ensures that transaction recovery is still handled when a pod is scaled down.

Procedure

  1. Configure the resources:

    apiVersion: wildfly.org/v1alpha1
    kind: WildFlyServer
    metadata:
      name: eap-helloworld
    spec:
      applicationImage: 'eap-helloworld:latest'
      replicas: 1
      resources:
        limits:
          cpu: 500m
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 1Gi
    Important

    You must specify the resource limits and requests for containers in a pod for autoscaling to work as expected.

  2. Create the horizontal pod autoscaler:

    oc autoscale wildflyserver/eap-helloworld --cpu-percent=50 --min=1 --max=10

Verification

  • You can verify the HPA behavior by checking the replicas. The number of replicas increases or decreases as the workload increases or decreases.
oc get hpa -w
NAME               REFERENCE                        TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
eap-helloworld   WildFlyServer/eap-helloworld   217%/50%   1         10        1          4s
eap-helloworld   WildFlyServer/eap-helloworld   217%/50%   1         10        4          17s
eap-helloworld   WildFlyServer/eap-helloworld   133%/50%   1         10        8          32s
eap-helloworld   WildFlyServer/eap-helloworld   133%/50%   1         10        10         47s
eap-helloworld   WildFlyServer/eap-helloworld   139%/50%   1         10        10         62s
eap-helloworld   WildFlyServer/eap-helloworld   180%/50%   1         10        10         92s
eap-helloworld   WildFlyServer/eap-helloworld   133%/50%   1         10        10         2m2s

For JBoss EAP to work correctly with enterprise bean remoting calls between different JBoss EAP clusters on OpenShift, you must understand the enterprise bean remoting configuration options on OpenShift.

Note

When deploying on OpenShift, consider the use of the EAP operator. The EAP operator uses StatefulSet for the appropriate handling of enterprise bean remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted.

Network hostname stability is required when the JBoss EAP instance is contacted using an enterprise bean remote call with transaction propagation. The JBoss EAP instance must be reachable under the same hostname even if the pod restarts. The transaction manager, which is a stateful component, binds the persisted transaction data to a particular JBoss EAP instance. Because the transaction log is bound to a specific JBoss EAP instance, it must be completed in the same instance.

To prevent data loss when the JDBC transaction log store is used, make sure your database provides data-consistent reads and writes. Consistent data reads and writes are important when the database is scaled horizontally with multiple instances.

An enterprise bean remote caller has two options to configure the remote calls:

  • Define a remote outbound connection.
  • Use a programmatic JNDI lookup for the bean.

You must reconfigure the value representing the address of the target node depending on the enterprise bean remote call configuration method.

Note

The name of the target node for the enterprise bean remote call must be the DNS address of the first pod.

The StatefulSet behavior depends on the ordering of the pods. The pods are named in a predefined order. For example, if you scale your application to three replicas, your pods have names such as eap-server-0, eap-server-1, and eap-server-2.

The EAP operator also uses a headless service that ensures a specific DNS hostname is assigned to the pod. If the application uses the EAP operator, a headless service is created with a name such as eap-server-headless. In this case, the DNS name of the first pod is eap-server-0.eap-server-headless.

The use of the hostname eap-server-0.eap-server-headless ensures that the enterprise bean call reaches any EAP instance connected to the cluster. A bootstrap connection is used to initialize the Jakarta Enterprise Beans client, which gathers the structure of the EAP cluster as the next step.
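
For example, a remote outbound connection on the caller can point at that stable DNS name. The following standalone.xml fragments are a sketch (the names remote-ejb and remote-ejb-connection are placeholders; the elements belong to the standard socket-binding-group and remoting subsystem configuration):

<outbound-socket-binding name="remote-ejb">
    <remote-destination host="eap-server-0.eap-server-headless" port="8080"/>
</outbound-socket-binding>

<!-- in the remoting subsystem, under outbound-connections -->
<remote-outbound-connection name="remote-ejb-connection" outbound-socket-binding-ref="remote-ejb"/>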

You must configure the JBoss EAP servers that act as callers for enterprise bean remoting. The target server must configure a user with permission to receive the enterprise bean remote calls.

Prerequisites

  • You have used the EAP operator and the supported JBoss EAP for OpenShift S2I image for deploying and managing the JBoss EAP application instances on OpenShift.
  • Clustering is configured correctly. For more information about JBoss EAP clustering, see the Clustering section.

Procedure

  1. Create a user in the target server with permission to receive the enterprise bean remote calls:

    $JBOSS_HOME/bin/add-user.sh
  2. Configure the caller JBoss EAP application server.

    1. Create the eap-config.xml file in $JBOSS_HOME/standalone/configuration using the custom configuration functionality. For more information, see Custom Configuration.
    2. Configure the caller JBoss EAP application server with the wildfly.config.url property:

      JAVA_OPTS_APPEND="-Dwildfly.config.url=$JBOSS_HOME/standalone/configuration/eap-config.xml"
      Note

      If you use the following example for your configuration, replace the PASTE_USER_NAME_HERE and PASTE_PASSWORD_HERE placeholders with the username and password you configured.

      Example Configuration
<configuration>
   <authentication-client xmlns="urn:elytron:1.0">
      <authentication-rules>
         <rule use-configuration="jta">
            <match-abstract-type name="jta" authority="jboss" />
         </rule>
      </authentication-rules>
      <authentication-configurations>
         <configuration name="jta">
            <sasl-mechanism-selector selector="DIGEST-MD5" />
            <providers>
               <use-service-loader />
            </providers>
            <set-user-name name="PASTE_USER_NAME_HERE" />
            <credentials>
               <clear-password password="PASTE_PASSWORD_HERE" />
            </credentials>
            <set-mechanism-realm name="ApplicationRealm" />
         </configuration>
      </authentication-configurations>
   </authentication-client>
</configuration>