Chapter 6. xPaaS Middleware Images


6.1. Overview

This topic group includes information on the different xPaaS middleware images available for OpenShift users.

6.2. Red Hat JBoss Enterprise Application Platform (JBoss EAP) xPaaS Images

6.2.1. Overview

Red Hat offers a containerized xPaaS image for the Red Hat JBoss Enterprise Application Platform (JBoss EAP) that is designed for use with OpenShift. Using this image, developers can quickly and easily build, scale, and test applications deployed across hybrid environments.

6.2.2. Comparing the Product and Image

The xPaaS JBoss EAP images differ from the JBoss EAP product in several ways:

  1. The JBoss EAP Management Console is not included in the image, so it cannot be used to manage xPaaS JBoss EAP images.
  2. The JBoss EAP Management CLI is included in the xPaaS JBoss EAP image, but it can only access the Management CLI of a container from within that container’s pod.
  3. Domain mode is not supported in the xPaaS JBoss EAP image. Instead, OpenShift manages the creation and distribution of applications in the containers.
  4. The image’s default root page is disabled. Deploy your own application to the root context as ROOT.war.
  5. The EAP 6.4 image supports A-MQ for inter-pod and remote messaging. HornetQ is only supported for intra-pod messaging and only enabled when A-MQ is absent. The EAP 7 Beta image includes Artemis as a replacement for HornetQ.

For further information about JBoss EAP functionality and features independent from the JBoss EAP image, see the JBoss EAP documentation on the Red Hat Customer Portal.

6.2.3. Comparing the xPaaS JBoss EAP 6.4 and 7.0 Beta Images

Red Hat offers two xPaaS EAP images for use with OpenShift. The first is based on JBoss EAP 6.4 and the second is based on JBoss EAP 7 Beta. There are several differences between the two images:

JBoss Web is replaced by Undertow

  • The xPaaS JBoss EAP 6.4 image uses JBoss Web.
  • The xPaaS JBoss EAP 7 Beta image uses Undertow instead of JBoss Web. This change only affects users implementing custom JBoss Web Valves in their applications. Affected users must refer to the Red Hat JBoss EAP 7 Beta documentation for details about migrating JBoss EAP Web Valve handlers.

HornetQ is replaced by Artemis

  • The EAP 6.4 image only uses HornetQ for intra-pod messaging when A-MQ is absent.
  • The EAP 7 Beta image uses Artemis instead of HornetQ. This change resulted in renaming the HORNETQ_QUEUES and HORNETQ_TOPICS environment variables to MQ_QUEUES and MQ_TOPICS respectively. For complete instructions to deal with migrating applications from JBoss EAP 6.4 to 7 Beta, see the JBoss EAP 7 Beta Migration Guide.

6.2.4. Compatibility with xPaaS JBoss EAP

See the xPaaS section of the OpenShift and Atomic Platform Tested Integrations page for details about OpenShift EAP image version compatibility.

6.2.5. Setting Up the xPaaS JBoss EAP Image

The following is a list of prerequisites for using the xPaaS JBoss EAP images:

  1. Acquire Red Hat Subscriptions - Ensure that you have the relevant subscriptions for OpenShift as well as a subscription for xPaaS Middleware.
  2. Install OpenShift - Before using the xPaaS JBoss EAP images, you must have an OpenShift environment installed and configured:

    1. The Quick Installation method allows you to install OpenShift using an interactive CLI utility.
    2. The Advanced Installation method allows you to install OpenShift using a reference configuration. This method is best suited for production environments.
  3. Install and Deploy Docker Registry - Install the Docker Registry and then ensure that the Docker Registry is deployed to locally manage images as follows:

    $ oadm registry --config=/etc/origin/master/admin.kubeconfig --credentials=/etc/origin/master/openshift-registry.kubeconfig

    For further information, see Deploying a Docker Registry.

  4. Deploy a Router - Use the instructions at the Deploying a Router page for this step.
  5. Privileges - Ensure that you can run the oc create command with cluster-admin privileges.
  6. Create Image Streams - Image streams are configured during the Quick or Advanced OpenShift Installation. If required, manually create the image streams for both versions of the xPaaS JBoss EAP image as follows:

    $ oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v1.1/xpaas-streams/jboss-image-streams.json -n openshift
    Note

    For further information about creating image streams, see Loading the Default Image Streams and Templates.

  7. Create Instant App Templates - Instant App templates define a full set of objects for running applications and are configured during the Quick or Advanced OpenShift Installation. If required, create Instant App templates as follows:

    1. Create the core Instant App templates:

      $ oc create -f openshift-ansible/roles/openshift_examples/files/examples/quickstart-templates -n openshift
    2. Register Instant App templates for xPaaS Middleware products:

      $ oc create -f openshift-ansible/roles/openshift_examples/files/examples/xpaas-templates -n openshift

6.2.6. Modifying the JDK Used by the xPaaS JBoss EAP Image

The xPaaS JBoss EAP 6.4 image includes OpenJDK 1.7 and 1.8, with OpenJDK 1.8 as the default. The xPaaS JBoss EAP 7 Beta image only includes and supports OpenJDK 1.8.

To change the JDK version used by the xPaaS JBoss EAP 6.4 image:

  1. Ensure that the pom.xml file specifies that the code must be built using the intended JDK version.
  2. In the S2I application template, configure the image’s JAVA_HOME environment variable to point to the intended JDK version. For example:

    Example 6.1. Setting the JDK version

    Change the defined value to point to the required version of the JDK.

    name: "JAVA_HOME"
    value: "/usr/lib/jvm/java-1.7.0"
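    For context, the two lines above typically sit in the env list of the application template’s deployment configuration. A minimal sketch, assuming a container named eap-app (all surrounding field names are illustrative, not taken from the product templates):

    ```yaml
    # Hedged sketch of an application template fragment; only the
    # JAVA_HOME entry comes from the example above.
    spec:
      containers:
      - name: eap-app
        env:
        - name: "JAVA_HOME"
          value: "/usr/lib/jvm/java-1.7.0"
    ```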

6.2.7. Getting Started Using xPaaS JBoss EAP Images

6.2.7.1. Configuring the xPaaS JBoss EAP Images

You can change the configuration for the xPaaS JBoss EAP images either by using the S2I (Source-to-Image) templates, or by using a modified xPaaS JBoss EAP image. Red Hat recommends using the S2I method to configure the xPaaS JBoss EAP image.

6.2.7.2. Configuring the xPaaS JBoss EAP Image using the S2I Templates

The recommended method to run and configure the xPaaS JBoss EAP image is to use the OpenShift S2I process together with the application template parameters and environment variables.

Note

The variable EAP_HOME is used to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation.

The S2I process for the xPaaS JBoss EAP image works as follows:

  1. If a pom.xml file is present in the source repository, a Maven build using the contents of the $MAVEN_ARGS environment variable is triggered. By default, the package goal is used with the openshift profile, which includes system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga). The results of a successful Maven build are copied to EAP_HOME/standalone/deployments. This includes all JAR, WAR, and EAR files from the source repository directory specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target directory.
  2. Any JAR, WAR, and EAR files in the deployments source repository directory are copied to the EAP_HOME/standalone/deployments directory.
  3. All files in the configuration source repository directory are copied to EAP_HOME/standalone/configuration. If you want to use a custom JBoss EAP configuration file, it should be named standalone-openshift.xml.
  4. All files in the modules source repository directory are copied to EAP_HOME/modules.
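The four steps above imply a source repository layout like the following; apart from the deployments, configuration, and modules directories and the standalone-openshift.xml file name, every name shown is illustrative:

```text
pom.xml                                   # step 1: triggers the Maven build
src/main/java/...                         # application source built by Maven
deployments/extra-library.jar             # step 2: copied to EAP_HOME/standalone/deployments
configuration/standalone-openshift.xml    # step 3: copied to EAP_HOME/standalone/configuration
modules/com/example/main/module.xml       # step 4: copied to EAP_HOME/modules
```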

6.2.7.3. Using a Modified xPaaS JBoss EAP Image

You can make changes to an image or create a custom image to use in OpenShift.

The JBoss EAP configuration file used by OpenShift in the xPaaS JBoss EAP image is EAP_HOME/standalone/configuration/standalone-openshift.xml. The script to start JBoss EAP is EAP_HOME/bin/openshift-launch.sh.

Important

Ensure that you have read the guidelines for creating images and follow them when creating a modified image.

To use a modified image in OpenShift:

Warning

This procedure results in losing configuration placeholders for various settings such as datasources, messaging, HTTPS, Keycloak, and so on. As a workaround, create a duplicate copy of the standalone.xml file to edit. After all edits are complete, compare the original and edited versions, and copy any placeholder values from the original into the edited version to retain them.

  1. Run the xPaaS JBoss EAP image using Docker.
  2. Make the required changes using the JBoss EAP Management CLI by running the script at EAP_HOME/bin/jboss-cli.sh.
  3. Commit the changed container as a new image and then use the modified image in OpenShift.
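A sketch of this three-step procedure as a command transcript, assuming a local Docker daemon; the image names and tags are assumptions, not the registry’s actual coordinates:

```
# 1. Run the xPaaS JBoss EAP image (image name is an assumption).
$ docker run -d --name eap-custom <eap-image>

# 2. Apply changes with the Management CLI inside the running container;
#    replace EAP_HOME with the actual installation path inside the image.
$ docker exec -it eap-custom EAP_HOME/bin/jboss-cli.sh --connect

# 3. Commit the changed container as a new image for use in OpenShift
#    (target image name is an assumption).
$ docker commit eap-custom my-registry/eap64-openshift:custom
```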

6.2.7.4. Troubleshooting

If an application is not starting, use the following command to view details to locate and troubleshoot the problem:

$ oc describe po <pod_name>

To troubleshoot running xPaaS JBoss EAP containers, you can either view the OpenShift logs, or view the JBoss EAP logs displayed to the container’s console. Use the following command to view the JBoss EAP logs:

$ oc logs -f <pod_name> <container_name>
Note

By default, the xPaaS JBoss EAP image does not have a file log handler configured. Logs are therefore only sent to the console.

6.3. Red Hat JBoss A-MQ xPaaS Image

6.3.1. Overview

Red Hat JBoss A-MQ (JBoss A-MQ) is available as a containerized xPaaS image that is designed for use with OpenShift. It allows developers to quickly deploy an A-MQ message broker in a hybrid cloud environment.

Important

There are significant differences in supported configurations and functionality in the JBoss A-MQ image compared to the regular release of JBoss A-MQ.

This topic details the differences between the JBoss A-MQ xPaaS image and the regular release of JBoss A-MQ, and provides instructions specific to running and configuring the JBoss A-MQ xPaaS image. Documentation for other JBoss A-MQ functionality not specific to the JBoss A-MQ xPaaS image can be found in the JBoss A-MQ documentation on the Red Hat Customer Portal.

6.3.2. Differences Between the JBoss A-MQ xPaaS Image and the Regular Release of JBoss A-MQ

There are several major functionality differences in the OpenShift JBoss A-MQ xPaaS image:

  • The Karaf shell is not available.
  • The Fuse Management Console (Hawtio) is not available.
  • Configuration of the broker can be performed either by specifying application template parameters or through the S2I process, as described in the following sections.

6.3.3. Using the JBoss A-MQ xPaaS Image Streams and Application Templates

The Red Hat xPaaS middleware images were automatically created during the installation of OpenShift along with the other default image streams and templates.

6.3.4. Configuring the JBoss A-MQ Image

6.3.4.1. Application Template Parameters

Basic configuration of the JBoss A-MQ xPaaS image is performed by specifying values of application template parameters. The following parameters can be configured:

AMQ_RELEASE
The JBoss A-MQ release version. This determines which JBoss A-MQ image will be used as a basis for the application. At the moment, only version 6.2 is available.
APPLICATION_NAME
The name of the application used internally in OpenShift. It is used in names of services, pods, and other objects within the application.
MQ_USERNAME
The user name used for authentication to the broker. In a standard non-containerized JBoss A-MQ, you would specify the user name in the AMQ_HOME/opt/user.properties file. If no value is specified, a random user name is generated.
MQ_PASSWORD
The password used for authentication to the broker. In a standard non-containerized JBoss A-MQ, you would specify the password in the AMQ_HOME/opt/user.properties file. If no value is specified, a random password is generated.
AMQ_ADMIN_USERNAME
The user name used as an admin authentication to the broker. If no value is specified, a random user name is generated.
AMQ_ADMIN_PASSWORD
The password used for authentication to the broker. If no value is specified, a random password is generated.
MQ_PROTOCOL
Comma-separated list of the messaging protocols used by the broker. Available options are amqp, mqtt, openwire, and stomp. If left empty, all protocols are enabled. Note that for integration of the image with Red Hat JBoss Enterprise Application Platform, the openwire protocol must be specified; other protocols can optionally be specified as well.
MQ_QUEUES
Comma-separated list of queues available by default on the broker on its startup.
MQ_TOPICS
Comma-separated list of topics available by default on the broker on its startup.
AMQ_SECRET
The name of a secret containing SSL related files.
AMQ_TRUSTSTORE
The SSL trust store filename.
AMQ_KEYSTORE
The SSL key store filename.
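As an illustration, these parameters can be supplied when instantiating an A-MQ application template from the CLI. The template name amq62-basic and all parameter values below are assumptions, not fixed names:

```
$ oc new-app amq62-basic \
    -p APPLICATION_NAME=broker \
    -p MQ_PROTOCOL=openwire \
    -p MQ_QUEUES=orders,invoices \
    -p MQ_USERNAME=amquser \
    -p MQ_PASSWORD=amqpass
```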

6.3.4.2. Configuration Using S2I

Configuration of the JBoss A-MQ image can also be modified using the Source-to-image feature, described in full detail at S2I Requirements.

Custom A-MQ broker configuration can be specified by creating an openshift-activemq.xml file in the root directory of your application’s Git repository. On each commit, the file will be copied to the conf directory in the A-MQ root and its contents used to configure the broker.

6.3.5. Configuring the JBoss A-MQ Persistent Image

6.3.5.1. Application Template Parameters

Basic configuration of the JBoss A-MQ Persistent xPaaS image is performed by specifying values of application template parameters. The following parameters can be configured:

AMQ_RELEASE
The JBoss A-MQ release version. This determines which JBoss A-MQ image will be used as a basis for the application. At the moment, only version 6.2 is available.
APPLICATION_NAME
The name of the application used internally in OpenShift. It is used in names of services, pods, and other objects within the application.
MQ_PROTOCOL
Comma-separated list of the messaging protocols used by the broker. Available options are amqp, mqtt, openwire, and stomp. If left empty, all protocols are enabled. Note that for integration of the image with Red Hat JBoss Enterprise Application Platform, the openwire protocol must be specified; other protocols can optionally be specified as well.
MQ_QUEUES
Comma-separated list of queues available by default on the broker on its startup.
MQ_TOPICS
Comma-separated list of topics available by default on the broker on its startup.
VOLUME_CAPACITY
The size of the persistent storage for database volumes.
MQ_USERNAME
The user name used for authentication to the broker. In a standard non-containerized JBoss A-MQ, you would specify the user name in the AMQ_HOME/opt/user.properties file. If no value is specified, a random user name is generated.
MQ_PASSWORD
The password used for authentication to the broker. In a standard non-containerized JBoss A-MQ, you would specify the password in the AMQ_HOME/opt/user.properties file. If no value is specified, a random password is generated.
AMQ_ADMIN_USERNAME
The user name used as an admin authentication to the broker. If no value is specified, a random user name is generated.
AMQ_ADMIN_PASSWORD
The password used for authentication to the broker. If no value is specified, a random password is generated.
AMQ_SECRET
The name of a secret containing SSL related files.
AMQ_TRUSTSTORE
The SSL trust store filename.
AMQ_KEYSTORE
The SSL key store filename.

For more information, see Using Persistent Volumes.

6.3.6. Security

Only SSL connections can connect from outside of the OpenShift instance, regardless of the protocol specified in the MQ_PROTOCOL property of the A-MQ application templates. The non-SSL version of the protocols can only be used inside the OpenShift instance.

For security reasons, using the default KeyStore and TrustStore generated by the system is discouraged. It is recommended to generate your own KeyStore and TrustStore and supply them to the image using the OpenShift secrets mechanism or S2I.
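A sketch of one way to do this with the JDK’s keytool and the OpenShift secrets mechanism; the alias, passwords, file names, and secret name are all illustrative:

```
# Generate a self-signed key pair for the broker.
$ keytool -genkeypair -alias broker -keyalg RSA -keystore broker.ks \
    -storepass changeit -dname "CN=broker.example.com"

# Export the certificate and import it into a client TrustStore.
$ keytool -exportcert -alias broker -keystore broker.ks \
    -storepass changeit -file broker.crt
$ keytool -importcert -alias broker -keystore broker.ts \
    -storepass changeit -file broker.crt -noprompt

# Package both stores into a secret that the A-MQ template can reference
# through the AMQ_SECRET, AMQ_KEYSTORE, and AMQ_TRUSTSTORE parameters.
$ oc secrets new amq-app-secret broker.ks broker.ts
```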

6.3.7. High-Availability and Scalability

The JBoss xPaaS A-MQ image is supported in two modes:

  1. A single A-MQ pod mapped to a Persistent Volume for message persistence. This mode provides message High Availability and guaranteed messaging but does not provide scalability.
  2. Multiple A-MQ pods using local message persistence (i.e. no mapped Persistent Volume). This mode provides scalability but does not provide message High Availability or guaranteed messaging.

6.3.8. Logging

In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss A-MQ image by viewing the JBoss A-MQ logs that are output to the container’s console:

$ oc logs -f <pod_name> <container_name>
Note

By default, the OpenShift JBoss A-MQ xPaaS image does not have a file log handler configured. Logs are only sent to the console.

6.4. Red Hat JBoss Web Server xPaaS Images

6.4.1. Overview

The Apache Tomcat 7 and Apache Tomcat 8 components of Red Hat JBoss Web Server 3 are available as containerized xPaaS images that are designed for use with OpenShift. Developers can use these images to quickly build, scale, and test Java web applications deployed across hybrid environments.

Important

There are significant differences in the functionality between the JBoss Web Server xPaaS images and the regular release of JBoss Web Server.

This topic details the differences between the JBoss Web Server xPaaS images and the regular release of JBoss Web Server, and provides instructions specific to running and configuring the JBoss Web Server xPaaS images. Documentation for other JBoss Web Server functionality not specific to the JBoss Web Server xPaaS images can be found in the JBoss Web Server documentation on the Red Hat Customer Portal.

Inside a JBoss Web Server xPaaS image, the JWS_HOME/tomcat<version>/ directory is located at /opt/webserver/.

6.4.2. Functionality Differences in the OpenShift JBoss Web Server xPaaS Images

A major functionality difference compared to the regular release of JBoss Web Server is that there is no Apache HTTP Server in the OpenShift JBoss Web Server xPaaS images. All load balancing in OpenShift is handled by the OpenShift router, so there is no need for a load-balancing Apache HTTP Server with mod_cluster or mod_jk connectors.

6.4.3. Using the JBoss Web Server xPaaS Image Streams and Application Templates

The Red Hat xPaaS middleware images were automatically created during the installation of OpenShift along with the other default image streams and templates.

Note

The JBoss Web Server xPaaS application templates are distributed as two sets: one set for Tomcat 7, and another for Tomcat 8.

6.4.4. Using the JBoss Web Server xPaaS Image Source-to-Image (S2I) Process

To run and configure the OpenShift JBoss Web Server xPaaS images, use the OpenShift S2I process with the application template parameters and environment variables.

The S2I process for the JBoss Web Server xPaaS images works as follows:

  1. If there is a pom.xml file in the source repository, a Maven build is triggered with the contents of the $MAVEN_ARGS environment variable.

    By default the package goal is used with the openshift profile, including the system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga).

    The results of a successful Maven build are copied to /opt/webserver/webapps. This includes all WAR files from the source repository directory specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target directory.

  2. All WAR files from the deployments source repository directory are copied to /opt/webserver/webapps.
  3. All files in the configuration source repository directory are copied to /opt/webserver/conf.

    Note

    If you want to use custom Tomcat configuration files, the file names should be the same as for a normal Tomcat installation. For example, context.xml and server.xml.
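
For example, a context.xml placed in the configuration source directory might define a datasource; the driver, URL, and credentials below are assumptions for illustration only:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copied by the S2I process to /opt/webserver/conf/context.xml -->
<Context>
    <!-- Hypothetical JDBC resource; adjust all values for your environment. -->
    <Resource name="jdbc/ExampleDS" auth="Container"
              type="javax.sql.DataSource"
              driverClassName="org.postgresql.Driver"
              url="jdbc:postgresql://postgresql:5432/sampledb"
              username="dbuser" password="dbpass"/>
</Context>
```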

6.4.5. Troubleshooting

In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss Web Server container by viewing the logs that are output to the container’s console:

$ oc logs -f <pod_name> <container_name>

Additionally, access logs are written to /opt/webserver/logs/.

6.5. Red Hat JBoss Fuse Integration Services

6.5.1. Overview

Red Hat JBoss Fuse Integration Services provides a set of tools and containerized xPaaS images that enable development, deployment, and management of integration microservices within OpenShift.

Important

There are significant differences in supported configurations and functionality in Fuse Integration Services compared to the standalone JBoss Fuse product.

6.5.1.1. Differences Between Fuse Integration Services and JBoss Fuse

There are several major functionality differences:

  • Fuse Management Console is not included as Fuse administration views have been integrated directly within the OpenShift Web Console.
  • An application deployment with Fuse Integration Services consists of an application and all required runtime components packaged inside a Docker image. Applications are not deployed to a runtime as with Fuse; instead, the application image itself is a complete runtime environment that is deployed and managed through OpenShift.
  • Patching in an OpenShift environment is different from standalone Fuse since each application image is a complete runtime environment. To apply a patch, the application image is rebuilt and redeployed within OpenShift. Core OpenShift management capabilities allow for rolling upgrades and side-by-side deployment to maintain availability of your application during upgrade.
  • Provisioning and clustering capabilities provided by Fabric in Fuse have been replaced with equivalent functionality in Kubernetes and OpenShift. There is no need to create or configure individual child containers as OpenShift automatically does this for you as part of deploying and scaling your application.
  • Messaging services are created and managed using the A-MQ xPaaS images for OpenShift and not included directly within Fuse. Fuse Integration Services provides an enhanced version of the camel-amq component to allow for seamless connectivity to messaging services in OpenShift through Kubernetes.
  • Live updates to running Karaf instances using the Karaf shell are strongly discouraged, because the updates will not be preserved if an application container is restarted or scaled up. This is a fundamental tenet of immutable architecture and essential to achieving scalability and flexibility within OpenShift.

Additional details on technical differences and support scope are documented in an associated KCS article.

6.5.2. Using Fuse Integration Services

You can start using Fuse Integration Services by creating an application and deploying it to OpenShift using one of the following application development workflows:

  • Fabric8 Maven Workflow
  • OpenShift Source-to-Image (S2I) Workflow

Both workflows begin with creating a new project from a Maven archetype.

6.5.2.1. Maven Archetypes Catalog

The Maven Archetype catalog includes the following examples:

cdi-camel-http-archetype

Creates a new Camel route using CDI in a standalone Java Container calling the remote camel-servlet quickstart

cdi-cxf-archetype

Creates a new CXF JAX-RS service using CDI running in a standalone Java Container

cdi-camel-archetype

Creates a new Camel route using CDI in a standalone Java Container

cdi-camel-jetty-archetype

Creates a new Camel route using CDI in a standalone Java Container using Jetty as HTTP server

java-simple-mainclass-archetype

Creates a new Simple standalone Java Container (main class)

java-camel-spring-archetype

Creates a new Camel route using Spring XML in a standalone Java container

karaf-cxf-rest-archetype

Creates a new RESTful WebService Example using JAX-RS

karaf-camel-rest-sql-archetype

Creates a new Camel Example using Rest DSL with SQL Database

karaf-camel-log-archetype

Creates a new Camel Log Example

Begin by selecting the archetype which matches the type of application you would like to create.

6.5.2.2. Create an Application from the Maven Archetype Catalog

Before creating a sample project, you must configure the Maven repositories that hold the archetypes and artifacts you may need.

Use the Maven archetype catalog to create a sample project with the required resources. The command to create a sample project is:

$ mvn archetype:generate \
  -DarchetypeCatalog=https://repo.fusesource.com/nexus/content/groups/public/archetype-catalog.xml \
  -DarchetypeGroupId=io.fabric8.archetypes \
  -DarchetypeVersion=2.2.0.redhat-079 \
  -DarchetypeArtifactId=<archetype-name>
Note

Replace <archetype-name> with the name of the archetype that you want to use. For example, karaf-camel-log-archetype creates a new Camel log example.

This creates a Maven project with all required dependencies. Maven properties and plug-ins that are used to create Docker images are added to the pom.xml file.

6.5.2.3. Fabric8 Maven Workflow

This workflow creates a new project based on a Maven application template from the archetype catalog. The catalog provides examples of Java and Karaf projects and supports both the S2I and Maven deployment workflows.

  1. Set the following environment variables to communicate with OpenShift and a Docker daemon:

    DOCKER_HOST
    Specifies the connection to a Docker daemon used to build an application Docker image. Example: tcp://10.1.2.2:2375

    KUBERNETES_MASTER
    Specifies the URL for contacting the OpenShift API server. Example: https://10.1.2.2:8443

    KUBERNETES_DOMAIN
    Domain used for creating routes. Your OpenShift API server must be mapped to all hosts of this domain. Example: openshift.dev

  2. Log in to OpenShift using the CLI and select the project to which to deploy.

    $ oc login
    
    $ oc project <projectname>
  3. Create a sample project as described in Create an Application from the Maven Archetype Catalog.
  4. Build and push the project to OpenShift. You can use the following Maven goals to build and push Docker images.

    docker:build

    Builds the Docker image for your Maven project.

    docker:push

    Pushes the locally built Docker image to the global or a local Docker registry. This step is optional when developing on a single-node OpenShift cluster.

    fabric8:json

    Generates the Kubernetes JSON file for your Maven project. This goal is bound to the package phase and does not need to be called explicitly when running mvn install.

    fabric8:apply

    Applies the Kubernetes JSON file to the current Kubernetes environment and namespace.

    There are a few pre-configured Maven profiles that you can use to build the project. These profiles are combinations of the above Maven goals that simplify the build process.

    mvn -Pf8-build

    Comprises clean, install, docker:build, and fabric8:json. This builds the Docker image and the JSON template for a project.

    mvn -Pf8-local-deploy

    Comprises clean, install, docker:build, fabric8:json, and fabric8:apply. This creates the Docker image and JSON template and then applies them to OpenShift.

    mvn -Pf8-deploy

    Comprises clean, docker:build, fabric8:json, docker:push, and fabric8:apply. This creates the Docker image and JSON template, pushes the image to the Docker registry, and applies the template to OpenShift.

    In this example, we will build it locally by running the command:

    $ mvn -Pf8-local-deploy
  5. Log in to the OpenShift Web Console. A pod is created for the newly created application. You can view the status of this pod and of the deployments and services that the application creates.

6.5.2.3.1. Authenticating Against a Registry

For multi-node OpenShift setups, the image created must be pushed to the OpenShift registry, and this registry must be reachable from the outside through a route. Authentication against the registry reuses the OpenShift authentication from oc login. Assuming that your OpenShift registry is exposed as registry.openshift.dev:80, the project image can be deployed to the registry with the following command:

$ mvn docker:push -Ddocker.registry=registry.openshift.dev:80 \
                  -Ddocker.username=$(oc whoami) \
                  -Ddocker.password=$(oc whoami -t)

To push changes to the registry, the OpenShift project must exist, and the user part of the Docker image name must correspond to that OpenShift project. All of the examples use the property fabric8.dockerUser as the Docker image user, which defaults to fabric8/ (note the trailing slash). When this user is used unaltered, an OpenShift project named fabric8 must exist; it can be created with oc new-project fabric8.

6.5.2.3.2. Plug-in Configuration

The docker-maven-plugin and fabric8-maven-plugin plug-ins are responsible for creating Docker images and OpenShift API objects, and they can be configured flexibly. The examples from the archetypes introduce some extra properties that can be changed when running Maven:

docker.registry

Registry to use for docker:push and -Pf8-deploy

docker.username

Username for authentication against the registry

docker.password

Password for authentication against the registry

docker.from

Base image for the application Docker image

fabric8.dockerUser

The user part of the image name. It must end with a trailing /. The default value is fabric8/.

docker.image

The final Docker image name. Default value is ${fabric8.dockerUser}${project.artifactId}:${project.version}

6.5.2.4. OpenShift Source-to-Image (S2I) Workflow

Applications are created through the OpenShift Web Console and CLI using application templates. If you have a JSON or YAML file that defines a template, you can upload the template to the project using the CLI. This saves the template to the project for repeated use by users with appropriate access to that project. You can add the remote Git repository location to the template using template parameters, which allows the application source to be pulled from the remote repository and built using the source-to-image (S2I) method.

JBoss Fuse Integration Services application templates depend on S2I builder ImageStreams, which must be created once. The OpenShift installer creates them automatically. For existing OpenShift setups, they can be created with the following command:

$ oc create -n openshift -f /usr/share/openshift/examples/xpaas-streams/fis-image-streams.json

The ImageStreams may be created in a namespace other than openshift by changing the namespace in the command and in the corresponding IMAGE_STREAM_NAMESPACE template parameter when creating applications.

6.5.2.4.1. Create an Application Using Templates
  1. Create an application template using the mvn archetype:generate command. To create an application from it, upload the template to your current project’s template library with the following command:

    $ oc create -f quickstart-template.json -n <project>

    The template is now available for selection using the web console or the CLI.

  2. Log in to the OpenShift Web Console. In the desired project, click Add to Project to create the objects from an uploaded template.
  3. Select the template from the list of templates in your project or from the global template library.
  4. Edit template parameters and then click Create. For example, template parameters for a camel-spring quickstart are:

    APP_NAME
    Application name. Default: the artifact name of the project.

    GIT_REPO
    Git repository (required).

    GIT_REF
    Git ref to build. Default: master.

    SERVICE_NAME
    Exposed service name.

    BUILDER_VERSION
    Builder version. Default: 1.0.

    APP_VERSION
    Application version. Default: the Maven project version.

    MAVEN_ARGS
    Arguments passed to mvn in the build. Default: package -DskipTests -e.

    MAVEN_ARGS_APPEND
    Extra arguments passed to mvn; for example, for multi-module builds use -pl groupId:module-artifactId -am.

    ARTIFACT_DIR
    Maven build directory. Default: target/.

    IMAGE_STREAM_NAMESPACE
    Namespace in which the JBoss Fuse ImageStreams are installed.

    BUILD_SECRET
    The secret needed to trigger a build. Generated if empty.

  5. After the application is created successfully, you can view its status by clicking the Pods tab or by running the following command:

    $ oc get pods

For more information, see Application Templates.

6.5.2.5. Developing Applications

6.5.2.5.1. Injecting Kubernetes Services into Applications

You can inject Kubernetes services into applications by labeling pods and using those labels to select the pods that together provide a logical service. These labels are simple key/value pairs.

6.5.2.5.1.1. CDI Injection

Fabric8 provides a CDI extension that you can use to inject Kubernetes resources into your applications. To use the CDI extension, first add the dependency to the project’s pom.xml file.

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-cdi</artifactId>
  <version>${fabric8.version}</version>
</dependency>

The next step is to identify the field that requires the service, then inject the service by adding a @ServiceName annotation to it. For example:

@Inject
@ServiceName("my-service")
private String service;

The @PortName annotation is used to select a specific port by name when multiple ports are defined for a service.

6.5.2.5.1.2. Using Environment Variables as Properties

You can also access a service by using environment variables that expose its fixed IP address and port: SERVICE_HOST and SERVICE_PORT. SERVICE_HOST is the host (IP) address of the service, and SERVICE_PORT is the port of the service.
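As a sketch of this pattern, the snippet below builds an endpoint URL from such variables. The MY_SERVICE_* names and address are hypothetical stand-ins for what a pod would see for a service named my-service (Kubernetes derives the variable names from the service name, upper-cased, with dashes converted to underscores):

```shell
#!/bin/sh
# Sketch: building a service endpoint from the environment variables that
# Kubernetes injects for a service. The values below are hypothetical; in a
# real pod they are populated automatically for every service that existed
# when the pod started.
MY_SERVICE_SERVICE_HOST=172.30.0.15
MY_SERVICE_SERVICE_PORT=8080
endpoint="http://${MY_SERVICE_SERVICE_HOST}:${MY_SERVICE_SERVICE_PORT}"
echo "$endpoint"
```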

6.6. Decision Server xPaaS Image

6.6.1. Overview

Decision Server is available as a containerized xPaaS image that is designed for use with OpenShift as an execution environment for business rules. Developers can quickly build, scale, and test applications deployed across hybrid environments.

Important

There are significant differences in supported configurations and functionality in the Decision Server xPaaS image compared to the regular release of JBoss BRMS.

This topic details the differences between the Decision Server xPaaS image and the full, non-PaaS release of JBoss BRMS, and provides instructions specific to running and configuring the Decision Server xPaaS image. Documentation for other JBoss BRMS functionality not specific to the Decision Server xPaaS image can be found in the JBoss BRMS documentation on the Red Hat Customer Portal.

EAP_HOME in this documentation, as in the JBoss BRMS documentation, is used to refer to the JBoss EAP installation directory where the decision server is deployed. The location of EAP_HOME inside a Decision Server xPaaS image is /opt/eap/, which the JBOSS_HOME environment variable is also set to by default.

6.6.2. Comparing the Decision Server xPaaS Image to the Regular Release of JBoss BRMS

6.6.2.1. Functionality Differences for OpenShift Decision Server xPaaS Images

There are several major functionality differences in the OpenShift Decision Server xPaaS image:

  • The Decision Server image extends the OpenShift EAP image, and any capabilities or limitations it has are also found in the Decision Server image.
  • Only stateless scenarios are supported.
  • Authoring of any content through the BRMS Console or API is not supported.

6.6.2.2. Managing OpenShift Decision Server xPaaS Images

As the Decision Server image is built off the OpenShift JBoss EAP xPaaS image, the JBoss EAP Management CLI is accessible from within the container for troubleshooting purposes.

  1. First open a remote shell session to the running pod:

    $ oc rsh <pod_name>
  2. Then run the following from the remote shell session to launch the JBoss EAP Management CLI:

    $ /opt/eap/bin/jboss-cli.sh
Warning

Any configuration changes made using the JBoss EAP Management CLI on a running container will be lost when the container restarts.

Making configuration changes to the JBoss EAP instance inside the JBoss EAP xPaaS image is different from the process you may be used to for a regular release of JBoss EAP.

6.6.2.3. Security in the OpenShift Decision Server xPaaS Image

Access is limited to users with the kie-server authorization role. A user with this role can be specified via the KIE_SERVER_USER and KIE_SERVER_PASSWORD environment variables.

Note

The HTTP/REST endpoint is configured to only allow the execution of KIE containers and querying of KIE Server resources. Administrative functions like creating or disposing Containers, updating ReleaseIds or Scanners, etc. are restricted. The JMS endpoint currently does not support these restrictions. In the future, more fine-grained security configuration should be available for both endpoints.

6.6.3. Using the Decision Server xPaaS Image Streams and Application Templates

The Red Hat xPaaS middleware images were automatically created during the installation of OpenShift along with the other default image streams and templates.

6.6.4. Running and Configuring the Decision Server xPaaS Image

You can make changes to the Decision Server configuration in the xPaaS image using either the S2I templates, or by using a modified Decision Server image.

6.6.4.1. Using the Decision Server xPaaS Image Source-to-Image (S2I) Process

The recommended method to run and configure the OpenShift Decision Server xPaaS image is to use the OpenShift S2I process together with the application template parameters and environment variables.

The S2I process for the Decision Server xPaaS image works as follows:

  1. If there is a pom.xml file in the source repository, a Maven build is triggered with the contents of the $MAVEN_ARGS environment variable.

    • By default, the package goal is used with the openshift profile, including the system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga).
  2. The results of a successful Maven build are installed into the local Maven repository, /home/jboss/.m2/repository/, along with all dependencies for offline usage. The Decision Server xPaaS image loads the created kjars from this local repository.

    • In addition to kjars resulting from the Maven build, any kjars found in the deployments source directory will also be installed into the local Maven repository. Kjars do not end up in the EAP_HOME/standalone/deployments/ directory.
  3. Any JAR (that is not a kjar), WAR, or EAR file in the deployments source repository directory is copied to the EAP_HOME/standalone/deployments directory and subsequently deployed using the JBoss EAP deployment scanner.
  4. All files in the configuration source repository directory are copied to EAP_HOME/standalone/configuration.

    Note

    If you want to use a custom JBoss EAP configuration file, it should be named standalone-openshift.xml.

  5. All files in the modules source repository directory are copied to EAP_HOME/modules.
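The default build described in step 1 can be sketched as a simple argument fallback. This is an illustration, not code from the image, and the -Popenshift flag is an assumption about how the openshift profile is activated:

```shell
#!/bin/sh
# Sketch (assumption): the effective Maven arguments used by the S2I build
# when MAVEN_ARGS is not set by the user.
DEFAULT_MAVEN_ARGS="package -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga"
MAVEN_ARGS=${MAVEN_ARGS:-$DEFAULT_MAVEN_ARGS}
echo "mvn $MAVEN_ARGS"
```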

6.6.4.2. Using a Modified Decision Server xPaaS Image

An alternative method is to make changes to the image, and then use that modified image in OpenShift. The templates currently provided, along with the interfaces they support, are listed below:

Table 6.1. Provided Templates
Template Name | Supported Interfaces
decisionserver62-basic-s2i.json | http-rest, jms-hornetq
decisionserver62-https-s2i.json | http-rest, https-rest, jms-hornetq
decisionserver62-amq-s2i.json | http-rest, https-rest, jms-activemq

You can run the Decision Server xPaaS image in Docker, make the required configuration changes using the JBoss EAP Management CLI (EAP_HOME/bin/jboss-cli.sh) included in the Decision Server xPaaS image, and then commit the changed container as a new image. You can then use that modified image in OpenShift.

Important

It is recommended that you do not replace the OpenShift placeholders in the JBoss EAP xPaaS configuration file, as they are used to automatically configure services (such as messaging, datastores, HTTPS) during a container’s deployment. These configuration values are intended to be set using environment variables.

Note

Ensure that you follow the guidelines for creating images.

6.6.4.3. Updating Rules

As each image is built from a snapshot of a specific Maven repository, whenever a new rule is added, or an existing rule modified, a new image must be created and deployed for the rule modifications to take effect.

6.6.5. Endpoints

Clients can access the Decision Server xPaaS Image via multiple endpoints; by default the provided templates include support for REST, HornetQ, and ActiveMQ.

6.6.5.1. REST

Clients can use the REST API in various ways:

6.6.5.1.1. Browser
6.6.5.1.2. Java
// HelloRulesClient.java
KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
  "http://host/kie-server/services/rest/server", "kieserverUser", "kieserverPassword");
config.setMarshallingFormat(MarshallingFormat.XSTREAM);
RuleServicesClient client =
  KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class);
ServiceResponse<String> response = client.executeCommands("HelloRulesContainer", myCommands);
6.6.5.1.3. Command Line
# request.sh
#!/bin/sh
curl -X POST \
  -d @request.xml \
  -H "Accept:application/xml" \
  -H "X-KIE-ContentType:XSTREAM" \
  -H "Content-Type:application/xml" \
  -H "Authorization:Basic a2llc2VydmVyOmtpZXNlcnZlcjEh" \
  -H "X-KIE-ClassType:org.drools.core.command.runtime.BatchExecutionCommandImpl" \
http://host/kie-server/services/rest/server/containers/instances/HelloRulesContainer
<!-- request.xml -->
<batch-execution lookup="HelloRulesSession">
  <insert>
    <org.openshift.quickstarts.decisionserver.hellorules.Person>
      <name>errantepiphany</name>
    </org.openshift.quickstarts.decisionserver.hellorules.Person>
  </insert>
  <fire-all-rules/>
  <query out-identifier="greetings" name="get greeting"/>
</batch-execution>

6.6.5.2. JMS

Clients can also use the Java Message Service (JMS), as demonstrated below:

6.6.5.2.1. Java (HornetQ)
// HelloRulesClient.java
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,
  "org.jboss.naming.remote.client.InitialContextFactory");
props.setProperty(Context.PROVIDER_URL, "remote://host:4447");
props.setProperty(Context.SECURITY_PRINCIPAL, "kieserverUser");
props.setProperty(Context.SECURITY_CREDENTIALS, "kieserverPassword");
InitialContext context = new InitialContext(props);
KieServicesConfiguration config =
  KieServicesFactory.newJMSConfiguration(context, "hornetqUser", "hornetqPassword");
config.setMarshallingFormat(MarshallingFormat.XSTREAM);
RuleServicesClient client =
  KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class);
ServiceResponse<String> response = client.executeCommands("HelloRulesContainer", myCommands);
6.6.5.2.2. Java (ActiveMQ)
// HelloRulesClient.java
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,
  "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
props.setProperty(Context.PROVIDER_URL, "tcp://host:61616");
props.setProperty(Context.SECURITY_PRINCIPAL, "kieserverUser");
props.setProperty(Context.SECURITY_CREDENTIALS, "kieserverPassword");
InitialContext context = new InitialContext(props);
ConnectionFactory connectionFactory = (ConnectionFactory)context.lookup("ConnectionFactory");
Queue requestQueue = (Queue)context.lookup("dynamicQueues/queue/KIE.SERVER.REQUEST");
Queue responseQueue = (Queue)context.lookup("dynamicQueues/queue/KIE.SERVER.RESPONSE");
KieServicesConfiguration config = KieServicesFactory.newJMSConfiguration(
  connectionFactory, requestQueue, responseQueue, "activemqUser", "activemqPassword");
config.setMarshallingFormat(MarshallingFormat.XSTREAM);
RuleServicesClient client =
  KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class);
ServiceResponse<String> response = client.executeCommands("HelloRulesContainer", myCommands);

6.6.6. Troubleshooting

In addition to viewing the OpenShift logs, you can troubleshoot a running Decision Server xPaaS image container by viewing its logs. These are output to the container’s standard out and are accessible with the following command:

$ oc logs -f <pod_name> <container_name>
Note

By default, the OpenShift Decision Server xPaaS image does not have a file log handler configured. Logs are only sent to the container’s standard out.

6.7. Red Hat JBoss Data Grid xPaaS Image

6.7.1. Overview

Red Hat JBoss Data Grid is available as a containerized xPaaS image that is designed for use with OpenShift. This image provides an in-memory distributed database so that developers can quickly access large amounts of data in a hybrid environment.

Important

There are significant differences in supported configurations and functionality in the JBoss Data Grid xPaaS image compared to the full, non-PaaS release of JBoss Data Grid.

This topic details the differences between the JBoss Data Grid xPaaS image and the full, non-PaaS release of JBoss Data Grid, and provides instructions specific to running and configuring the JBoss Data Grid xPaaS image. Documentation for other JBoss Data Grid functionality not specific to the JBoss Data Grid xPaaS image can be found in the JBoss Data Grid documentation on the Red Hat Customer Portal.

Note

OpenShift Container Platform release 3.1 supports JBoss Data Grid release 6.6 and earlier. If you are using OpenShift release 3.2 and JBoss Data Grid release 7.1 or later, refer to the documentation in the Red Hat Customer Portal.

6.7.2. Comparing the JBoss Data Grid xPaaS Image to the Regular Release of JBoss Data Grid

6.7.2.1. Functionality Differences for OpenShift JBoss Data Grid xPaaS Images

There are several major functionality differences in the OpenShift JBoss Data Grid xPaaS image:

  • The JBoss Data Grid Management Console is not available to manage OpenShift JBoss Data Grid xPaaS images.
  • The JBoss Data Grid Management CLI is only bound locally. This means that you can only access the Management CLI of a container from within the pod.
  • Library mode is not supported.
  • Only JDBC is supported for a backing cache-store. Support for remote cache stores is present only for data migration purposes.

6.7.2.2. Forming a Cluster using the OpenShift JBoss Data Grid xPaaS Images

Clustering is achieved through one of two discovery mechanisms: Kubernetes or DNS. This is accomplished by configuring the JGroups protocol stack in clustered-openshift.xml with either the <openshift.KUBE_PING/> or <openshift.DNS_PING/> element. By default, KUBE_PING is the pre-configured and supported protocol.

For KUBE_PING to work the following steps must be taken:

  1. The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set (as seen in the Configuration Environment Variables). If this variable is not set, the server behaves as a single-node cluster.
  2. The OPENSHIFT_KUBE_PING_LABELS environment variable must be set (as seen in the Configuration Environment Variables). If this variable is not set, then pods outside the application (but in the same namespace) will attempt to join.
  3. The service account that the pod is running under must be granted authorization to access the Kubernetes REST API. This is done on the command line:

    Example 6.2. Policy commands

    Using the default service account in the current project’s namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

    Using the eap-service-account in the current project’s namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)

Once the above is configured, images automatically join the cluster as they are deployed; however, removing images from an active cluster, and therefore shrinking the cluster, is not supported.
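The effect of the first two steps can be summarized with a small sketch. The function below is an illustration of the launch-time behavior, not code from the image:

```shell
#!/bin/sh
# Sketch (assumption): how the two KUBE_PING environment variables determine
# the discovery behavior described in steps 1 and 2 above.
kube_ping_mode() {
  if [ -z "$OPENSHIFT_KUBE_PING_NAMESPACE" ]; then
    echo "single-node"      # no namespace: server acts as a one-node cluster
  elif [ -z "$OPENSHIFT_KUBE_PING_LABELS" ]; then
    echo "namespace-wide"   # no labels: any pod in the namespace may join
  else
    echo "label-selected"   # only pods matching the labels join
  fi
}
```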

6.7.2.3. Endpoints

Clients can access JBoss Data Grid via REST, HotRod, and memcached endpoints defined as usual in the cache’s configuration.

If a client attempts to access a cache via HotRod and is in the same project, it will be able to receive the full cluster view and make use of consistent hashing; however, if it is in another project, the client will be unable to receive the cluster view. Additionally, if the client is located outside of the project that contains the HotRod cache, there will be additional latency due to the extra network hops required to access the cache.

Important

Only caches with an exposed REST endpoint will be accessible outside of OpenShift.

6.7.2.4. Configuring Caches

A list of caches may be defined by the CACHE_NAMES environment variable. By default the following caches are created:

  • default
  • memcached

Each cache’s behavior may be controlled through cache-specific environment variables, with each variable expecting the cache’s name as its prefix. For instance, any configuration applied to the default cache must begin with the DEFAULT_ prefix: to define the number of owners for each entry in this cache, the DEFAULT_CACHE_OWNERS environment variable would be used.
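The naming convention can be sketched as a simple transformation; the cache_env_var helper below is hypothetical, shown only to illustrate the prefix rule:

```shell
#!/bin/sh
# Sketch: a per-cache setting is addressed by upper-casing the cache name
# and appending the setting suffix. cache_env_var is a hypothetical helper.
cache_env_var() {
  printf '%s_%s\n' "$(echo "$1" | tr '[:lower:]' '[:upper:]')" "$2"
}
cache_env_var default CACHE_OWNERS     # DEFAULT_CACHE_OWNERS
cache_env_var memcached CACHE_TYPE     # MEMCACHED_CACHE_TYPE
```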

A full list of these is found at Cache Environment Variables.

6.7.2.5. Datasources

Datasources are automatically created based on the value of some environment variables.

The most important variable is DB_SERVICE_PREFIX_MAPPING, which defines JNDI mappings for datasources. It must be set to a comma-separated list of <name>-<database_type>=<PREFIX> triplets, where <name> is used as the pool-name in the datasource, <database_type> determines which database driver to use, and <PREFIX> is the prefix used in the names of the environment variables that configure the datasource.

6.7.2.5.1. JNDI Mappings for Datasources

For each <name>-<database_type>=<PREFIX> triplet in the DB_SERVICE_PREFIX_MAPPING environment variable, a separate datasource is created by the launch script, which is executed when running the image.

The <database_type> will determine the driver for the datasource. Currently, only postgresql and mysql are supported.

The <name> parameter can be chosen on your own. Do not use any special characters.

Note

The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING should be lowercase.

6.7.2.5.2. Database Drivers

The JBoss Data Grid xPaaS image contains Java drivers for MySQL, PostgreSQL, and MongoDB databases. Datasources are generated only for the MySQL and PostgreSQL databases.

Note

For MongoDB databases there are no JNDI mappings created because this is not a SQL database.

6.7.2.5.3. Examples

The following examples demonstrate how datasources may be defined using the DB_SERVICE_PREFIX_MAPPING environment variable.

6.7.2.5.3.1. Single Mapping

Consider the value test-postgresql=TEST.

This will create a datasource named java:jboss/datasources/test_postgresql. Additionally, all of the required settings, such as username and password, will be expected to be provided as environment variables with the TEST_ prefix, such as TEST_USERNAME and TEST_PASSWORD.

6.7.2.5.3.2. Multiple Mappings

Multiple database mappings may also be specified; for instance, considering the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL.

Note

Multiple datasource mappings should be separated with commas, as seen in the above example.

This will create two datasources:

  1. java:jboss/datasources/test_mysql
  2. java:jboss/datasources/cloud_postgresql

MySQL datasource configuration, such as the username and password, will be expected with the TEST_MYSQL_ prefix, for example TEST_MYSQL_USERNAME. Similarly, the PostgreSQL datasource will expect environment variables defined with the CLOUD_ prefix, such as CLOUD_USERNAME.
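The mapping rules above can be sketched as the following loop. This is an illustration of the naming scheme only, not the actual launch script:

```shell
#!/bin/sh
# Sketch (assumption): expanding each <name>-<database_type>=<PREFIX> triplet
# in DB_SERVICE_PREFIX_MAPPING into a JNDI name and a credential variable prefix.
DB_SERVICE_PREFIX_MAPPING="cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL"
for triplet in $(echo "$DB_SERVICE_PREFIX_MAPPING" | tr ',' ' '); do
  service=${triplet%%=*}     # e.g. cloud-postgresql
  prefix=${triplet#*=}       # e.g. CLOUD
  db_type=${service##*-}     # driver selection: postgresql or mysql
  jndi="java:jboss/datasources/$(echo "$service" | tr '-' '_')"
  echo "$jndi ($db_type) reads ${prefix}_USERNAME and ${prefix}_PASSWORD"
done
```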

6.7.2.5.4. Environment Variables

A full list of datasource environment variables may be found at Datasource Environment Variables.

6.7.2.6. Security Domains

To configure a new Security Domain the SECDOMAIN_NAME environment variable must be defined, which will result in the creation of a security domain named after the passed in value. This domain may be configured through the use of the Security Environment Variables.

6.7.2.7. Managing OpenShift JBoss Data Grid xPaaS Images

A major difference in managing an OpenShift JBoss Data Grid xPaaS image is that there is no Management Console exposed for the JBoss Data Grid installation inside the image. Because images are intended to be immutable, with modifications being written to a non-persistent file system, the Management Console is not exposed.

However, the JBoss Data Grid Management CLI (JDG_HOME/bin/jboss-cli.sh) is still accessible from within the container for troubleshooting purposes.

  1. First open a remote shell session to the running pod:

    $ oc rsh <pod_name>
  2. Then run the following from the remote shell session to launch the JBoss Data Grid Management CLI:

    $ /opt/datagrid/bin/jboss-cli.sh
Warning

Any configuration changes made using the JBoss Data Grid Management CLI on a running container will be lost when the container restarts.

Making configuration changes to the JBoss Data Grid instance inside the JBoss Data Grid xPaaS image is different from the process you may be used to for a regular release of JBoss Data Grid.

6.7.3. Using the JBoss Data Grid xPaaS Image Streams and Application Templates

The Red Hat xPaaS middleware images were automatically created during the installation of OpenShift along with the other default image streams and templates.

6.7.4. Running and Configuring the JBoss Data Grid xPaaS Image

You can make changes to the JBoss Data Grid configuration in the xPaaS image using either the S2I templates, or by using a modified JBoss Data Grid xPaaS image.

6.7.4.1. Using the JBoss Data Grid xPaaS Image Source-to-Image (S2I) Process

The recommended method to run and configure the OpenShift JBoss Data Grid xPaaS image is to use the OpenShift S2I process together with the application template parameters and environment variables.

The S2I process for the JBoss Data Grid xPaaS image works as follows:

  1. If there is a pom.xml file in the source repository, a Maven build is triggered with the contents of the $MAVEN_ARGS environment variable.
  2. By default the package goal is used with the openshift profile, including the system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga).
  3. The results of a successful Maven build are copied to JDG_HOME/standalone/deployments. This includes all JAR, WAR, and EAR files from the directory within the source repository specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target directory.

    • Any JAR, WAR, and EAR in the deployments source repository directory are copied to the JDG_HOME/standalone/deployments directory.
    • All files in the configuration source repository directory are copied to JDG_HOME/standalone/configuration.

      Note

      If you want to use a custom JBoss Data Grid configuration file, it should be named clustered-openshift.xml.

  4. All files in the modules source repository directory are copied to JDG_HOME/modules.
6.7.4.1.1. Using a Different JDK Version in the JBoss Data Grid xPaaS Image

The JBoss Data Grid xPaaS image may come with multiple versions of OpenJDK installed, but only one is the default. For example, the JBoss Data Grid 6.5 xPaaS image comes with OpenJDK 1.7 and 1.8 installed, but OpenJDK 1.8 is the default.

If you want the JBoss Data Grid xPaaS image to use a different JDK version than the default, you must:

  • Ensure that your pom.xml specifies to build your code using the intended JDK version.
  • In the S2I application template, configure the image’s JAVA_HOME environment variable to point to the intended JDK version. For example:

    name: "JAVA_HOME"
    value: "/usr/lib/jvm/java-1.7.0"

6.7.4.2. Using a Modified JBoss Data Grid xPaaS Image

An alternative method is to make changes to the image, and then use that modified image in OpenShift.

The JBoss Data Grid configuration file that OpenShift uses inside the JBoss Data Grid xPaaS image is JDG_HOME/standalone/configuration/clustered-openshift.xml, and the JBoss Data Grid startup script is JDG_HOME/bin/openshift-launch.sh.

You can run the JBoss Data Grid xPaaS image in Docker, make the required configuration changes using the JBoss Data Grid Management CLI (JDG_HOME/bin/jboss-cli.sh), and then commit the changed container as a new image. You can then use that modified image in OpenShift.

Important

It is recommended that you do not replace the OpenShift placeholders in the JBoss Data Grid xPaaS configuration file, as they are used to automatically configure services (such as messaging, datastores, HTTPS) during a container’s deployment. These configuration values are intended to be set using environment variables.

Note

Ensure that you follow the guidelines for creating images.

6.7.5. Environment Variables

6.7.5.1. Information Environment Variables

The following information environment variables are designed to convey information about the image and should not be modified by the user:

Table 6.2. Information Environment Variables
Variable Name | Description | Value
JBOSS_DATAGRID_VERSION | The full, non-PaaS release that the xPaaS image is based on | 6.5.1.GA
JBOSS_HOME | The directory where the JBoss distribution is located | /opt/datagrid
JBOSS_IMAGE_NAME | Image name, same as the Name label | jboss-datagrid-6/datagrid65-openshift
JBOSS_IMAGE_RELEASE | Image release, same as the Release label | Example: dev
JBOSS_IMAGE_VERSION | Image version, same as the Version label | Example: 1.2
JBOSS_MODULES_SYSTEM_PKGS |  | org.jboss.logmanager
JBOSS_PRODUCT |  | datagrid
LAUNCH_JBOSS_IN_BACKGROUND | Allows the data grid server to be gracefully shut down even when there is no terminal attached | true

6.7.5.2. Configuration Environment Variables

Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired.

Table 6.3. Configuration Environment Variables
Variable Name | Description | Value

CACHE_CONTAINER_START

Should this cache container be started on server startup, or lazily when requested by a service or deployment. Defaults to LAZY

Example: EAGER

CACHE_CONTAINER_STATISTICS

Determines if the cache container collects statistics. Disable for optimal performance. Defaults to true.

Example: false

CACHE_NAMES

List of caches to configure. Defaults to default,memcached, and each defined cache will be configured as a distributed-cache with a mode of SYNC.

Example: addressbook,addressbook_indexed

CONTAINER_SECURITY_CUSTOM_ROLE_MAPPER_CLASS

Class of the custom principal to role mapper.

Example: com.acme.CustomRoleMapper

CONTAINER_SECURITY_IDENTITY_ROLE_MAPPER

Set a role mapper for this cache container. Valid values are: identity-role-mapper,common-name-role-mapper,cluster-role-mapper,custom-role-mapper.

Example: identity-role-mapper

CONTAINER_SECURITY_ROLES

Define role names and assign permissions to them.

Example: admin=ALL,reader=READ,writer=WRITE

DB_SERVICE_PREFIX_MAPPING

Define a comma-separated list of datasources to configure.

Example: test-mysql=TEST_MYSQL

DEFAULT_CACHE

Indicates the default cache for this cache container.

Example: addressbook

ENCRYPTION_REQUIRE_SSL_CLIENT_AUTH

Whether to require client certificate authentication. Defaults to false.

Example: true

HOTROD_AUTHENTICATION

If defined the hotrod-connectors will be configured with authentication in the ApplicationRealm.

Example: true

HOTROD_ENCRYPTION

If defined the hotrod-connectors will be configured with encryption in the ApplicationRealm.

Example: true

HOTROD_SERVICE_NAME

Name of the OpenShift service used to expose HotRod externally.

Example: DATAGRID_APP_HOTROD

INFINISPAN_CONNECTORS

Comma separated list of connectors to configure. Defaults to hotrod,memcached,rest. Note that if authorization or authentication is enabled on the cache then memcached should be removed as this protocol is inherently insecure.

Example: hotrod

JAVA_OPTS_APPEND

The contents of JAVA_OPTS_APPEND is appended to JAVA_OPTS on startup.

Example: -Dfoo=bar

JGROUPS_CLUSTER_PASSWORD

A password to control access to JGroups. Needs to be set consistently cluster-wide. The image default is to use the OPENSHIFT_KUBE_PING_LABELS variable value; however, the JBoss application templates generate and supply a random value.

Example: miR0JaDR

MEMCACHED_CACHE

The name of the cache to use for the Memcached connector.

Example: memcached

OPENSHIFT_KUBE_PING_LABELS

Clustering labels selector.

Example: application=eap-app

OPENSHIFT_KUBE_PING_NAMESPACE

Clustering project namespace.

Example: myproject

PASSWORD

Password for the JDG user.

Example: p@ssw0rd

REST_SECURITY_DOMAIN

The security domain to use for authentication and authorization purposes. Defaults to none (no authentication).

Example: other

TRANSPORT_LOCK_TIMEOUT

Infinispan uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can be doing state transfer or rehashing at the same time. This constraint is in place because more than one cache could be involved in a transaction. This timeout controls the time to wait to acquire a distributed lock. Defaults to 240000.

Example: 120000

USERNAME

Username for the JDG user.

Example: openshift

6.7.5.3. Cache Environment Variables

The following environment variables all control behavior of individual caches; when defining these values for a particular cache substitute the cache’s name for CACHE_NAME.

Table 6.4. Cache Environment Variables
Variable Name | Description | Example Value

<CACHE_NAME>_CACHE_TYPE

Determines whether this cache should be distributed or replicated. Defaults to distributed.

replicated

<CACHE_NAME>_CACHE_START

Determines if this cache should be started on server startup, or lazily when requested by a service or deployment. Defaults to LAZY.

EAGER

<CACHE_NAME>_CACHE_BATCHING

Enables invocation batching for this cache. Defaults to false.

true

<CACHE_NAME>_CACHE_STATISTICS

Determines whether or not the cache collects statistics. Disable for optimal performance. Defaults to true.

false

<CACHE_NAME>_CACHE_MODE

Sets the clustered cache mode, ASYNC for asynchronous operations, or SYNC for synchronous operations.

ASYNC

<CACHE_NAME>_CACHE_QUEUE_SIZE

In ASYNC mode this attribute can be used to trigger flushing of the queue when it reaches a specific threshold. Defaults to 0, which disables flushing.

100

<CACHE_NAME>_CACHE_QUEUE_FLUSH_INTERVAL

In ASYNC mode this attribute controls how often the asynchronous thread runs to flush the replication queue. This should be a positive integer that represents thread wakeup time in milliseconds. Defaults to 10.

20

<CACHE_NAME>_CACHE_REMOTE_TIMEOUT

In SYNC mode the timeout, in milliseconds, used to wait for an acknowledgement when making a remote call, after which the call is aborted and an exception is thrown. Defaults to 17500.

25000

<CACHE_NAME>_CACHE_OWNERS

Number of cluster-wide replicas for each cache entry. Defaults to 2.

5

<CACHE_NAME>_CACHE_SEGMENTS

Number of hash space segments per cluster. The recommended value is 10 * cluster size. Defaults to 80.

30

<CACHE_NAME>_CACHE_L1_LIFESPAN

Maximum lifespan, in milliseconds, of an entry placed in the L1 cache. Defaults to 0, indicating that L1 is disabled.

100

<CACHE_NAME>_CACHE_EVICTION_STRATEGY

Sets the cache eviction strategy. Available options are UNORDERED, FIFO, LRU, LIRS, and NONE (to disable eviction). Defaults to NONE.

FIFO

<CACHE_NAME>_CACHE_EVICTION_MAX_ENTRIES

Maximum number of entries in a cache instance. If the selected value is not a power of two, the actual value will default to the least power of two larger than the selected value. A value of -1 indicates no limit. Defaults to 10000.

-1

<CACHE_NAME>_CACHE_EXPIRATION_LIFESPAN

Maximum lifespan, in milliseconds, of a cache entry, after which the entry is expired cluster-wide. Defaults to -1, indicating that the entries never expire.

10000

<CACHE_NAME>_CACHE_EXPIRATION_MAX_IDLE

Maximum idle time, in milliseconds, a cache entry will be maintained in the cache. If the idle time is exceeded, then the entry will be expired cluster-wide. Defaults to -1, indicating that the entries never expire.

10000

<CACHE_NAME>_CACHE_EXPIRATION_INTERVAL

Interval, in milliseconds, between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic eviction process altogether, then set the interval to -1. Defaults to 5000.

-1

<CACHE_NAME>_CACHE_COMPATIBILITY_ENABLED

Enables compatibility mode for this cache. Disabled by default.

true

<CACHE_NAME>_CACHE_COMPATIBILITY_MARSHALLER

A marshaller to use for compatibility conversions.

com.acme.CustomMarshaller

<CACHE_NAME>_JDBC_STORE_TYPE

Type of JDBC store to configure. This value may either be string or binary.

string

<CACHE_NAME>_JDBC_STORE_DATASOURCE

Defines the JNDI name of the datasource.

java:jboss/datasources/ExampleDS

<CACHE_NAME>_KEYED_TABLE_PREFIX

Defines the prefix prepended to the cache name used when composing the name of the cache entry table. Defaults to ispn_entry.

JDG

<CACHE_NAME>_CACHE_INDEX

The indexing mode of the cache. Valid values are NONE, LOCAL, and ALL. Defaults to NONE.

ALL

<CACHE_NAME>_CACHE_INDEXING_PROPERTIES

Comma separated list of properties to pass on to the indexing system.

default.directory_provider=ram

<CACHE_NAME>_CACHE_SECURITY_AUTHORIZATION_ENABLED

Enables authorization checks for this cache. Defaults to false.

true

<CACHE_NAME>_CACHE_SECURITY_AUTHORIZATION_ROLES

Sets the valid roles required to access this cache.

admin,reader,writer

<CACHE_NAME>_CACHE_PARTITION_HANDLING_ENABLED

If enabled, then the cache will enter degraded mode when it loses too many nodes. Defaults to true.

false
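As an illustrative sketch (the deployment configuration name datagrid-app and the cache name MYCACHE are assumptions, not defaults), these variables can be set on a running deployment with oc env:

```shell
$ oc env dc/datagrid-app \
    MYCACHE_CACHE_TYPE=replicated \
    MYCACHE_CACHE_START=EAGER \
    MYCACHE_CACHE_STATISTICS=false
```

Changing environment variables on the deployment configuration triggers a redeployment of the affected pods.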

6.7.5.4. Datasource Environment Variables

Datasource properties may be configured with the following environment variables:

Table 6.5. Datasource Environment Variables
Variable Name | Description | Example Value

<NAME>_<DATABASE_TYPE>_SERVICE_HOST

Defines the database server’s hostname or IP to be used in the datasource’s connection_url property.

192.168.1.3

<NAME>_<DATABASE_TYPE>_SERVICE_PORT

Defines the database server’s port for the datasource.

5432

<PREFIX>_JNDI

Defines the JNDI name for the datasource. Defaults to java:jboss/datasources/<name>_<database_type>, where <name> and <database_type> are taken from the triplet definition. This setting is useful if you want to override the default generated JNDI name.

java:jboss/datasources/test-postgresql

<PREFIX>_USERNAME

Defines the username for the datasource.

admin

<PREFIX>_PASSWORD

Defines the password for the datasource.

password

<PREFIX>_DATABASE

Defines the database name for the datasource.

myDatabase

<PREFIX>_TX_ISOLATION

Defines the java.sql.Connection transaction isolation level for the database.

TRANSACTION_READ_UNCOMMITTED

<PREFIX>_TX_MIN_POOL_SIZE

Defines the minimum pool size option for the datasource.

1

<PREFIX>_TX_MAX_POOL_SIZE

Defines the maximum pool size option for the datasource.

20
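For instance, assuming a datasource triplet named test-postgresql, the prefix would be TEST_POSTGRESQL and the connection details could be supplied as follows (the deployment configuration name and all values here are illustrative):

```shell
$ oc env dc/datagrid-app \
    TEST_POSTGRESQL_SERVICE_HOST=192.168.1.3 \
    TEST_POSTGRESQL_SERVICE_PORT=5432 \
    TEST_POSTGRESQL_USERNAME=admin \
    TEST_POSTGRESQL_PASSWORD=password \
    TEST_POSTGRESQL_DATABASE=myDatabase
```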

6.7.5.5. Security Environment Variables

The following environment variables may be defined to customize the environment’s security domain:

Table 6.6. Security Environment Variables
Variable Name | Description | Example Value

SECDOMAIN_NAME

Set this variable to define an additional security domain.

myDomain

SECDOMAIN_PASSWORD_STACKING

If defined, the password-stacking module option is enabled and set to the value useFirstPass.

true

SECDOMAIN_LOGIN_MODULE

The login module to be used. Defaults to UsersRoles.

UsersRoles

SECDOMAIN_USERS_PROPERTIES

The name of the properties file containing user definitions. Defaults to users.properties.

users.properties

SECDOMAIN_ROLES_PROPERTIES

The name of the properties file containing role definitions. Defaults to roles.properties.

roles.properties
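A minimal sketch of enabling a custom security domain on a deployment (the deployment configuration name datagrid-app is an assumption):

```shell
$ oc env dc/datagrid-app \
    SECDOMAIN_NAME=myDomain \
    SECDOMAIN_PASSWORD_STACKING=true \
    SECDOMAIN_USERS_PROPERTIES=users.properties \
    SECDOMAIN_ROLES_PROPERTIES=roles.properties
```

Because SECDOMAIN_LOGIN_MODULE defaults to UsersRoles, it is omitted here.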

6.7.6. Exposed Ports

The following ports are exposed by default in the JBoss Data Grid xPaaS Image:

Value | Description

8443

Secure Web

8778

-

11211

memcached

11222

internal hotrod

11333

external hotrod

Important

The external hotrod connector is only available if the HOTROD_SERVICE_NAME environment variable has been defined.
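For example, to enable the external hotrod connector (both the deployment configuration name and the service name below are hypothetical):

```shell
$ oc env dc/datagrid-app HOTROD_SERVICE_NAME=datagrid-app-hotrod
```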

6.7.7. Troubleshooting

In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss Data Grid xPaaS Image container by viewing its logs. These are output to the container’s standard out and are accessible with the following command:

$ oc logs -f <pod_name> <container_name>
Note

By default, the OpenShift JBoss Data Grid xPaaS Image does not have a file log handler configured. Logs are only sent to the container’s standard out.

6.8. Red Hat Single Sign-On (SSO) xPaaS Image

Important

This image is currently in Technical Preview and not intended for production use.

6.8.1. Overview

Red Hat Single Sign-On (SSO) is an integrated sign-on solution available as a containerized xPaaS image designed for use with OpenShift. This image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services.

Red Hat offers five SSO application templates:

  • sso70-basic: SSO backed by an H2 database on the same pod
  • sso70-mysql: SSO backed by a MySQL database on a separate pod
  • sso70-mysql-persistent: SSO backed by a persistent MySQL database on a separate pod
  • sso70-postgresql: SSO backed by a PostgreSQL database on a separate pod
  • sso70-postgresql-persistent: SSO backed by a persistent PostgreSQL database on a separate pod

An SSO-enabled Red Hat JBoss Enterprise Application Platform (JBoss EAP) Image is also offered, which enables users to deploy a JBoss EAP instance that can be used with SSO for authentication:

  • eap64-sso-s2i: SSO-enabled JBoss EAP

6.8.2. Differences Between the SSO xPaaS Application and Keycloak

The SSO xPaaS application is based on Keycloak, a JBoss community project. There are some differences in functionality between the Red Hat Single Sign-On xPaaS Application and Keycloak:

  • This image is currently available as a Technical Preview for use only with SSO-enabled Red Hat JBoss Enterprise Application Platform (JBoss EAP) applications.
  • The SSO xPaaS Technical Preview Application includes all of the functionality of Keycloak 1.8.1. In addition, the SSO-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for .war deployments that contain <auth-method>KEYCLOAK</auth-method> or <auth-method>KEYCLOAK-SAML</auth-method> in their respective web.xml files.
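For reference, a minimal web.xml login-config fragment that would trigger this automatic OpenID Connect registration might look like the following (the realm name is a placeholder):

```xml
<login-config>
    <!-- KEYCLOAK triggers OpenID Connect client registration;
         use KEYCLOAK-SAML for SAML instead -->
    <auth-method>KEYCLOAK</auth-method>
    <realm-name>demo</realm-name>
</login-config>
```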

6.8.3. Versioning for xPaaS Images

See the xPaaS part of the OpenShift and Atomic Platform Tested Integrations page for details about OpenShift image version compatibility.

6.8.4. Prerequisites for Deploying the SSO xPaaS Image

The following is a list of prerequisites for using the SSO xPaaS image:

  1. Acquire Red Hat Subscriptions: Ensure that you have the relevant OpenShift subscriptions as well as a subscription for xPaaS Middleware.
  2. Install OpenShift: Before using the OpenShift xPaaS images, you must have an OpenShift environment installed and configured:

    1. The Quick Installation method allows you to install OpenShift using an interactive CLI utility.
    2. The Advanced Installation method allows you to install OpenShift using a reference configuration. This method is best suited for production environments.
  3. Ensure the DNS has been configured. This is required for communication between JBoss EAP and SSO, and for the requisite redirection. See DNS for more information.
  4. Install and Deploy Docker Registry: Install the Docker Registry and then ensure that the Docker Registry is deployed to locally manage images:

    $ oadm registry --config=/etc/origin/master/admin.kubeconfig \
        --credentials=/etc/origin/master/openshift-registry.kubeconfig

    For more information, see Deploying a Docker Registry.

  5. Deploy a Router. For more information, see Deploying a Router.
  6. Ensure that you can run the oc create command with cluster-admin privileges.

6.8.5. Using the SSO Image Streams and Application Templates

The Red Hat xPaaS middleware images were automatically created during the installation of OpenShift along with the other default image streams and templates.

6.8.6. Preparing and Deploying the SSO xPaaS Application Templates

6.8.6.1. Using the OpenShift CLI

  1. Prepare the JBoss EAP and SSO application service accounts and their associated secrets.

    $ oc create -n <project-name> -f <application-templates_file_path>/secrets/eap-app-secret.json
    $ oc create -n <project-name> -f <application-templates_file_path>/secrets/sso-app-secret.json
  2. Deploy one of the SSO application templates. This example deploys the sso70-postgresql template, which deploys an SSO pod backed by a PostgreSQL database on a separate pod.

    $ oc process -f <application-templates_file_path>/sso/sso70-postgresql.json | oc create -n <project-name> -f -

    Or, if the template has been imported into the common namespace:

    $ oc new-app --template=sso70-postgresql -n <project-name>
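With either method, the progress of the deployment can be followed by watching the pods in the project:

```shell
$ oc get pods -n <project-name> -w
```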

6.8.6.2. Using the OpenShift Web Console

Log in to the OpenShift web console:

  1. Click Add to project to list all of the default image streams and templates.
  2. Use the Filter by keyword search bar to limit the list to those that match sso. You may need to click See all to show the desired application template.
  3. Click an application template to list all of the deployment parameters. These parameters can be configured manually, or can be left as default.
  4. Click Create to deploy the application template.

6.8.6.3. Deployment Process

Once deployed, two pods will be created: one for the SSO web servers and one for the database. After the SSO web server pod has started, the web servers can be accessed at their custom configured hostnames or at the default hostnames.

The default login username/password credentials are admin/admin.
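The hostnames assigned to the web servers can be inspected through the routes created by the template, for example:

```shell
$ oc get routes -n <project-name>
```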

6.8.7. Quickstart Example: Using the SSO xPaaS Image with the SSO-enabled JBoss EAP xPaaS Image

This example uses the OpenShift web console to deploy SSO xPaaS backed by a PostgreSQL database. Once deployed, an SSO realm, role, and user will be created to be used when configuring the SSO-enabled JBoss EAP xPaaS Image deployment. Once successfully deployed, the SSO user can then be used to authenticate and access JBoss EAP.

6.8.7.1. Deploy the SSO Application Template

  1. Log in to the OpenShift web console and select the <project-name> project space.
  2. Click Add to project to list all of the default image streams and templates.
  3. Use the Filter by keyword search bar to limit the list to those that match sso. You may need to click See all to show the desired application template.
  4. Click the sso70-postgresql application template to list all of the deployment parameters. These parameters will be left as default for this example.
  5. Click Create to deploy the application template and start pod deployment. This may take a couple of minutes.

6.8.7.2. Create SSO Credentials

Log in to the encrypted SSO web server at https://secure-sso-<project-name>.<hostname>/auth using the default admin/admin user name and password.

  • Create a Realm

    1. Create a new realm by hovering your cursor over the realm namespace (default is Master) at the top of the sidebar and clicking the Add Realm button.
    2. Enter a realm name and click Create.
  • Copy the Public Key: In the newly created realm, click the Keys tab and copy the public key that has been generated. This will be needed to deploy the SSO-enabled JBoss EAP image.
  • Create a Role: Create a role in SSO with a name that corresponds to the JEE role defined in the web.xml of the example application. This role will be assigned to an SSO application user to authenticate access to user applications.

    1. Click Roles in the Configure sidebar to list the roles for this realm. As this is a new realm, there should only be the default offline_access role. Click Add Role.
    2. Enter the role name and optional description and click Save.
  • Create Users and Assign Roles: Create two users. The realm management user will be assigned the realm-management roles to handle automatic SSO client registration in the SSO server. The application user will be assigned the JEE role, created in the previous step, to authenticate access to user applications.

Create the realm management user:

  1. Click Users in the Manage sidebar to view the user information for the realm. Click Add User.
  2. Enter a valid Username and any additional optional information for the realm management user and click Save.
  3. Edit the user configuration. Click the Credentials tab in the user space and enter a password for the user. After the password has been confirmed, you can click the Reset Password button to set the user password. A pop-up window will prompt for additional confirmation.
  4. Click Role Mappings to list the realm and client role configuration. In the Client Roles drop-down menu, select realm-management and add all of the available roles to the user. This provides the user with SSO server rights that can be used by the JBoss EAP image to create clients.

Create the application user:

  1. Click Users in the Manage sidebar to view the user information for the realm. Click Add User.
  2. Enter a valid Username and any additional optional information for the application user and click Save.
  3. Edit the user configuration. Click the Credentials tab in the user space and enter a password for the user. After the password has been confirmed, you can click the Reset Password button to set the user password. A pop-up window will prompt for additional confirmation.
  4. Click Role Mappings to list the realm and client role configuration. In Available Roles, add the JEE role created earlier.

6.8.7.3. Deploy the SSO-enabled JBoss EAP Image

  1. Return to the OpenShift web console and click Add to project to list all of the default image streams and templates.
  2. Use the Filter by keyword search bar to limit the list to those that match sso. You may need to click See all to show the desired application template.
  3. Click the eap64-sso-s2i image to list all of the deployment parameters. Edit the configuration of the following SSO parameters:

    • SSO_URI: The SSO web server authentication address: https://secure-sso-<project-name>.<hostname>/auth
    • SSO_REALM: The SSO realm created for this procedure.
    • SSO_USERNAME: The name of the realm management user.
    • SSO_PASSWORD: The password of the user.
    • SSO_PUBLIC_KEY: The public key generated by the realm. It is located in the Keys tab of the Realm Settings in the SSO console.
    • SSO_BEARER_ONLY: If set to true, the OpenID Connect client will be registered as bearer-only.
    • SSO_ENABLE_CORS: If set to true, the Keycloak adapter enables Cross-Origin Resource Sharing (CORS).
  4. Click Create to deploy the JBoss EAP image.

It may take several minutes for the JBoss EAP image to deploy. When it does, it can be accessed at:

  • http://<application-name>-<project-name>.<hostname>/<app-context>: for the web server, and
  • https://secure-<application-name>-<project-name>.<hostname>/<app-context>: for the encrypted web server, where <app-context> is one of app-jee, app-profile-jee, app-profile-jee-saml, or service depending on the example application.
6.8.7.3.1. Alternate Deployments

You can also create the client registration in the Clients frame of the Configure sidebar. Once a client has been registered, click the Installation tab and download the configuration .xml:

  • For OpenID Connect application sources, save the Keycloak OIDC JBoss Subsystem XML to <file_path>/configuration/secure-deployments.
  • For SAML application sources, save the Keycloak SAML Wildfly/JBoss Subsystem to <file_path>/configuration/secure-saml-deployments.

You can also edit the standalone-openshift.xml of the JBoss EAP image, which will deploy the manual configuration instead of the default. For more information, see Using a Modified JBoss EAP xPaaS Image.

6.8.7.4. Log in to the JBoss EAP Server Using SSO

  1. Access the JBoss EAP application server and click Login. You will be redirected to the SSO login.
  2. Log in using the SSO user created in the example. You will be authenticated against the SSO server and returned to the JBoss EAP application server.

6.8.8. Known Issues

  • There is a known issue with the EAP6 Adapter HttpServletRequest.logout() in which the adapter does not log out from the application, which can create a login loop. The workaround is to call HttpSession.invalidate(); after request.logout() to clear the Keycloak token from the session. For more information, see KEYCLOAK-2665.
  • The SSO logs throw a duplication error if the SSO pod is restarted while backed by a database pod. This error can be safely ignored.
  • Setting adminUrl to a "https://…​" address in an OpenID Connect client will cause javax.net.ssl.SSLHandshakeException exceptions on the SSO server if the default secrets (sso-app-secret and eap-app-secret) are used. The application server must use either CA-signed certificates or configure the SSO trust store to trust the self-signed certificates.
  • If the client route uses a different domain suffix from the SSO service, the client registration script will incorrectly configure the client on the SSO side, causing bad redirection.
  • The SSO-enabled JBoss EAP image does not properly set the adminUrl property during automatic client registration. As a workaround, log in to the SSO console after the application has started and manually modify the client registration adminUrl property to http://<application-name>-<project-name>.<hostname>/<app-context>.