Red Hat JBoss Web Server for OpenShift


Red Hat JBoss Web Server 5.6

Installing and using Red Hat JBoss Web Server for OpenShift

Red Hat Customer Content Services

Abstract

Guide to using Red Hat JBoss Web Server for OpenShift

Providing feedback on Red Hat documentation

We appreciate your feedback on our technical content and encourage you to tell us what you think. If you’d like to add comments, provide insights, correct a typo, or even ask a question, you can do so directly in the documentation.

Note

You must have a Red Hat account and be logged in to the customer portal.

To submit documentation feedback from the customer portal, do the following:

  1. Select the Multi-page HTML format.
  2. Click the Feedback button at the top-right of the document.
  3. Highlight the section of text where you want to provide feedback.
  4. Click Add Feedback in the dialog that appears next to your highlighted text.
  5. Enter your feedback in the text box on the right of the page and then click Submit.

We automatically create a tracking issue each time you submit feedback. Open the link that is displayed after you click Submit and start watching the issue or add more comments.

Thank you for the valuable feedback.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Red Hat JBoss Web Server for OpenShift

The Apache Tomcat 9 component of Red Hat JBoss Web Server (JWS) 5.6 is available as a containerized image that is designed for Red Hat OpenShift. You can use this image to build, scale, and test Java web applications for deployment across hybrid cloud environments.

JWS for OpenShift images are different from a regular release of Red Hat JBoss Web Server.

Consider the following differences between the JWS for OpenShift images and a standard JBoss Web Server deployment:

  • In a JWS for OpenShift image, the /opt/jws-5.6/ directory is the location of JWS_HOME.
  • In a JWS for OpenShift deployment, all load balancing is handled by the OpenShift router rather than the JBoss Core Services mod_cluster connector or mod_jk connector.

OpenShift images are tested with different operating system versions, configurations, and interface points that represent the most common combinations of technologies that Red Hat OpenShift Container Platform customers use.

Important

When you want to deploy new applications, you must use the 5.6 version of JWS for OpenShift images and application templates.

The 5.5 version of JWS for OpenShift images and application templates are deprecated and no longer receive updates.

1.3. Supported architectures for JBoss Web Server

JBoss Web Server supports the following architectures:

  • x86_64 (AMD64)
  • IBM Z (s390x) in the OpenShift environment
  • IBM Power (ppc64le) in the OpenShift environment

You can use the JBoss Web Server image for OpenJDK 11 with all supported architectures. For more information about images, see the Red Hat Container Catalog.

1.4. Health checks for Red Hat container images

All OpenShift Container Platform images have a health rating associated with them. To find the health rating for Red Hat JBoss Web Server, navigate to the Certified container images page, search for JBoss Web Server, and select the 5.6 version.

You can also perform health checks on an OpenShift container to test the container for liveness and readiness.
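For example, you can add liveness and readiness probes to a deployed application by using the oc set probe command. The following commands are a minimal sketch: the jws-app deployment configuration name and the <health_path> endpoint are placeholder values for illustration.

$ oc set probe dc/jws-app --readiness --get-url=http://:8080/<health_path>
$ oc set probe dc/jws-app --liveness --get-url=http://:8080/<health_path>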

You can import the latest Red Hat JBoss Web Server for OpenShift image streams and templates from the Red Hat Container Registry. You can subsequently use the JWS for OpenShift Source-to-Image (S2I) process to create JBoss Web Server for OpenShift applications from existing Maven binaries or from source code.

Before you follow the instructions in this document, you must ensure that an OpenShift cluster is already installed and configured as a prerequisite. For more information about installing and configuring OpenShift clusters, see the OpenShift Container Platform Installing guide.

Note

The JWS for OpenShift application templates are distributed for Tomcat 9.

Before you can import and use a Red Hat JBoss Web Server for OpenShift image, you must first ensure that you have configured an authentication token to access the Red Hat Container Registry.

You can create an authentication token by using a registry service account. This means that you do not have to use or store your Red Hat account username and password in your OpenShift configuration.

Procedure

  1. Follow the instructions on the Red Hat Customer Portal to create an authentication token using a registry service account.
  2. On the Token Information page for your token, click the OpenShift Secret tab and download the YAML file that contains the OpenShift secret for the token.
  3. Use the YAML file that you have downloaded to create the authentication token secret for your OpenShift project.

    For example:

    oc create -f 1234567_myserviceaccount-secret.yaml
  4. To configure the secret for your OpenShift project, enter the following commands:

    oc secrets link default 1234567-myserviceaccount-pull-secret --for=pull
    oc secrets link builder 1234567-myserviceaccount-pull-secret --for=pull
    Note

    In the preceding examples, replace 1234567-myserviceaccount with the name of the secret that you created in the previous step.

You can import Red Hat JBoss Web Server for OpenShift image streams and templates from the Red Hat Container Registry. You must import the latest JBoss Web Server image streams and templates for your JDK into the namespace of your OpenShift project.

Procedure

  1. Log in to the Red Hat Container Registry by using your Customer Portal credentials. For more information, see Red Hat Container Registry Authentication.
  2. Depending on the JDK version that you are using, perform either of the following steps:

    • If you are using OpenJDK 8, enter the following command:

      for resource in \
      jws56-openjdk8-tomcat9-ubi8-basic-s2i.json \
      jws56-openjdk8-tomcat9-ubi8-https-s2i.json \
      jws56-openjdk8-tomcat9-ubi8-image-stream.json
      do
      oc replace -n openshift --force -f \
      https://raw.githubusercontent.com/jboss-container-images/jboss-webserver-5-openshift-image/jws56el8-v5.6.0/templates/${resource}
      done

      The preceding command imports the UBI8 JDK 8 image stream, jboss-webserver56-openjdk8-tomcat9-openshift-ubi8, and all templates specified in the command.

    • If you are using OpenJDK 11, enter the following command:

      for resource in \
      jws56-openjdk11-tomcat9-ubi8-basic-s2i.json \
      jws56-openjdk11-tomcat9-ubi8-https-s2i.json \
      jws56-openjdk11-tomcat9-ubi8-image-stream.json
      do
      oc replace -n openshift --force -f \
      https://raw.githubusercontent.com/jboss-container-images/jboss-webserver-5-openshift-image/jws56el8-v5.6.0/templates/${resource}
      done

      The preceding command imports the UBI8 JDK 11 image stream, jboss-webserver56-openjdk11-tomcat9-openshift-ubi8, and all templates specified in the command.

2.3. Importing the latest JWS for OpenShift image

You can import the latest available JWS for OpenShift image by using the import-image command. Red Hat provides separate JWS for OpenShift images for OpenJDK 8 and OpenJDK 11.

Procedure

  • Depending on the JDK version that you are using, perform either of the following steps:

    • To update the core JBoss Web Server 5.6 Tomcat 9 with OpenJDK 8 OpenShift image, enter the following command:

      $ oc -n openshift import-image \
        jboss-webserver56-openjdk8-tomcat9-openshift-ubi8:5.6.2
    • To update the core JBoss Web Server 5.6 Tomcat 9 with OpenJDK 11 OpenShift image, enter the following command:

      $ oc -n openshift import-image \
        jboss-webserver56-openjdk11-tomcat9-openshift-ubi8:5.6.2
Note

The 5.6.2 tag at the end of each image you import refers to the stream version that is set in the image stream.
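To check which version tags an image stream currently provides, you can describe the image stream in the openshift namespace. For example, for the OpenJDK 8 image stream:

$ oc describe is jboss-webserver56-openjdk8-tomcat9-openshift-ubi8 -n openshift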

2.4. JWS for OpenShift S2I process

You can run and configure the JWS for OpenShift images by using the OpenShift source-to-image (S2I) process with the application template parameters and environment variables.

The S2I process for the JWS for OpenShift images works as follows:

  • If the configuration source directory contains a Maven settings.xml file, the settings.xml file is moved to the $HOME/.m2/ directory of the new image.
  • If the source repository contains a pom.xml file, a Maven build is triggered using the contents of the $MAVEN_ARGS environment variable.

    By default, the package goal is used with the openshift profile, which includes the -DskipTests argument to skip tests, and the -Dcom.redhat.xpaas.repo.redhatga argument to enable the Red Hat GA repository.

  • The results of a successful Maven build are copied to the /opt/jws-5.6/tomcat/webapps directory. This includes all WAR files from the source directory that is specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target/ directory.

    You can use the $MAVEN_ARGS_APPEND environment variable to modify the Maven arguments.

  • All WAR files from the deployments source directory are copied to the /opt/jws-5.6/tomcat/webapps directory.
  • All files in the configuration source directory are copied to the /opt/jws-5.6/tomcat/conf/ directory, excluding the Maven settings.xml file.
  • All files in the lib source directory are copied to the /opt/jws-5.6/tomcat/lib/ directory.

    Note

    If you want to use custom Tomcat configuration files, use the same file names that are used for a normal Tomcat installation, such as context.xml and server.xml.

For more information about configuring the S2I process to use a custom Maven artifacts repository mirror, see Artifact repository mirrors.
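The following layout summarizes, as a sketch, the source directory structure that the S2I process consumes; the file names are illustrative, and the target paths are the ones described above:

<source_directory>/
├── deployments/        # WAR files, copied to /opt/jws-5.6/tomcat/webapps/
├── configuration/      # Tomcat configuration files, copied to /opt/jws-5.6/tomcat/conf/
│   └── settings.xml    # Exception: moved to $HOME/.m2/ for the Maven build
└── lib/                # JAR files, copied to /opt/jws-5.6/tomcat/lib/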

You can create a JWS for OpenShift application by using existing Maven binaries. You can use the oc start-build command to deploy existing applications on OpenShift.

Note

This procedure shows how to create an example application that is based on the tomcat-websocket-chat quickstart example.

Prerequisites

  • You have an existing .war, .ear, or .jar file for the application that you want to deploy on JWS for OpenShift or you have built the application locally.

    For example, to build the tomcat-websocket-chat application locally, perform the following steps:

    1. To clone the source code, enter the following command:

      $ git clone https://github.com/jboss-openshift/openshift-quickstarts.git
    2. Configure the Red Hat JBoss Middleware Maven repository, as described in Configure the Red Hat JBoss Middleware Maven Repository.

      For more information about the Maven repository, see the Red Hat JBoss Enterprise Maven Repository web page.

    3. To build the application, enter the following commands:

      $ cd openshift-quickstarts/tomcat-websocket-chat/
      $ mvn clean package

      The preceding command produces the following output:

      [INFO] Scanning for projects...
      [INFO]
      [INFO] ------------------------------------------------------------------------
      [INFO] Building Tomcat websocket example 1.2.0.Final
      [INFO] ------------------------------------------------------------------------
      ...
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 01:28 min
      [INFO] Finished at: 2018-01-16T15:59:16+10:00
      [INFO] Final Memory: 19M/271M
      [INFO] ------------------------------------------------------------------------

Procedure

  1. On your local file system, create a source directory for the binary build and a deployments subdirectory.

    For example, to create an ocp source directory that contains a deployments subdirectory for the tomcat-websocket-chat application, enter the following commands:

    $ cd openshift-quickstarts/tomcat-websocket-chat/
    $ mkdir -p ocp/deployments
    Note

    The source directory can contain any content required by your application that is not included in the Maven binary. For more information, see JWS for OpenShift S2I process.

  2. Copy the .war, .ear, or .jar binary files to the deployments subdirectory.

    For example, to copy the .war file for the example tomcat-websocket-chat application, enter the following command:

    $ cp target/websocket-chat.war ocp/deployments/
    Note

    In the preceding example, target/websocket-chat.war is the path to the binary file you want to copy.

    Application archives in the deployments subdirectory of the source directory are copied to the $JWS_HOME/tomcat/webapps/ directory of the image that is being built on OpenShift. To allow the application to be deployed successfully, you must ensure that the directory hierarchy that contains the web application data is structured correctly. For more information, see JWS for OpenShift S2I process.

  3. Log in to the OpenShift instance:

    $ oc login <url>
  4. Create a new project if required.

    For example:

    $ oc new-project jws-bin-demo
    Note

    In the preceding example, jws-bin-demo is the name of the project you want to create.

  5. Identify the JWS for OpenShift image stream to use for your application:

    $ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '

    The preceding command produces the following type of output:

    jboss-webserver56-openjdk8-tomcat9-openshift-ubi8
    Note

    The -n openshift option specifies the project to use. The oc get is -n openshift command gets the image stream resources from the openshift project.

  6. Create the new build configuration, and ensure that you specify the image stream and application name.

    For example, to create the new build configuration for the example tomcat-websocket-chat application:

    $ oc new-build --binary=true \
     --image-stream=jboss-webserver56-openjdk8-tomcat9-openshift-ubi8:latest \
     --name=jws-wsch-app
    Note

    In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application.

    The preceding command produces the following type of output:

    --> Found image 8c3b85b (4 weeks old) in image stream "openshift/jboss-webserver56-tomcat9-openshift" under tag "latest" for "jboss-webserver56"
    
        JBoss Web Server 5.6
        --------------------
        Platform for building and running web applications on JBoss Web Server 5.6 - Tomcat v9
    
        Tags: builder, java, tomcat9
    
        * A source build using binary input will be created
          * The resulting image will be pushed to image stream "jws-wsch-app:latest"
          * A binary build was created, use 'start-build --from-dir' to trigger a new build
    
    --> Creating resources with label build=jws-wsch-app ...
        imagestream "jws-wsch-app" created
        buildconfig "jws-wsch-app" created
    --> Success
  7. Start the binary build.

    For example:

    $ oc start-build jws-wsch-app --from-dir=./ocp --follow
    Note

    In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application, and ocp is the name of the source directory.

    The preceding command instructs OpenShift to use the source directory that you have created for binary input of the OpenShift image build.

    The preceding command produces the following type of output:

    Uploading directory "ocp" as binary input for the build ...
    build "jws-wsch-app-1" started
    Receiving source from STDIN as archive ...
    
    Copying all deployments war artifacts from /home/jboss/source/deployments directory into `/opt/jws-5.6/tomcat/webapps` for later deployment...
    '/home/jboss/source/deployments/websocket-chat.war' -> '/opt/jws-5.6/tomcat/webapps/websocket-chat.war'
    
    Pushing image 172.30.202.111:5000/jws-bin-demo/jws-wsch-app:latest ...
    Pushed 0/7 layers, 7% complete
    Pushed 1/7 layers, 14% complete
    Pushed 2/7 layers, 29% complete
    Pushed 3/7 layers, 49% complete
    Pushed 4/7 layers, 62% complete
    Pushed 5/7 layers, 92% complete
    Pushed 6/7 layers, 100% complete
    Pushed 7/7 layers, 100% complete
    Push successful
  8. Create a new OpenShift application based on the image:

    For example:

    $ oc new-app jws-wsch-app
    Note

    In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application.

    The preceding command produces the following type of output:

    --> Found image e5f3a6b (About a minute old) in image stream "jws-bin-demo/jws-wsch-app" under tag "latest" for "jws-wsch-app"
    
        JBoss Web Server 5.6
        --------------------
        Platform for building and running web applications on JBoss Web Server 5.6 - Tomcat v9
    
        Tags: builder, java, tomcat9
    
        * This image will be deployed in deployment config "jws-wsch-app"
        * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "jws-wsch-app"
          * Other containers can access this service through the hostname "jws-wsch-app"
    
    --> Creating resources ...
        deploymentconfig "jws-wsch-app" created
        service "jws-wsch-app" created
    --> Success
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose svc/jws-wsch-app'
        Run 'oc status' to view your app.
  9. Expose the service to make the application accessible to users:

    For example, to make the example jws-wsch-app application accessible, perform the following steps:

    1. Check the name of the service to expose:

      $ oc get svc -o name

      The preceding command produces the following type of output:

      service/jws-wsch-app
    2. Expose the service:

      $ oc expose svc/jws-wsch-app

      The preceding command produces the following type of output:

      route "jws-wsch-app" exposed
  10. Retrieve the address of the exposed route:

    oc get routes --no-headers -o custom-columns='host:spec.host' jws-wsch-app
  11. Open a web browser and enter the URL to access the application.

    For example, to access the example jws-wsch-app application, enter the following URL:

    http://<address_of_exposed_route>/websocket-chat

    Note

    In the preceding example, replace <address_of_exposed_route> with the appropriate value for your deployment.

You can create a JWS for OpenShift application from source code.

For detailed information about creating new OpenShift applications from source code, see OpenShift.com - Creating an application from source code.

Prerequisites

Procedure

  1. Log in to the OpenShift instance:

    $ oc login <url>
  2. Create a new project if required:

    $ oc new-project <project-name>
    Note

    In the preceding example, replace <project-name> with the name of the project you want to create.

  3. Identify the JWS for OpenShift image stream to use for your application:

    $ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '

    The preceding command produces the following type of output:

    jboss-webserver56-openjdk8-tomcat9-openshift-ubi8
    Note

    The -n openshift option specifies the project to use. The oc get is -n openshift command gets the image stream resources from the openshift project.

  4. Create the new OpenShift application from source code by using Red Hat JBoss Web Server for OpenShift images:

    $ oc new-app \
     <source_code_location> \
     --image-stream=jboss-webserver56-openjdk8-tomcat9-openshift-ubi8 \
     --name=<openshift_application_name>

    For example:

    $ oc new-app \
     https://github.com/jboss-openshift/openshift-quickstarts.git#main \
     --image-stream=jboss-webserver56-openjdk8-tomcat9-openshift-ubi8 \
     --context-dir='tomcat-websocket-chat' \
     --name=jws-wsch-app

    The preceding command adds the source code to the image, compiles the source code, and creates the build configuration and services.
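    Optionally, you can follow the progress of the S2I build that this command triggers by streaming the build logs. A minimal sketch, using the jws-wsch-app name from the preceding example:

    $ oc logs -f bc/jws-wsch-app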

  5. To expose the application, perform the following steps:

    1. To check the name of the service to expose:

      $ oc get svc -o name

      The preceding command produces the following type of output:

      service/<openshift_application_name>
    2. To expose the service:

      $ oc expose svc/<openshift_application_name>

      The preceding command produces the following type of output:

      route "<openshift_application_name>" exposed
  6. To retrieve the address of the exposed route:

    oc get routes --no-headers -o custom-columns='host:spec.host' <openshift_application_name>
  7. Open a web browser and enter the following URL to access the application:

    http://<address_of_exposed_route>/<java_application_name>

    Note

    In the preceding example, replace <address_of_exposed_route> and <java_application_name> with appropriate values for your deployment.

You can use Docker to add additional Java Archive (JAR) files to the tomcat/lib directory.

Procedure

  1. Start the image in Docker:

    docker run --network host -i -t -p 8080:8080 ImageURL
  2. Find the CONTAINER ID:

     docker ps | grep <ImageName>
  3. Copy the library to the tomcat/lib/ directory:

    docker cp <yourLibrary> <CONTAINER ID>:/opt/jws-5.6/tomcat/lib/
  4. Commit the changes to a new image:

    docker commit <CONTAINER ID> <NEW IMAGE NAME>
  5. Create a new image tag:

    docker tag <NEW IMAGE NAME>:latest <NEW IMAGE REGISTRY URL>:<TAG>
  6. Push the image to a registry:

    docker push <NEW IMAGE REGISTRY URL>
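As an alternative to committing a running container, you can build a new image with a short Dockerfile. The following commands are a sketch; the base image URL, library name, and registry URL are placeholders, as in the preceding steps:

cat > Dockerfile <<'EOF'
# Start from the existing JWS image and add the library to tomcat/lib
FROM <ImageURL>
COPY <yourLibrary> /opt/jws-5.6/tomcat/lib/
EOF
docker build -t <NEW IMAGE REGISTRY URL>:<TAG> .
docker push <NEW IMAGE REGISTRY URL>:<TAG>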

Chapter 3. JWS Operator for OpenShift

The Operator Framework is a toolkit to manage Kubernetes-native applications, which are called Operators, in an effective, automated, and scalable way. Operators make it easy to manage complex stateful applications that are running on top of Kubernetes. All Operators are based around three key components, which are the Operator SDK, the Operator Lifecycle Manager, and OperatorHub.io. These tools allow you to develop your own Operators, manage any Operators that you are using on your Kubernetes cluster, and discover or share any Operators that the community creates.

3.1. JBoss Web Server operator

Red Hat JBoss Web Server (JWS) provides an Operator that you can use to manage JWS for OpenShift images. You can build, test, and package the JWS Operator for OpenShift.

The JWS Operator uses different environment variables than a standard JWS for OpenShift setup. For more information about the environment variables that the JWS Operator uses, see Parameters to use in CRD.

Important

In this release, the Use Session Clustering functionality is available as a Technology Preview feature only. Session clustering is set to Off by default. The current Operator version uses the DNS Membership Provider, which is limited because of DNS limitations: InetAddress.getAllByName() results are cached, which means that session replication might not work while scaling up.

You can follow the instructions in this document to install the JWS Operator, deploy an existing JWS image, and delete Operators from a cluster. For a faster but less detailed guide to deploying a prepared image or building an image from an existing image stream, see the QuickStart guide.

Important

Red Hat supports images for JWS 5.4 or later versions. Support is not available for images earlier than JWS 5.4.

3.2. Operator groups

An Operator group is an Operator Lifecycle Manager (OLM) resource that provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate role-based access control (RBAC) for all Operators that are deployed in the same namespace as the OperatorGroup object.

When you subscribe the Operator to a namespace, you must ensure that the namespace has an OperatorGroup object that uses the same InstallModeType setting as the Operator. The InstallModeType settings are AllNamespaces and SingleNamespace.

Consider the following guidelines:

  • If the Operator you want to install uses AllNamespaces mode, the openshift-operators namespace already provides an appropriate Operator group.
  • If the Operator you want to install uses SingleNamespace mode, you must create only one Operator group in that namespace.

3.3. What is new in the JWS Operator 2.0 release?

The JWS Operator 2.0 release provides level-2 Operator capabilities such as seamless integration. JWS Operator 2.0 also supports Red Hat JBoss Web Server metering labels and includes some enhanced Custom Resource Definition (CRD) parameters.

Level-2 Operator capabilities

JWS Operator 2.0 provides the following level-2 Operator capability features:

  • Enables seamless upgrades
  • Supports patch and minor version upgrades
  • Manages web servers deployed by the JWS Operator 1.1.x.
Enabling level-2 seamless integration for new images

The DeploymentConfig object definition includes a trigger that OpenShift uses to deploy new pods when a new image is pushed to the image stream. The image stream can monitor the repository for new images or you can instruct the image stream that a new image is available for use.

Procedure

  1. In your project namespace, create an image stream by using the oc import-image command to import the tag and other information for an image.

    For example:

    oc import-image <my-image>-imagestream:latest \
    --from=quay.io/$user/<my-image>:latest \
    --confirm

    In the preceding example, replace each occurrence of <my-image> with the name of the image that you want to import.

    The preceding command creates an image stream named <my-image>-imagestream by importing information for the quay.io/$user/<my-image> image. For more information about the format and management of image streams, see Managing image streams.

  2. Create a custom resource of the WebServer kind for the web application that you want the JWS Operator to deploy whenever the image stream is updated. You can define the custom resource in YAML file format.

    For example:

    apiVersion: web.servers.org/v1alpha1
    kind: WebServer
    metadata:
      name: <my-image>
    spec:
      # Add fields here
      applicationName: my-app
      useSessionClustering: true
      replicas: 2
      webImageStream:
        imageStreamNamespace: <project-name>
        imageStreamName: <my-image>-imagestream
  3. Trigger an update to the image stream by using the oc tag command.

    For example:

    oc tag quay.io/$user/<my-image> <my-image>-imagestream:latest --scheduled

    The preceding command causes OpenShift Container Platform to update the specified image stream tag periodically. This period is a cluster-wide setting that is set to 15 minutes by default.

Level-2 seamless integration for rebuilding existing images

The BuildConfig object definition includes a trigger for image stream updates and a webhook, which is either a GitHub or Generic webhook, that enables the rebuilding of images when the webhook is triggered by Git or GitHub.

For more information about creating a secret for a webhook and configuring a generic or GitHub webhook in a custom resource WebServer file, see Parameters to use in CRD.

Support for Red Hat JBoss Web Server metering labels

JWS Operator 2.0 supports the ability to add metering labels to the Red Hat JBoss Web Server pods that the JWS Operator creates.

Red Hat JBoss Web Server can use the following metering labels:

  • com.company: Red_Hat
  • rht.prod_name: Red_Hat_Runtimes
  • rht.prod_ver: 2022-Q2
  • rht.comp: JBoss_Web_Server
  • rht.comp_ver: 5.6.2
  • rht.subcomp: Tomcat 9
  • rht.subcomp_t: application

You can add labels under the metadata section in the custom resource WebServer file for a web application that you want to deploy. For example:

---
apiVersion: web.servers.org/v1alpha1
kind: WebServer
metadata:
  name: <my-image>
  labels:
    com.company: Red_Hat
    rht.prod_name: Red_Hat_Runtimes
    rht.prod_ver: 2022-Q2
    rht.comp: JBoss_Web_Server
    rht.comp_ver: 5.6.2
    rht.subcomp: Tomcat 9
    rht.subcomp_t: application
spec:
Note

If you change any label key or label value for a deployed web server, the JWS Operator redeploys the web server application. If the deployed web server was built from source code, the JWS Operator also rebuilds the web server application.

Enhanced webImage parameter

In the JWS Operator 2.0 release, the webImage parameter in the CRD contains the following additional fields:

  • imagePullSecret

    The secret that the JWS Operator uses to pull images from the repository

    Note

    The secret must contain the key .dockerconfigjson. The JWS Operator mounts and uses the secret (for example, --authfile /mount_point/.dockerconfigjson) to pull the images from the repository. The Secret object definition file might contain server username and password values or tokens to allow access to images in the image stream, the builder image, and images built by the JWS Operator. For an example of creating such a secret, see the sketch after this list.

  • webApp

    A set of parameters that describe how the JWS Operator builds the web server application
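For example, one way to create a secret that contains the required .dockerconfigjson key is the oc create secret docker-registry command. This is a sketch; the secret name, registry server, and credentials are placeholders:

# Creates a kubernetes.io/dockerconfigjson secret with the registry credentials
oc create secret docker-registry <pull_secret_name> \
  --docker-server=quay.io \
  --docker-username=<username> \
  --docker-password=<password>

You can then reference <pull_secret_name> in the imagePullSecret field.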

Enhanced webApp parameter

In the JWS Operator 2.0 release, the webApp parameter in the CRD contains the following additional fields:

  • name

    The name of the web server application

  • sourceRepositoryURL

    The URL where the application source files are located

  • sourceRepositoryRef

    The branch of the source repository that the Operator uses

  • sourceRepositoryContextDir

    The subdirectory where the pom.xml file is located and where the mvn install command must be run

  • webAppWarImage

    The URL of the images where the JWS Operator pushes the built image

  • webAppWarImagePushSecret

    The secret that the JWS Operator uses to push images to the repository

  • builder

    A set of parameters that contain all the information required to build the web application and create and push the image to the image repository

    Note

    To ensure that the builder can operate successfully and run commands with different user IDs, the builder must have access to the anyuid security context constraint (SCC).

    To grant the builder access to the anyuid SCC, enter the following command:

    oc adm policy add-scc-to-user anyuid -z builder

    The builder parameter contains the following fields:

    • image

      The image of the container where the web application is built (for example, quay.io/$user/tomcat10-buildah)

    • imagePullSecret

      The secret (if specified) that the JWS Operator uses to pull the builder image from the repository

    • applicationBuildScript

      The script that the builder image uses to build the application .war file and move it to the /mnt directory

      Note

      If you do not specify a value for this parameter, the builder image uses a default script that uses Maven and Buildah.

3.4. JWS Operator installation

You can install the JBoss Web Server (JWS) Operator for OpenShift by using either the OpenShift web console or the oc command-line tool.

You can install the JWS Operator by using the OpenShift web console.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster using an account with cluster admin and Operator installation permissions.

Procedure

  1. Open the web console and navigate to the Operators tab.

    The OpenShift OperatorHub opens.

  2. Search for JWS and select the JWS Operator.

    A new menu displays.

  3. Select the Capability Level that you want to use.
  4. To install the Operator, click Install at the top of the console.
  5. To set up the Operator installation, perform the following steps:

    1. Specify the installation mode by specifying the namespace on your cluster where you want to install the Operator.

      Note

      If you do not specify a namespace, the Operator is installed to all namespaces on your cluster by default.

    2. Specify the update channel where the JWS Operator is available.

      Note

      The JWS Operator is currently available only through one channel.

    3. Specify the approval strategy by selecting Automatic or Manual updates.

      Note

      If you select Automatic updates, when a new version of the Operator is available, the Operator Lifecycle Manager (OLM) upgrades the running instance of your Operator automatically.

      If you select Manual updates, when a newer version of the Operator is available, the OLM creates an update request. As a cluster administrator, you must then manually approve the update request to ensure that the Operator is updated to the new version.

  6. Click Install.

    Note

    If you have selected a Manual approval strategy, you must approve the install plan before the installation is complete. The JWS Operator now appears in the Installed Operators section of the Operators tab.

You can install the JWS Operator by using the oc command-line tool. The steps to install the JWS Operator from the command line include inspecting the Operator package and its catalog source, creating an Operator group, and creating a Subscription object.

Note

When you install the JWS Operator by using the web console and the Operator uses SingleNamespace mode, the OperatorGroup and Subscription objects are created automatically.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster using an account with Operator installation permissions.
  • You have installed the oc tool on your local system.

Procedure

  1. To inspect the JWS Operator, perform the following steps:

    1. To verify that the JWS Operator is available in the catalog, enter the following command:

      $ oc get packagemanifests -n openshift-marketplace | grep jws

      The preceding command produces the following type of output:

      jws-operator    Red Hat Operators   16h
    2. To verify the catalog source for the JWS Operator, enter the following command:

      $ oc describe packagemanifests jws-operator -n openshift-marketplace | grep "Catalog Source"

      The preceding command produces the following type of output:

      Catalog Source:     redhat-operators
  2. To create an Operator group, perform the following steps:

    1. To check the actual list of Operator groups, enter the following command:

      $ oc get operatorgroups -n <project_name>
      Note

      In the preceding example, replace <project_name> with your OpenShift project name.

      The preceding command produces the following type of output:

      NAME       AGE
      mygroup    17h
    2. Create a YAML file for the OperatorGroup object.

      For example:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <project_name>
      spec:
        targetNamespaces:
        - <project_name>
      Note

      In the preceding example, replace <project_name> with the namespace of the project where you want to install the Operator (oc project -q), and replace <operatorgroup_name> with the name of the OperatorGroup object.

    3. Create the OperatorGroup object from the YAML file:

      $ oc apply -f <filename>.yaml
      Note

      In the preceding example, replace <filename>.yaml with the name of the YAML file that you have created for the OperatorGroup object.

  3. To create a Subscription object, perform the following steps:

    1. Create a YAML file for the Subscription object.

      For example:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
          name: jws-operator
          namespace: <project_name>
      spec:
          channel: alpha
          name: jws-operator
          source: redhat-operators
          sourceNamespace: openshift-marketplace
      Note

      In the preceding example, replace <project_name> with the namespace of the project where you want to install the Operator (oc project -q). If the Operator is using AllNamespaces mode, replace <project_name> with openshift-operators.

      Ensure that the source setting matches the Catalog Source value from the output of the command that you used to inspect the JWS Operator (for example, redhat-operators).

    2. Create the Subscription object from the YAML file:

      $ oc apply -f <filename>.yaml
      Note

      In the preceding example, replace <filename>.yaml with the name of the YAML file that you have created for the Subscription object.

Verification

  • To verify that the JWS Operator is installed successfully, enter the following command:

    $ oc get csv -n <project_name>
    Note

    In the preceding example, replace <project_name> with the namespace of the project where you have installed the Operator.

    The preceding command produces the following type of output:

    NAME                      DISPLAY                     VERSION    REPLACES   PHASE
    jws-operator.v<version>   JBoss Web Server Operator   <version>              Succeeded

    Note

    In the preceding example, <version> refers to the Operator version (for example, 1.1.0).

3.5. Deploying an existing JWS image

You can deploy an existing JWS image by using the OpenShift web console.

Prerequisites

  • You have installed the JWS Operator by using the web console or from the command line.

    To ensure that the JWS Operator is installed, enter the following command:

    $ oc get deployment.apps/jws-operator

    The preceding command produces the following type of output:

    NAME            READY   UP-TO-DATE   AVAILABLE   AGE
    jws-operator    1/1     1            1           15h
    Note

    If you want to view more detailed output, you can use the following command:

    oc describe deployment.apps/jws-operator

Procedure

  1. Prepare your image and push it to the registry location from which you want to deploy it (for example, quay.io/<USERNAME>/tomcat-demo:latest).
  2. To create a YAML file for a Custom Resource web server, perform the following steps:

    1. Create a file named, for example, webservers_cr.yaml.
    2. Enter details in the following format:

      apiVersion: web.servers.org/v1alpha1
      kind: WebServer
      metadata:
        name: example-image-webserver
      spec:
        # Add fields here
        applicationName: jws-app
        replicas: 2
        webImage:
          applicationImage: quay.io/<USERNAME>/tomcat-demo:latest
  3. To deploy your web application, perform the following steps:

    1. Go to the directory that contains the webservers_cr.yaml file that you created.
    2. Enter the following command:

      $ oc apply -f webservers_cr.yaml

      The preceding command produces the following output:

      webserver/example-image-webserver created
      Note

      The Operator creates a route automatically.

  4. Verify the route that the Operator creates:

    $ oc get routes
  5. Optional: Delete the webserver that you created in the preceding step:

    $ oc delete webserver example-image-webserver
    Note

    Alternatively, you can delete the webserver by deleting the YAML file. For example:

    oc delete -f webservers_cr.yaml

3.6. JWS Operator deletion

You can delete the JWS Operator from a cluster by using either the OpenShift web console or the oc command-line tool.

You can delete the JWS Operator from a cluster by using the OpenShift web console.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster using an account with cluster admin permissions.

    Note

    If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators.

Procedure

  1. Open the web console and click Operators > Installed Operators.
  2. Select the Actions menu and click Uninstall Operator.

    Note

    The Uninstall Operator option automatically removes the Operator, any Operator deployments, and Pods.

    Deleting the Operator does not remove any of its custom resource definitions (CRDs) or custom resources (CRs). If the Operator has deployed applications on the cluster, or if the Operator has configured off-cluster resources, you must clean up these applications and resources manually.
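    For example, a minimal cleanup sketch: the webservers.web.servers.org CRD name is inferred from the apiVersion: web.servers.org/v1alpha1 and kind: WebServer values that are used in this guide, so confirm the actual name on your cluster before you delete it:

    $ oc get crd | grep webservers
    $ oc delete webserver --all -n <project_name>
    $ oc delete crd webservers.web.servers.org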

You can delete the JWS Operator from a cluster by using the oc command-line tool.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster using an account with cluster admin permissions.

    Note

    If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators.

  • You have installed the oc tool on your local system.

Procedure

  1. Check the current version of the subscribed Operator:

    $ oc get subscription jws-operator -n <project_name> -o yaml | grep currentCSV
    Note

    In the preceding command, replace <project_name> with the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, replace <project_name> with openshift-operators.

    The preceding command produces the following output, where v<version> refers to the Operator version (for example, v1.1.0):

    f:currentCSV: {}
    currentCSV: jws-operator.v<version>
  2. Delete the subscription for the Operator:

    $ oc delete subscription jws-operator -n <project_name>
    Note

    In the preceding command, replace <project_name> with the namespace of the project where you installed the Operator. If your operator was installed to all namespaces, replace <project_name> with openshift-operators.

  3. Delete the CSV for the Operator in the target namespace by using the currentCSV value that you obtained from the previous steps:

    $ oc delete clusterserviceversion <currentCSV> -n <project_name>
    Note

    In the preceding command, replace <project_name> with the namespace of the project where you installed the Operator, and replace <currentCSV> with the currentCSV value that you obtained in the preceding steps (for example, jws-operator.v<version>).

    The preceding command produces the following type of output:

    clusterserviceversion.operators.coreos.com "jws-operator.v<version>" deleted
    Note

    In the preceding command, <project_name> refers to the namespace of the project where you installed the operator, and v<version> refers to the operator version (for example, v1.1.0). If your operator was installed to all namespaces, use openshift-operators in place of <project_name>.

You can add metering labels to your Red Hat JBoss Web Server pods and check Red Hat subscription details with the OpenShift Metering Operator.

Note
  • Do not add metering labels to any pods that an operator or a template deploys and manages.
  • You can apply labels to pods using the Metering Operator on OpenShift Container Platform version 4.8 and earlier. From version 4.9 onward, the Metering Operator is no longer available without a direct replacement.

Red Hat JBoss Web Server can use the following metering labels:

  • com.company: Red_Hat
  • rht.prod_name: Red_Hat_Runtimes
  • rht.prod_ver: 2022-Q2
  • rht.comp: JBoss_Web_Server
  • rht.comp_ver: 5.6.2
  • rht.subcomp: Tomcat 9
  • rht.subcomp_t: application

Appendix A. S2I scripts and Maven

The Red Hat JBoss Web Server for OpenShift image includes S2I scripts and Maven.

A Maven repository holds build artifacts and dependencies, such as the project Java archive (JAR) files, library JAR files, plugins, or any other project-specific artifacts. A Maven repository also defines locations where you can download artifacts while performing the source-to-image (S2I) build. In addition to using the Maven Central Repository, some organizations also deploy a local custom repository (mirror).

A local mirror provides the following benefits:

  • Availability of a synchronized mirror that is geographically closer and faster
  • Greater control over the repository content
  • Possibility to share artifacts across different teams (developers and continuous integration (CI)) without relying on public servers and repositories
  • Improved build times

A Maven repository manager can serve as local cache to a mirror. If the repository manager is already deployed and can be reached externally at a specified URL location, the S2I build can use this repository. You can use an internal Maven repository by adding the MAVEN_MIRROR_URL environment variable to the build configuration of the application.

You can add the MAVEN_MIRROR_URL environment variable to a new build configuration of your application by specifying the --build-env option with the oc new-app command or the oc new-build command.

Procedure

  1. Enter the following command:

    $ oc new-app \
     https://github.com/jboss-openshift/openshift-quickstarts.git#main \
     --image-stream=jboss-webserver56-openjdk8-tomcat9-openshift-ubi8:latest \
     --context-dir='tomcat-websocket-chat' \
     --build-env MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ \
     --name=jws-wsch-app
    Note

    The preceding command assumes that the repository manager is already deployed and can be reached at http://10.0.0.1:8080/repository/internal/.

You can add the MAVEN_MIRROR_URL environment variable to an existing build configuration of your application by specifying the name of the build configuration with the oc env command.

Procedure

  1. Identify the build configuration that requires the MAVEN_MIRROR_URL variable:

    $ oc get bc -o name

    The preceding command produces the following type of output:

    buildconfig/jws
    Note

    In the preceding example, jws is the name of the build configuration.

  2. Add the MAVEN_MIRROR_URL environment variable to buildconfig/jws:

    $ oc env bc/jws MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
    
    buildconfig "jws" updated
  3. Verify that the build configuration is updated:

    $ oc env bc/jws --list
    
    # buildconfigs jws
    MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
  4. Schedule a new build of the application by using the oc start-build command.
Note

During the application build process, Maven dependencies are downloaded from the repository manager rather than from the default public repositories. When the build process is completed, the mirror contains all the dependencies that are retrieved and used during the build process.

The Red Hat JBoss Web Server for OpenShift image includes scripts to run Catalina and to use Maven to create and deploy the .war package.

  • run: Runs Catalina (Tomcat).
  • assemble: Uses Maven to build the web application source, create the .war file, and move the .war file to the $JWS_HOME/tomcat/webapps directory.
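You can also override or extend these scripts by placing custom scripts in the .s2i/bin/ directory of your application source repository, which is standard S2I behavior. The following custom assemble script is a sketch; the /usr/local/s2i/assemble path to the default script is an assumption, so check the image's io.openshift.s2i.scripts-url label for the actual location:

#!/bin/bash
# .s2i/bin/assemble: run custom steps around the default assemble script
set -e

echo "---> Custom pre-build step"

# Delegate to the image's default assemble script
# (path is an assumption; verify it against the image's
# io.openshift.s2i.scripts-url label)
/usr/local/s2i/assemble

echo "---> Custom post-build step"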

A.3. JWS for OpenShift datasources

JWS for OpenShift provides three types of data sources:

Default internal data sources
PostgreSQL, MySQL, and MongoDB data sources are available on OpenShift by default through the Red Hat Registry. These data sources do not require additional environment files to be configured for image streams. To enable a database to be discovered and used as a data source, you can set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service.
Other internal data sources
These data sources are run on OpenShift but they are not available by default through the Red Hat Registry. Environment files that are added to OpenShift Secrets provide configuration of other internal data sources.
External data sources
These data sources are not run on OpenShift. Environment files that are added to OpenShift Secrets provide configuration of external data sources.

ENV_FILES property

You can add the environment variables for data sources to the OpenShift Secret for the project. You can use the ENV_FILES property to call these environment files within the template.

DB_SERVICE_PREFIX_MAPPING environment variable

Data sources are automatically created based on the value of certain environment variables. The DB_SERVICE_PREFIX_MAPPING environment variable defines JNDI mappings for the data sources.

The allowed value for the DB_SERVICE_PREFIX_MAPPING variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets. Each triplet consists of the following values:

  • POOLNAME is used as the pool-name in the data source.
  • DATABASETYPE is the database driver to use.
  • PREFIX is the prefix in the names of environment variables that are used to configure the data source.

For each POOLNAME-DATABASETYPE=PREFIX triplet that is defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate data source, which is executed when running the image.
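For example, the following environment file is a sketch of a single triplet that maps a PostgreSQL service to a data source. The DB_SERVICE_PREFIX_MAPPING value follows the triplet format described above; the DB-prefixed variable names follow the common xPaaS convention and are illustrative, so confirm them against your image documentation:

# One POOLNAME-DATABASETYPE=PREFIX triplet:
# pool name "jws", database type "postgresql", prefix "DB"
DB_SERVICE_PREFIX_MAPPING=jws-postgresql=DB

# Variables that are read through the DB prefix (illustrative names)
DB_USERNAME=tomcat
DB_PASSWORD=<password>
DB_DATABASE=tomcatdb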

You can modify the build configuration by including environment variables with the source-to-image (S2I) build command. For more information, see Maven artifact repository mirrors and JWS for OpenShift.

The following table lists the valid environment variables for the Red Hat JBoss Web Server for OpenShift images:

  • ARTIFACT_DIR
    Display name: N/A
    Description: .war, .ear, and .jar files from this directory will be copied into the deployments directory.
    Example value: target

  • APPLICATION_NAME
    Display name: Application Name
    Description: The name for the application.
    Example value: jws-app

  • CONTEXT_DIR
    Display name: Context Directory
    Description: Path within Git project to build; empty for root project directory.
    Example value: tomcat-websocket-chat

  • GITHUB_WEBHOOK_SECRET
    Display name: Github Webhook Secret
    Description: Github trigger secret.
    Example value: Expression from: [a-zA-Z0-9]{8}

  • GENERIC_WEBHOOK_SECRET
    Display name: Generic Webhook Secret
    Description: Generic build trigger secret.
    Example value: Expression from: [a-zA-Z0-9]{8}

  • HOSTNAME_HTTP
    Display name: Custom HTTP Route Hostname
    Description: Custom hostname for the HTTP service route. Leave blank for the default hostname.
    Example value: <application-name>-<project>.<default-domain-suffix>

  • HOSTNAME_HTTPS
    Display name: Custom HTTPS Route Hostname
    Description: Custom hostname for the HTTPS service route. Leave blank for the default hostname.
    Example value: <application-name>-<project>.<default-domain-suffix>

  • IMAGE_STREAM_NAMESPACE
    Display name: Imagestream Namespace
    Description: Namespace in which the ImageStreams for Red Hat Middleware images are installed.
    Example value: openshift

  • JWS_HTTPS_SECRET
    Display name: Secret Name
    Description: The name of the secret containing the certificate files.
    Example value: jws-app-secret

  • JWS_HTTPS_CERTIFICATE
    Display name: Certificate Name
    Description: The name of the certificate file within the secret.
    Example value: server.crt

  • JWS_HTTPS_CERTIFICATE_KEY
    Display name: Certificate Key Name
    Description: The name of the certificate key file within the secret.
    Example value: server.key

  • JWS_HTTPS_CERTIFICATE_PASSWORD
    Display name: Certificate Password
    Description: The certificate password.
    Example value: P5ssw0rd

  • SOURCE_REPOSITORY_URL
    Display name: Git Repository URL
    Description: Git source URI for the application.
    Example value: https://github.com/jboss-openshift/openshift-quickstarts.git

  • SOURCE_REPOSITORY_REFERENCE
    Display name: Git Reference
    Description: Git branch or tag reference.
    Example value: 1.2

  • MAVEN_MIRROR_URL
    Display name: Maven Mirror URL
    Description: URL of a Maven mirror or repository manager to configure.
    Example value: http://10.0.0.1:8080/repository/internal/

Appendix B. Valves on JWS for OpenShift

You can define the following environment variable to insert the valve component into the request processing pipeline for the associated Catalina container.

  • ENABLE_ACCESS_LOG
    Description: Enable the Access Log Valve to log access messages to the standard output channel.
    Example value: true
    Default value: false
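For example, to enable the Access Log Valve on a deployed application, you can set the environment variable with the oc set env command; the jws-app deployment configuration name is a placeholder:

$ oc set env dc/jws-app ENABLE_ACCESS_LOG=true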

Appendix C. Checking OpenShift logs

You can use the oc logs command to view the OpenShift logs or the logs that are provided by the console for a running container.

Procedure

  • Enter the following command:

    $ oc logs -f <pod_name> <container_name>
    Note

    In the preceding command, replace <pod_name> and <container_name> with appropriate values for your deployment.

    Access logs are stored in the /opt/jws-5.6/tomcat/logs/ directory.

Legal Notice

Copyright © 2023 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.