Red Hat JBoss Web Server for OpenShift
Installing and using Red Hat JBoss Web Server for OpenShift
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction
1.1. Overview of Red Hat JBoss Web Server for OpenShift
The Apache Tomcat 9 component of Red Hat JBoss Web Server (JWS) 5.5 is available as a containerized image designed for OpenShift. Developers can use this image to build, scale, and test Java web applications for deployment across hybrid cloud environments.
For more information about supported configurations of the Middleware products running on OpenShift, refer to the article Support of Red Hat Middleware products and components on Red Hat OpenShift.
Chapter 2. Before You Begin
2.1. The difference between Red Hat JBoss Web Server and JWS for OpenShift
The differences between the JWS for OpenShift images and the regular release of JWS are:
- The location of JWS_HOME inside a JWS for OpenShift image is /opt/jws-5.5/.
- All load balancing is handled by the OpenShift router, not by the Apache HTTP Server mod_cluster or mod_jk connectors.
Documentation for JWS functionality not specific to JWS for OpenShift images is found in the Red Hat JBoss Web Server documentation.
2.2. Version compatibility and support
See the xPaaS table on the OpenShift Container Platform Tested 3.X Integrations page and OpenShift Container Platform Tested 4.X Integrations page for details about OpenShift image version compatibility.
The 5.5 version of JWS for OpenShift images and application templates should be used for deploying new applications.
The 5.4 version of JWS for OpenShift images and application templates is deprecated and no longer receives updates.
2.3. Supported Architectures by JBoss Web Server
JBoss Web Server supports the following architectures:
- x86_64 (AMD64)
- IBM Z (s390x) in the OpenShift environment
- IBM Power (ppc64le) in the OpenShift environment
Different images are supported for different architectures. The examples in this guide demonstrate the commands for the x86_64 architecture. If you are using another architecture, specify the relevant image name in the commands. See the Red Hat Container Catalog for more information about images.
2.4. Health checks for Red Hat container images
All container images available for OpenShift have a health rating associated with them. You can find the health rating for Red Hat JBoss Web Server by navigating to the catalog of container images, searching for JBoss Web Server, and selecting the 5.5 version.
For more information on how OpenShift containers can be tested for liveness and readiness, refer to the following documentation.
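For illustration, liveness and readiness checks for a JWS container can be declared as HTTP probes in the pod template. The path, port, and timing values below are illustrative assumptions, not values mandated by the image:

```yaml
# Hypothetical pod-template fragment; adjust the path and port to your application
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```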
Chapter 3. Get Started
3.1. Initial setup
Before you follow the instructions in this guide, ensure that an OpenShift cluster is already installed and configured. For more information about installing and configuring OpenShift clusters, see the OpenShift Container Platform Installing guide.
The JWS for OpenShift application templates are distributed for Tomcat 9.
3.2. Configure Authentication to the Red Hat Container Registry
Before you can import and use the Red Hat JBoss Web Server image, you must first configure authentication to the Red Hat Container Registry.
Red Hat recommends that you create an authentication token using a registry service account to configure access to the Red Hat Container Registry. This means that you don’t have to use or store your Red Hat account’s username and password in your OpenShift configuration.
- Follow the instructions on Red Hat Customer Portal to create an authentication token using a registry service account.
- Download the YAML file containing the OpenShift secret for the token. You can download the YAML file from the OpenShift Secret tab on your token’s Token Information page.
Create the authentication token secret for your OpenShift project using the YAML file that you downloaded:

$ oc create -f 1234567_myserviceaccount-secret.yaml

Configure the secret for your OpenShift project using the following commands, replacing the secret name below with the name of the secret created in the previous step:

$ oc secrets link default 1234567-myserviceaccount-pull-secret --for=pull
$ oc secrets link builder 1234567-myserviceaccount-pull-secret --for=pull
See the OpenShift documentation for more information on other methods for configuring access to secured registries.
See the Red Hat Customer Portal for more information on configuring authentication to the Red Hat Container Registry.
3.3. Import the Latest Red Hat JBoss Web Server Image Streams and Templates
You must import the latest Red Hat JBoss Web Server for OpenShift image streams and templates for your JDK into the namespace of your OpenShift project.
Log in to the Red Hat Container Registry using your Customer Portal credentials to import or update the Red Hat JBoss Web Server image streams and templates. For more information, see Red Hat Container Registry Authentication.
Import command for JDK 8
This command imports the following image streams and templates.
- The RHEL 8 JDK 8 image stream: jboss-webserver55-openjdk8-tomcat9-openshift-rhel8
- All templates specified in the command.
Import command for JDK 11
This command imports the following image streams and templates.
- The RHEL8 JDK 11 image stream: jboss-webserver55-openjdk11-tomcat9-openshift-rhel8
- All templates specified in the command.
3.3.1. Update Commands
- To update the core JWS 5.5 Tomcat 9 OpenJDK 8 RHEL 8 OpenShift image, run:

  $ oc -n openshift import-image \
    jboss-webserver55-openjdk8-tomcat9-openshift-rhel8:1.0

- To update the core JWS 5.5 Tomcat 9 OpenJDK 11 RHEL 8 OpenShift image, run:

  $ oc -n openshift import-image \
    jboss-webserver55-openjdk11-tomcat9-openshift-rhel8:1.0
The 1.0 tag at the end of each image you import refers to the stream version that is set in the image stream.
3.4. Using the JWS for OpenShift Source-to-Image (S2I) process
To run and configure the JWS for OpenShift images, use the OpenShift S2I process with the application template parameters and environment variables.
The S2I process for the JWS for OpenShift images works as follows:
- If there is a Maven settings.xml file in the configuration/ source directory, it is moved to $HOME/.m2/ of the new image. See the Apache Maven Project website for more information on Maven and the Maven settings.xml file.
- If there is a pom.xml file in the source repository, a Maven build is triggered using the contents of the $MAVEN_ARGS environment variable. By default, the package goal is used with the openshift profile, including the arguments for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga). The results of a successful Maven build are copied to /opt/jws-5.5/tomcat/webapps. This includes all WAR files from the source directory specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target/ directory. Use the MAVEN_ARGS_APPEND environment variable to modify the Maven arguments.
- All WAR files from the deployments/ source directory are copied to /opt/jws-5.5/tomcat/webapps.
- All files in the configuration/ source directory are copied to /opt/jws-5.5/tomcat/conf/ (excluding the Maven settings.xml file).
- All files in the lib/ source directory are copied to /opt/jws-5.5/tomcat/lib/.

Note: If you want to use custom Tomcat configuration files, the file names should be the same as for a normal Tomcat installation, for example context.xml and server.xml.
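Taken together, a source repository laid out for this S2I process might look like the following tree (file names other than the configuration/, deployments/, and lib/ directories are illustrative):

```
<source_root>/
├── pom.xml             # triggers the Maven build if present
├── configuration/      # copied to /opt/jws-5.5/tomcat/conf/
│   ├── settings.xml    # moved to $HOME/.m2/ instead
│   └── context.xml
├── deployments/        # WAR files copied to /opt/jws-5.5/tomcat/webapps
│   └── app.war
└── lib/                # copied to /opt/jws-5.5/tomcat/lib/
    └── extra.jar
```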
See the Artifact Repository Mirrors section for guidance on configuring the S2I process to use a custom Maven artifacts repository mirror.
3.4.1. Create a JWS for OpenShift application using existing Maven binaries
Existing applications are deployed on OpenShift using the oc start-build command.
Prerequisite: An existing .war, .ear, or .jar of the application to deploy on JWS for OpenShift.
Prepare the directory structure on the local file system.
Create a source directory containing any content required by your application that is not included in the binary (if required, see Using the JWS for OpenShift Source-to-Image (S2I) process), then create a deployments/ subdirectory:

$ mkdir -p <build_dir>/deployments

Copy the binaries (.war, .ear, .jar) to deployments/:

$ cp /path/to/binary/<filenames_with_extensions> <build_dir>/deployments/

Note: Application archives in the deployments/ subdirectory of the source directory are copied to the $JWS_HOME/tomcat/webapps/ directory of the image being built on OpenShift. For the application to deploy, the directory hierarchy containing the web application data must be structured correctly (see Section 3.4, “Using the JWS for OpenShift Source-to-Image (S2I) process”).

Log in to the OpenShift instance:

$ oc login <url>

Create a new project if required:

$ oc new-project <project-name>

Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

$ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
jboss-webserver50-tomcat9-openshift

Note: The -n openshift option specifies the project to use. oc get is -n openshift retrieves (get) the image stream resources (is) from the openshift project.

Create the new build configuration, specifying image stream and application name:
$ oc new-build --binary=true \
  --image-stream=jboss-webserver<version>-openjdk8-tomcat9-openshift-rhel8:latest \
  --name=<my-jws-on-openshift-app>

Instruct OpenShift to use the source directory created above for binary input of the OpenShift image build:

$ oc start-build <my-jws-on-openshift-app> --from-dir=./<build_dir> --follow

Create a new OpenShift application based on the image:

$ oc new-app <my-jws-on-openshift-app>

Expose the service to make the application accessible to users:

$ oc expose service/<my-jws-on-openshift-app>

Retrieve the address of the exposed route:

$ oc get routes --no-headers -o custom-columns='host:spec.host' <my-jws-on-openshift-app>

To access the application in your browser: http://<address_of_exposed_route>/<my-war-ear-jar-filename-without-extension>
3.4.2. Example: Creating a JWS for OpenShift application using existing Maven binaries
The example below deploys the tomcat-websocket-chat quickstart by using the procedure from Section 3.4.1, “Create a JWS for OpenShift application using existing Maven binaries”.
3.4.2.1. Prerequisites
Get the WAR application archive or build the application locally.
Clone the source code:

$ git clone https://github.com/jboss-openshift/openshift-quickstarts.git

Build the application:

$ cd openshift-quickstarts/tomcat-websocket-chat/
$ mvn clean package
Prepare the directory structure on the local file system.
Create the source directory for the binary build on your local file system and the deployments/ subdirectory. Copy the WAR archive to deployments/:

[tomcat-websocket-chat]$ ls
pom.xml  README.md  src/  target/

$ mkdir -p ocp/deployments

$ cp target/websocket-chat.war ocp/deployments/
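The preparation steps above can be dry-run locally as follows; an empty placeholder file stands in for the WAR that the quickstart's Maven build produces:

```shell
# Simulate the Maven build output for illustration only
mkdir -p target
touch target/websocket-chat.war

# Lay out the binary-build source directory
mkdir -p ocp/deployments
cp target/websocket-chat.war ocp/deployments/
ls ocp/deployments   # → websocket-chat.war
```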
3.4.2.2. Setting up the example application on OpenShift
Log in to the OpenShift instance:
$ oc login <url>

Create a new project if required:

$ oc new-project jws-bin-demo

Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

$ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
jboss-webserver50-tomcat9-openshift

Create the new build configuration, specifying image stream and application name:

$ oc new-build --binary=true \
  --image-stream=jboss-webserver55-openjdk8-tomcat9-openshift-rhel8:latest \
  --name=jws-wsch-app

Start the binary build. Instruct OpenShift to use the source directory for the binary input of the OpenShift image build:

$ oc start-build jws-wsch-app --from-dir=./ocp --follow

Create a new OpenShift application based on the image:

$ oc new-app jws-wsch-app

Expose the service to make the application accessible to users:

$ oc expose service/jws-wsch-app

Retrieve the address of the exposed route:

$ oc get routes --no-headers -o custom-columns='host:spec.host' jws-wsch-app

Access the application in your browser: http://<address_of_exposed_route>/websocket-chat
3.4.3. Create a JWS for OpenShift application from source code
For detailed instructions on creating new OpenShift applications from source code, see OpenShift.com - Creating an application from source code.
Before proceeding, ensure that the application's data is structured correctly (see Section 3.4, “Using the JWS for OpenShift Source-to-Image (S2I) process”).
Log in to the OpenShift instance:

$ oc login <url>

Create a new project if required:

$ oc new-project <project-name>

Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

$ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
jboss-webserver50-tomcat9-openshift

To create the new OpenShift application from source code using Red Hat JBoss Web Server for OpenShift images, use the --image-stream option:

$ oc new-app \
  <source_code_location> \
  --image-stream=jboss-webserver<version>-openjdk8-tomcat9-openshift-rhel8 \
  --name=<openshift_application_name>

For example:
$ oc new-app \
  https://github.com/jboss-openshift/openshift-quickstarts.git#master \
  --image-stream=jboss-webserver<version>-openjdk8-tomcat9-openshift-rhel8 \
  --context-dir='tomcat-websocket-chat' \
  --name=jws-wsch-app

The source code is added to the image and compiled. The build configuration and services are also created.

To expose the application:

$ oc expose service/<openshift_application_name>

To retrieve the address of the exposed route:

$ oc get routes --no-headers -o custom-columns='host:spec.host' <openshift_application_name>

To access the application in your browser: http://<address_of_exposed_route>/<java_application_name>
3.5. Adding additional JAR files to the tomcat/lib/ directory
Additional JAR files can be added to the tomcat/lib/ directory by using Docker.

To add JAR files to tomcat/lib/:

Start the image in Docker:

$ docker run --network host -i -t -p 8080:8080 ImageURL

Find the CONTAINER ID:

$ docker ps | grep <ImageName>

Copy the library to the tomcat/lib/ directory:

$ docker cp <yourLibrary> <CONTAINER ID>:/opt/jws-5.5/tomcat/lib/

Commit the changes to a new image:

$ docker commit <CONTAINER ID> <NEW IMAGE NAME>

Create a new image tag:

$ docker tag <NEW IMAGE NAME>:latest <NEW IMAGE REGISTRY URL>:<TAG>

Push the image to a registry:

$ docker push <NEW IMAGE REGISTRY URL>
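The same result can also be achieved by building a derived image with a short Dockerfile instead of committing a running container. The base image reference and JAR file name below are illustrative assumptions; substitute the JWS image and library you actually use:

```dockerfile
# Base image reference is illustrative; use the JWS for OpenShift image you imported
FROM registry.redhat.io/jboss-webserver-5/jws55-openjdk8-tomcat9-openshift-rhel8
# Copy the additional library into Tomcat's lib directory
COPY yourLibrary.jar /opt/jws-5.5/tomcat/lib/
```

Build and tag the result with docker build -t <NEW IMAGE REGISTRY URL>:<TAG> . and push it as in the final step above.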
Chapter 4. JWS Operator
4.1. JBoss Web Server Operator
4.1.1. OpenShift Operators
The Operator Framework is a toolkit for managing Kubernetes-native applications, called Operators, in an effective, automated, and scalable way. Operators make it easy to manage complex stateful applications on top of Kubernetes. All Operators are based around three key components: the Operator SDK, the Operator Lifecycle Manager, and OperatorHub.io. These tools allow you to develop your own Operators, manage any Operators you are using on your Kubernetes cluster, and discover or share any Operators the community creates.
The Red Hat JBoss Web Server project provides an Operator to manage its OpenShift images. This section covers how to build, test, and package the OpenShift Operator for JWS.
For full instructions on cluster setup, refer to the ‘Install’ subsection of the OpenShift documentation.
Additionally, the JWS Operator uses different environment variables than the JWS-on-OpenShift setup. A full listing of these parameters can be found here.
At this time, the ‘Use Session Clustering’ functionality is available as a Technology Preview (not supported). Clustering is off by default. The current Operator version uses the DNS Membership Provider, which is limited due to DNS limitations: InetAddress.getAllByName() results are cached, and as a result, session replication may not work while scaling up.
This guide covers installation, deployment, and deletion of the JWS Operator in detail. For a faster, but less detailed, guide, please refer to the quickstarts guide.
Currently, only JWS 5.4 images are supported. Images older than 5.4 are NOT supported.
4.1.2. Installing the JWS Operator
This section covers the installation of the JWS Operator on the OpenShift Container Platform.
4.1.2.1. Prerequisites
- OpenShift Container Platform cluster using an account with cluster admin permissions (web console only)
- OpenShift Container Platform cluster using an account with Operator installation permissions
- oc tool installed on your local system (CLI only)
4.1.2.2. Installing the JWS Operator - web console
- Navigate to the ‘Operators’ tab, found in the menu on the left-hand side.
- This opens the OpenShift OperatorHub. From here, search for JWS and select the ‘JWS Operator’.
- A new menu appears. Select your desired Capacity Level and then click ‘Install’ at the top to install the Operator.
You can now set up the Operator installation by specifying the following three options:
- Installation Mode: Specify a specific namespace on your cluster in which to install the Operator. If you do not specify a namespace, the Operator is installed to all namespaces on your cluster by default.
- Update Channel: The JWS operator is currently available only through one channel.
- Approval Strategy: You can choose Automatic or Manual updates. If you choose Automatic updates for an installed Operator, when a new version of that Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
- Click ‘Install’ at the bottom. If you selected the Manual approval strategy, you must approve the install plan before installation is complete. The JWS Operator will now appear in the ‘Installed Operators’ section of the ‘Operators’ tab.
4.1.2.3. Installing the JWS Operator - command line interface
Inspect the JWS Operator to verify its supported installModes and available channels using the following commands:

$ oc get packagemanifests -n openshift-marketplace | grep jws
jws-operator   Red Hat Operators   16h

$ oc describe packagemanifests jws-operator -n openshift-marketplace | grep "Catalog Source"
Catalog Source:  redhat-operators

An OperatorGroup is an OLM resource that selects target namespaces in which to generate the required RBAC access for all Operators in the same namespace as the OperatorGroup.
The namespace to which you subscribe the Operator must have an OperatorGroup that matches the install mode of the Operator: either AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses AllNamespaces mode, the openshift-operators namespace already has an appropriate OperatorGroup in place.
However, if the Operator uses SingleNamespace mode, exactly one OperatorGroup must be created in that namespace. To check the actual list of OperatorGroups, use the following command:

$ oc get operatorgroups -n <project_name>

Example output of an OperatorGroup listing:

NAME      AGE
mygroup   17h

Note: The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes when you choose SingleNamespace mode.
Create an OperatorGroup object YAML file, for example OperatorGroupExample.yaml, where <project_name> is the namespace of the project where you install the Operator (oc project -q) and <operatorgroup_name> is the name of the OperatorGroup.
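A minimal SingleNamespace OperatorGroup follows the standard OLM object shape, as in this sketch (replace the placeholders with your own names):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <project_name>
spec:
  targetNamespaces:
  - <project_name>
```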
Create the OperatorGroup object using the following command:
$ oc apply -f OperatorGroupExample.yaml
Create a Subscription object YAML file, for example jws-operator-sub.yaml, where <project_name> is the namespace of the project where you install the Operator (oc project -q). To install in all namespaces, use openshift-operators.
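A Subscription for the JWS Operator typically follows the standard OLM shape shown below; the channel name is an assumption, so verify it against the packagemanifest inspected in step 1:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jws-operator
  namespace: <project_name>
spec:
  channel: alpha          # assumed channel name; verify with the packagemanifest
  name: jws-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```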
The source is the Catalog Source. This is the value from the oc describe packagemanifests jws-operator -n openshift-marketplace | grep "Catalog Source:" command in step 1 of this section. The value should be redhat-operators.

Create the Subscription object from the YAML file with the following command:

$ oc apply -f jws-operator-sub.yaml

To verify a successful installation, run the following command:
$ oc get csv -n <project_name>
NAME                  DISPLAY                     VERSION   REPLACES   PHASE
jws-operator.v1.0.0   JBoss Web Server Operator   1.0.0                Succeeded
4.1.3. Deploying an existing JWS image
Ensure your Operator is installed with the following command:

$ oc get deployment.apps/jws-operator
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
jws-operator   1/1     1            1           15h

Or, if you need more detailed output:

$ oc describe deployment.apps/jws-operator

Prepare your image and push it to the desired location. In this example, it is pushed to quay.io/<USERNAME>/tomcat-demo:latest.

Create a Custom Resource WebServer YAML file. In this example, a file called webservers_cr.yaml is used.

Deploy your web application, from the directory in which you created the file, with the following command:

$ oc apply -f webservers_cr.yaml
webserver/example-image-webserver created

Note: The Operator creates a route automatically. You can verify the route with the following command:
$ oc get routes

For more information on routes, see the OpenShift documentation.
If you need to delete the webserver you created in step 4:

$ oc delete webserver example-image-webserver

OR

$ oc delete -f webservers_cr.yaml
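For reference, a minimal webservers_cr.yaml for the deployment example in this section might look like the following. The apiVersion and field names here are based on the jws-operator v1.0 CRD and should be treated as an assumption; verify them with oc explain webserver on your cluster:

```yaml
apiVersion: web.servers.org/v1alpha1   # assumed CRD group/version; verify with oc explain webserver
kind: WebServer
metadata:
  name: example-image-webserver
spec:
  applicationName: jws-app             # illustrative application name
  replicas: 2
  webImage:
    applicationImage: quay.io/<USERNAME>/tomcat-demo:latest
```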
4.1.4. Deleting Operators from a cluster
4.1.4.1. Prerequisites
- OpenShift Container Platform cluster with admin privileges (alternatively, you can circumvent this requirement by following these instructions)
- oc tool installed on your local system (CLI only)
4.1.4.2. Deleting an operator from a cluster - web console
- In the left hand menu, click ‘Operators’ → ‘Installed Operators’
- Underneath ‘Operator Details’ select the ‘Actions’ menu, and then click ‘Uninstall Operator’
- Selecting this option removes the Operator, any Operator deployments, and pods. However, removing the Operator does not remove any of its custom resource definitions (CRDs) or custom resources (CRs). If your Operator has deployed applications on the cluster or configured off-cluster resources, these continue to run and must be cleaned up manually.
4.1.4.3. Deleting an operator from a cluster - command line interface
Check the current version of the subscribed Operator in the currentCSV field by using the following command:

$ oc get subscription jws-operator -n <project_name> -o yaml | grep currentCSV
  f:currentCSV: {}
  currentCSV: jws-operator.v1.0.0

Note: In the above commands, <project_name> refers to the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, use openshift-operators in place of <project_name>.

Delete the Operator's subscription using the following command:

$ oc delete subscription jws-operator -n <project_name>

Delete the CSV for the Operator in the target namespace, using the currentCSV value from the previous step:

$ oc delete clusterserviceversion <currentCSV> -n <project_name>

where <currentCSV> is the value obtained in step 1, for example:

$ oc delete clusterserviceversion jws-operator.v1.0.0
clusterserviceversion.operators.coreos.com "jws-operator.v1.0.0" deleted
4.1.5. Additional resources
For additional information on Operators, refer to the official OpenShift documentation.
Chapter 5. Reference
5.1. Source-to-Image (S2I)
The Red Hat JBoss Web Server for OpenShift image includes S2I scripts and Maven.
5.1.1. Using Maven artifact repository mirrors with JWS for OpenShift
A Maven repository stores build artifacts and dependencies, such as project JARs, library JARs, plugins, or any other project-specific artifacts. It also defines locations from which to download artifacts during the S2I build. In addition to using the Maven Central Repository, some organizations also deploy a local custom repository (mirror).
Benefits of using a local mirror are:
- Availability of a synchronized mirror, which is geographically closer and faster.
- Greater control over the repository content.
- Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
- Improved build times.
A Maven repository manager can serve as a local cache for a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/, the S2I build can use this repository. To use an internal Maven repository, add the MAVEN_MIRROR_URL environment variable to the build configuration of the application.
For a new build configuration, use the --build-env option with oc new-app or oc new-build.
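For example, a new build with the mirror configured from the start might look like the following sketch. The image stream placeholder is an assumption; the source repository, context directory, and mirror URL are taken from examples elsewhere in this document:

```shell
# <image_stream> is a placeholder for the JWS builder image stream in your cluster
$ oc new-app <image_stream>~https://github.com/jboss-openshift/openshift-quickstarts.git \
    --context-dir=tomcat-websocket-chat \
    --build-env MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
```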
For an existing build configuration:
1. Identify the build configuration that requires the `MAVEN_MIRROR_URL` variable:

   ```shell
   $ oc get bc -o name
   buildconfig/jws
   ```

2. Add the `MAVEN_MIRROR_URL` environment variable to `buildconfig/jws`:

   ```shell
   $ oc env bc/jws MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
   buildconfig "jws" updated
   ```

3. Verify that the build configuration has been updated:

   ```shell
   $ oc env bc/jws --list
   # buildconfigs jws
   MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
   ```

4. Schedule a new build of the application by using `oc start-build`.
During the application build, Maven dependencies are downloaded from the repository manager instead of the default public repositories. Once the build has finished, the mirror contains all the dependencies that were retrieved and used during the build.
5.1.2. Scripts included on the Red Hat JBoss Web Server for OpenShift image
- `run`: runs Catalina (Tomcat).
- `assemble`: uses Maven to build the source, create the package (`.war`), and move it to the `$JWS_HOME/tomcat/webapps` directory.
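Conceptually, the `assemble` step behaves like the following sketch. This is an illustration of the described behavior, not the actual script shipped in the image:

```shell
# Build the application from source with Maven
mvn package

# Deploy the resulting WAR to Tomcat's webapps directory
cp target/*.war "$JWS_HOME/tomcat/webapps/"
```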
5.1.3. JWS for OpenShift datasources
There are three types of datasources:
- Default internal datasources: PostgreSQL, MySQL, and MongoDB. These datasources are available on OpenShift by default through the Red Hat Registry and do not require additional environment files to be configured for image streams. To make a database discoverable and usable as a datasource, set the `DB_SERVICE_PREFIX_MAPPING` environment variable to the name of the OpenShift service.
- Other internal datasources: datasources that are not available by default through the Red Hat Registry but run on OpenShift. Configuration of these datasources is provided by environment files added to OpenShift Secrets.
- External datasources: datasources that do not run on OpenShift. Configuration of external datasources is also provided by environment files added to OpenShift Secrets.
The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the ENV_FILES environment property.
Datasources are automatically created based on the value of certain environment variables. The most important environment variable is DB_SERVICE_PREFIX_MAPPING, which defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets, where:
- `POOLNAME` is used as the pool-name in the datasource.
- `DATABASETYPE` is the database driver to use.
- `PREFIX` is the prefix used in the names of environment variables that configure the datasource.
For each POOLNAME-DATABASETYPE=PREFIX triplet defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script, which is executed when the image runs, creates a separate datasource.
For a full list of datasource configuration environment variables, see the Datasource Configuration Environment Variables list.
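As an illustration, an environment file for a hypothetical PostgreSQL service might look like the following sketch. The pool name `testds`, the `TEST` prefix, and the `TEST_*` variable names are assumptions chosen for this example, not values taken from this document:

```
# One POOLNAME-DATABASETYPE=PREFIX triplet: pool "testds", driver "postgresql", prefix "TEST"
DB_SERVICE_PREFIX_MAPPING=testds-postgresql=TEST

# Connection settings read from variables named with the TEST prefix (assumed names)
TEST_DATABASE=testdb
TEST_USERNAME=testuser
TEST_PASSWORD=testpass
```

Such a file would be added to an OpenShift Secret for the project and referenced through the ENV_FILES property, as described above.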
5.1.4. JWS for OpenShift compatible environment variables
The build configuration can be modified by adding environment variables to the Source-to-Image build command (see Section 5.1.1, “Using Maven artifact repository mirrors with JWS for OpenShift”). The valid environment variables for the Red Hat JBoss Web Server for OpenShift images are:
| Variable Name | Display Name | Description | Example Value |
|---|---|---|---|
| ARTIFACT_DIR | N/A | | target |
| APPLICATION_NAME | Application Name | The name for the application | jws-app |
| CONTEXT_DIR | Context Directory | Path within Git project to build; empty for root project directory | tomcat-websocket-chat |
| GITHUB_WEBHOOK_SECRET | Github Webhook Secret | Github trigger secret | Expression from: [a-zA-Z0-9]{8} |
| GENERIC_WEBHOOK_SECRET | Generic Webhook Secret | Generic build trigger secret | Expression from: [a-zA-Z0-9]{8} |
| HOSTNAME_HTTP | Custom HTTP Route Hostname | Custom hostname for the HTTP service route. Leave blank for the default hostname | <application-name>-<project>.<default-domain-suffix> |
| HOSTNAME_HTTPS | Custom HTTPS Route Hostname | Custom hostname for the HTTPS service route. Leave blank for the default hostname | <application-name>-<project>.<default-domain-suffix> |
| IMAGE_STREAM_NAMESPACE | Imagestream Namespace | Namespace in which the ImageStreams for Red Hat Middleware images are installed | openshift |
| JWS_HTTPS_SECRET | Secret Name | The name of the secret containing the certificate files | jws-app-secret |
| JWS_HTTPS_CERTIFICATE | Certificate Name | The name of the certificate file within the secret | server.crt |
| JWS_HTTPS_CERTIFICATE_KEY | Certificate Key Name | The name of the certificate key file within the secret | server.key |
| JWS_HTTPS_CERTIFICATE_PASSWORD | Certificate Password | The Certificate Password | P5ssw0rd |
| JWS_ADMIN_USERNAME | JWS Admin Username | JWS Admin account username | ADMIN |
| JWS_ADMIN_PASSWORD | JWS Admin Password | JWS Admin account password | P5sw0rd |
| SOURCE_REPOSITORY_URL | Git Repository URL | Git source URI for Application | https://github.com/jboss-openshift/openshift-quickstarts.git |
| SOURCE_REPOSITORY_REFERENCE | Git Reference | Git branch/tag reference | 1.2 |
| MAVEN_MIRROR_URL | Maven Mirror URL | URL of a Maven mirror/repository manager to configure. | http://10.0.0.1:8080/repository/internal/ |
5.2. Valves on JWS for OpenShift
5.2.1. JWS for OpenShift compatible environment variables (valve component)
You can define the following environment variables to insert the valve component into the request processing pipeline for the associated Catalina container.
| Variable Name | Description | Example Value | Default Value |
|---|---|---|---|
| ENABLE_ACCESS_LOG | Enable the Access Log Valve to log access messages to the standard output channel. | true | false |
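For example, access logging could be enabled on an existing application by setting the variable on its deployment configuration. This is a sketch; `jws-app` is a hypothetical application name:

```shell
# Set ENABLE_ACCESS_LOG on the deployment configuration; pods are redeployed with the new value
$ oc set env dc/jws-app ENABLE_ACCESS_LOG=true
```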
5.3. Checking logs
To view the OpenShift logs or the logs provided by a running container’s console, use the following command:
```shell
$ oc logs -f <pod_name> <container_name>
```
Access logs are stored in /opt/jws-5.5/tomcat/logs/.
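To inspect the access log files directly, you can open a remote shell in the pod. This is a sketch; `<pod_name>` is a placeholder, and the exact log file names depend on the Access Log Valve configuration:

```shell
# Open a shell inside the running pod
$ oc rsh <pod_name>

# List the Tomcat log directory from inside the container
ls /opt/jws-5.5/tomcat/logs/
```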