Red Hat JBoss Web Server Operator


Red Hat JBoss Web Server 6.2

Installing and using the Red Hat JBoss Web Server Operator 2.x for OpenShift

Red Hat Customer Content Services

Abstract

Install and use the Red Hat JBoss Web Server Operator 2.x to manage web applications in Red Hat OpenShift

To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one.

Procedure

  1. Click the following link to create a ticket.
  2. Enter a brief description of the issue in the Summary.
  3. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation.
  4. Click Create to create the issue and route it to the appropriate documentation team.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Red Hat JBoss Web Server Operator

An Operator is a Kubernetes-native application that makes it easy to manage complex stateful applications in Kubernetes and OpenShift environments. Red Hat JBoss Web Server (JWS) provides an Operator to manage JWS for OpenShift images. You can use the JWS Operator to create, configure, manage, and seamlessly upgrade instances of web server applications in OpenShift.

Operators include the following key concepts:

  • The Operator Framework is a toolkit to manage Operators in an effective, automated, and scalable way. The Operator Framework consists of three main components:

    • You can use OperatorHub to discover Operators that you want to install.
    • You can use the Operator Lifecycle Manager (OLM) to install and manage Operators in your OpenShift cluster.
    • You can use the Operator SDK if you want to develop your own custom Operators.
  • An Operator group is an OLM resource that provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate role-based access control (RBAC) for all Operators that are deployed in the same namespace as the OperatorGroup object.
  • Custom resource definitions (CRDs) are a Kubernetes extension mechanism that Operators use. CRDs allow the custom objects that an Operator manages to behave like native Kubernetes objects. The JWS Operator provides a set of CRD parameters that you can specify in custom resource files for web server applications that you want to deploy.

This document describes how to install the JWS Operator, deploy an existing JWS image, and delete Operators from a cluster. This document also provides details of the CRD parameters that the JWS Operator provides.

Note

Before you follow the instructions in this guide, ensure that an OpenShift cluster is already installed and configured. For more information about installing and configuring OpenShift clusters, see the OpenShift Container Platform Installing guide.

For a faster but less detailed guide to deploying a prepared image or building an image from an existing image stream, see the JWS Operator QuickStart guide.

Important

Red Hat supports images for JWS 5.4 or later versions only.

Chapter 2. What is new in JWS Operator 2.x?

JWS Operator 2.x provides level-2 Operator capabilities such as seamless integration. JWS Operator 2.x also supports Red Hat JBoss Web Server metering labels and includes some new or enhanced Custom Resource Definition (CRD) parameters.

Important

Due to a known issue, seamless upgrades do not work properly between versions 2.2 and 2.3 of the JWS Operator. If you have an existing Operator 2.2.x installation, you must first uninstall Operator 2.2.x, as described in JWS Operator deletion from a cluster. Then install the latest Operator 2.3.x version, as described in JWS Operator installation from OperatorHub.

2.1. What is new in the JWS Operator 2.3 release?

The JWS Operator 2.3 release includes the following new features, enhancements, and deprecations.

Operator package name change

From JWS Operator 2.3 onward, the package names for the JWS Operator and its respective bundle are changed to jboss-webserver-operator-container-<version> and jboss-webserver-operator-bundle-container-<version>.

This change supersedes the behavior in earlier releases where the package names were jboss-webserver-5-operator-container-<version> and jboss-webserver-5-operator-bundle-container-<version>.

This removal of the JBoss Web Server major version from the package names reflects the fact that the Operator is version-agnostic across different major versions of JBoss Web Server.

New sourceRepositorySecret parameter

JWS Operator 2.3 introduces a sourceRepositorySecret parameter under the webImageStream:webSources hierarchy in the CRD. This parameter specifies the secret for a private repository that contains the application source files.

For more information, see JWS Operator CRD parameters.
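For example, a WebServer custom resource might reference the secret as in the following sketch. The secret name my-repo-secret is a placeholder, and sibling fields under webSources are omitted:

```yaml
apiVersion: web.servers.org/v1alpha1
kind: WebServer
metadata:
  name: example-webserver
spec:
  applicationName: jws-app
  webImageStream:
    webSources:
      # Secret for the private repository that contains the
      # application source files (placeholder name)
      sourceRepositorySecret: my-repo-secret
```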

Removed Operator responsibility for managing cluster-wide configuration for monitoring system

JWS Operator 2.3 no longer automatically manages the cluster-wide configuration that is required for the cluster monitoring system. From JWS Operator 2.3 onward, users must maintain responsibility for creating this cluster-wide configuration file for monitoring.

For more information, see Management of cluster-wide configuration for monitoring.

Fixed issues

JWS Operator 2.3 includes fixes for various issues that were observed in earlier releases.

2.2. What is new in the JWS Operator 2.2 release?

The JWS Operator 2.2 release includes the following new features and enhancements.

New volumeSpec parameter

JWS Operator 2.2 introduces a volumeSpec parameter in the CRD. This parameter specifies the volumes that are to be mounted.

The volumeSpec parameter contains persistentVolumeClaims, secrets, configMaps, and volumeClaimTemplates fields.

For more information, see JWS Operator CRD parameters.
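For example, the volumeSpec parameter and its fields might appear in a WebServer custom resource as in the following sketch. The exact schema of each field is defined in the CRD; the entries here are placeholders:

```yaml
spec:
  volumeSpec:
    persistentVolumeClaims:   # persistent volume claims to mount
      # ...
    secrets:                  # secrets to mount
      # ...
    configMaps:               # ConfigMaps to mount
      # ...
    volumeClaimTemplates:     # templates for per-pod volume claims
      # ...
```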

2.3. What is new in the JWS Operator 2.1 release?

The JWS Operator 2.1 release includes the following new features, enhancements, and deprecations.

New webhookSecrets parameter

JWS Operator 2.1 introduces a webhookSecrets parameter under the webImageStream:webSources hierarchy in the CRD. This parameter specifies secret names for triggering a build through a generic, GitHub, or GitLab webhook.

The webhookSecrets parameter contains generic, github, and gitlab fields.

For more information, see JWS Operator CRD parameters.
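For example, a WebServer custom resource might specify the webhook secret names as in the following sketch. The secret names are placeholders, and you need to set only the field that matches your webhook type:

```yaml
spec:
  webImageStream:
    webSources:
      webhookSecrets:
        generic: jws-generic-secret   # secret name for a generic webhook (placeholder)
        github: jws-github-secret     # secret name for a GitHub webhook (placeholder)
        gitlab: jws-gitlab-secret     # secret name for a GitLab webhook (placeholder)
```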

New tlsConfig parameter

JWS Operator 2.1 introduces a tlsConfig parameter in the CRD. This parameter specifies the TLS configuration for a web server.

The tlsConfig parameter contains routeHostname, certificateVerification, tlsSecret, and tlsPassword fields.

For more information, see JWS Operator CRD parameters.
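For example, a WebServer custom resource might configure TLS as in the following sketch. All values are placeholders; the allowed values for each field are described in JWS Operator CRD parameters:

```yaml
spec:
  tlsConfig:
    routeHostname: my-app.example.com   # hostname for the TLS route (placeholder)
    tlsSecret: my-tls-secret            # secret that contains the certificate and key (placeholder)
    tlsPassword: changeit               # password for the private key (placeholder)
    certificateVerification: required   # client certificate verification setting (placeholder)
```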

New environmentVariables parameter

JWS Operator 2.1 introduces an environmentVariables parameter in the CRD. This parameter specifies the environment variables for the deployment.

For more information, see JWS Operator CRD parameters.
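For example, assuming the parameter follows the standard Kubernetes environment variable list format, a sketch might look like this; the variable name and value are placeholders:

```yaml
spec:
  environmentVariables:
    - name: CATALINA_OPTS   # placeholder variable
      value: "-Xmx512m"
```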

New persistentLogs parameter

JWS Operator 2.1 introduces a persistentLogs parameter in the CRD. This parameter specifies persistent volume and logging configuration.

The persistentLogs parameter contains catalinaLogs, enableAccessLogs, volumeName, and storageClass fields.

For more information, see JWS Operator CRD parameters.
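For example, assuming boolean flags for the log types, a persistentLogs sketch might look like this; the volume name and storage class are placeholders:

```yaml
spec:
  persistentLogs:
    catalinaLogs: true            # persist Catalina log files
    enableAccessLogs: true        # persist access log files
    volumeName: my-logs-volume    # persistent volume name (placeholder)
    storageClass: standard        # storage class (placeholder)
```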

New podResources parameter

JWS Operator 2.1 introduces a podResources parameter in the CRD. This parameter specifies the configuration of the central processing unit (CPU) and memory resources that the web server uses.

For more information, see JWS Operator CRD parameters.
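For example, assuming the parameter follows the standard Kubernetes resource requirements format, a sketch might look like this; the CPU and memory values are placeholders:

```yaml
spec:
  podResources:
    requests:        # placeholder values
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
```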

New securityContext parameter

JWS Operator 2.1 introduces a securityContext parameter in the CRD. This parameter defines the security capabilities that are required to run the application.

For more information, see JWS Operator CRD parameters.
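For example, assuming the parameter follows the standard Kubernetes security context format, a sketch might look like this; the specific settings shown are placeholders:

```yaml
spec:
  securityContext:
    runAsNonRoot: true                # placeholder settings
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
```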

New useInsightsClient parameter

JWS Operator 2.1 introduces a useInsightsClient parameter in the CRD. This parameter indicates whether to create a connection with the runtimes inventory operator that Red Hat provides.

You can enable debug logging for the Insights client by setting the INSIGHTS_DEBUG environment variable to true.

Note

The useInsightsClient parameter requires use of a Red Hat JBoss Web Server 6.1 or later image.

This parameter is available as a Technology Preview only.

For more information, see JWS Operator CRD parameters.
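For example, a WebServer custom resource might enable the Insights client and its debug logging as in the following sketch; the environmentVariables entry assumes the standard environment variable list format:

```yaml
spec:
  useInsightsClient: true
  environmentVariables:
    - name: INSIGHTS_DEBUG   # enables debug logging for the Insights client
      value: "true"
```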

Deprecated genericWebhookSecret parameter

JWS Operator 2.1 deprecates the genericWebhookSecret parameter that is under the webImageStream.webSources.webSourcesParams hierarchy in the CRD.

This parameter is superseded by the webImageStream.webSources.webhookSecrets.generic parameter.

For more information, see JWS Operator CRD parameters.

Deprecated githubWebhookSecret parameter

JWS Operator 2.1 deprecates the githubWebhookSecret parameter that is under the webImageStream.webSources.webSourcesParams hierarchy in the CRD.

This parameter is superseded by the webImageStream.webSources.webhookSecrets.github parameter.

For more information, see JWS Operator CRD parameters.

Enhanced format for default generated hostnames

JWS Operator 2.1 uses an enhanced format for default generated hostnames that can consist of an application name and a project name that are each up to 63 characters in length.

2.4. What is new in the JWS Operator 2.0 release?

The JWS Operator 2.0 release includes the following new features and enhancements.

Level-2 Operator capabilities

JWS Operator 2.0 provides the following level-2 Operator capability features:

  • Enables seamless upgrades
  • Supports patch and minor version upgrades
  • Manages web servers deployed by the JWS Operator 1.1.x

Level-2 seamless integration for new images

The Deployment object definition includes a trigger that OpenShift uses to deploy new pods when a new image is pushed to the image stream. The image stream can monitor the repository for new images, or you can notify the image stream that a new image is available for use.

For more information, see Enabling level-2 seamless integration for new images.

Level-2 seamless integration for rebuilding existing images

The BuildConfig object definition includes a trigger for image stream updates and a webhook, which is a GitHub, GitLab, or Generic webhook, that enables the rebuilding of images when the webhook is triggered.

For more information about creating a secret for a webhook, see Creating a secret for a generic or GitHub webhook.

For more information about configuring a generic or GitHub webhook in a custom resource WebServer file, see JWS Operator CRD parameters.

Support for Red Hat JBoss Web Server metering labels

JWS Operator 2.0 supports the ability to add metering labels to the Red Hat JBoss Web Server pods that the JWS Operator creates.

Red Hat JBoss Web Server can use the following metering labels:

  • com.company: Red_Hat
  • rht.prod_name: Red_Hat_Runtimes
  • rht.prod_ver: 2026-Q1
  • rht.comp: JBoss_Web_Server
  • rht.comp_ver: 6.2.0
  • rht.subcomp: Tomcat 10
  • rht.subcomp_t: application

    You can add labels under the metadata section in the custom resource WebServer file for a web application that you want to deploy. For example:

    apiVersion: web.servers.org/v1alpha1
    kind: WebServer
    metadata:
      name: <my-image>
      labels:
        com.company: Red_Hat
        rht.prod_name: Red_Hat_Runtimes
        rht.prod_ver: 2026-Q1
        rht.comp: JBoss_Web_Server
        rht.comp_ver: 6.2.0
        rht.subcomp: Tomcat 10
        rht.subcomp_t: application
    spec:
    Note

    If you change any label key or label value for a deployed web server, the JWS Operator redeploys the web server application. If the deployed web server was built from source code, the JWS Operator also rebuilds the web server application.

Enhanced webImage parameter

In the JWS Operator 2.0 release, the webImage parameter in the CRD contains the following additional fields:

  • imagePullSecret

    The secret that the JWS Operator uses to pull images from the repository

    Note

    The secret must contain the key .dockerconfigjson. The JWS Operator mounts and uses the secret (for example, --authfile /mount_point/.dockerconfigjson) to pull the images from the repository. The Secret object definition file might contain server username and password values or tokens to allow access to images in the image stream, the builder image, and images built by the JWS Operator.

  • webApp

    A set of parameters that describe how the JWS Operator builds the web server application
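A secret with the required .dockerconfigjson key can be defined as a standard Kubernetes dockerconfigjson secret. In the following sketch, my-pull-secret is a placeholder name and the data value is the Base64-encoded Docker configuration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-pull-secret               # placeholder name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker configuration>
```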

Enhanced webApp parameter

In the JWS Operator 2.0 release, the webApp parameter in the CRD contains the following additional fields:

  • name

    The name of the web server application

  • sourceRepositoryURL

    The URL where the application source files are located

  • sourceRepositoryRef

    The branch of the source repository that the Operator uses

  • sourceRepositoryContextDir

    The subdirectory where the pom.xml file is located and where the mvn install command must be run

  • webAppWarImage

    The URL of the images where the JWS Operator pushes the built image

  • webAppWarImagePushSecret

    The secret that the JWS Operator uses to push images to the repository

  • builder

    A set of parameters that contain all the information required to build the web application and create and push the image to the image repository

    Note

    To ensure that the builder can operate successfully and run commands with different user IDs, the builder must have access to the anyuid security context constraint (SCC).

    To grant the builder access to the anyuid SCC, enter the following command:

    oc adm policy add-scc-to-user anyuid -z builder

    The builder parameter contains the following fields:

    • image

      The image of the container where the web application is built (for example, quay.io/$user/tomcat10-buildah)

    • imagePullSecret

      The secret (if specified) that the JWS Operator uses to pull the builder image from the repository

    • applicationBuildScript

      The script that the builder image uses to build the application .war file and move it to the /mnt directory

      Note

      If you do not specify a value for this parameter, the builder image uses a default script that uses Maven and Buildah.
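Putting these fields together, a webApp section with a builder might look like the following sketch. The repository URL, branch, and secret names are placeholders, and applicationBuildScript is omitted so that the default Maven and Buildah script is used:

```yaml
spec:
  webApp:
    name: my-app
    sourceRepositoryURL: https://github.com/<user>/<repository>.git   # placeholder
    sourceRepositoryRef: main                                         # placeholder branch
    builder:
      image: quay.io/<user>/tomcat10-buildah
      imagePullSecret: my-builder-pull-secret                         # placeholder
      # applicationBuildScript omitted: the default Maven and Buildah
      # script is used
```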

Chapter 3. JWS Operator installation from OperatorHub

You can install the JWS Operator from OperatorHub to facilitate the deployment and management of JBoss Web Server applications in an OpenShift cluster. OperatorHub is a component of the Operator Framework that you can use to discover Operators that you want to install. OperatorHub works in conjunction with the Operator Lifecycle Manager (OLM), which installs and manages Operators in a cluster.

Important

Due to a known issue, seamless upgrades do not work properly between versions 2.2 and 2.3 of the JWS Operator. If you have an existing Operator 2.2.x installation, you must first uninstall Operator 2.2.x, as described in JWS Operator deletion from a cluster. Then install the latest Operator 2.3.x version as described in this section.

You can install the JWS Operator from OperatorHub in either of the following ways:

If you want to install the JWS Operator by using a graphical user interface, you can use the OpenShift web console to install the JWS Operator.

Note

When you install the JWS Operator by using the web console, and the Operator is using SingleNamespace installation mode, the OperatorGroup and Subscription objects are installed automatically.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster by using an account with cluster administrator and Operator installation permissions.

Procedure

  1. Open the web console and select Operators > OperatorHub.
  2. In the Filter by keyword search field, type "JWS".
  3. Select the JWS Operator.
  4. On the JBoss Web Server Operator menu, select the Capability level that you want to use and click Install.
  5. On the Install Operator page, perform the following steps:

    1. Select the Update channel where the JWS Operator is available.

      Note

      The JWS Operator is currently available through one channel only.

    2. Select the Installation mode for the Operator.

      You can install the Operator to all namespaces or to a specific namespace on the cluster. If you select the specific namespace option, use the Installed Namespace field to specify the namespace where you want to install the Operator.

      Note

      If you do not specify a namespace, the Operator is installed to all namespaces on the cluster by default.

    3. Select the Approval strategy for the Operator.

      Consider the following guidelines:

      • If you select Automatic updates, when a new version of the Operator is available, the OLM upgrades the running instance of your Operator automatically.
      • If you select Manual updates, when a newer version of the Operator is available, the OLM creates an update request. As a cluster administrator, you must then manually approve the update request to ensure that the Operator is updated to the new version.
  6. Click Install.

    Note

    If you have selected a Manual approval strategy, you must approve the install plan before the installation is complete.

    The JWS Operator then appears in the Installed Operators section of the Operators tab.

If you want to install the JWS Operator by using a command-line interface, you can use the oc command-line tool to install the JWS Operator. The JWS Operator that Red Hat provides is named jws-operator.

The steps to install the JWS Operator from the command line include verifying the supported installation modes and available channels for the Operator and creating a Subscription object. Depending on the installation mode that the Operator uses, you might also need to create an Operator group in the project namespace before you create the Subscription object.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster by using an account with Operator installation permissions.
  • You have installed the oc tool on your local system.

Procedure

  1. To inspect the JWS Operator, perform the following steps:

    1. View the list of JWS Operators that are available to the cluster from OperatorHub:

      $ oc get packagemanifests -n openshift-marketplace | grep jws

      The preceding command displays the name, catalog, and age of each available Operator.

      For example:

      NAME            CATALOG             AGE
      jws-operator    Red Hat Operators   16h
    2. Inspect the JWS Operator to verify the supported installation modes and available channels for the Operator:

      $ oc describe packagemanifests jws-operator -n openshift-marketplace
  2. Check the actual list of Operator groups:

    $ oc get operatorgroups -n <project_name>

    In the preceding example, replace <project_name> with your OpenShift project name.

    The preceding command displays the name and age of each available Operator group.

    For example:

    NAME       AGE
    mygroup    17h
  3. If you need to create an Operator group, perform the following steps:

    Note

    If the Operator you want to install uses SingleNamespace installation mode and you do not already have an appropriate Operator group in place, you must complete this step to create an Operator group. You must ensure that you create only one Operator group in the specified namespace.

    If the Operator you want to install uses AllNamespaces installation mode or you already have an appropriate Operator group in place, you can ignore this step.

    1. Create a YAML file for the OperatorGroup object.

      For example:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <project_name>
      spec:
        targetNamespaces:
        - <project_name>

      In the preceding example, replace <operatorgroup_name> with the name of the Operator group that you want to create, and replace <project_name> with the name of the project where you want to install the Operator. To view the project name, you can run the oc project -q command.

    2. Create the OperatorGroup object from the YAML file:

      $ oc apply -f <filename>.yaml

      In the preceding example, replace <filename>.yaml with the name of the YAML file that you have created for the OperatorGroup object.

  4. To create a Subscription object, perform the following steps:

    1. Create a YAML file for the Subscription object.

      For example:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
          name: jws-operator
          namespace: <project_name>
      spec:
          channel: alpha
          name: jws-operator
          source: redhat-operators
          sourceNamespace: openshift-marketplace

      In the preceding example, replace <project_name> with the name of the project where you want to install the Operator. To view the project name, you can run the oc project -q command.

      The namespace that you specify must have an OperatorGroup object that has the same installation mode setting as the Operator. If the Operator uses AllNamespaces installation mode, replace <project_name> with openshift-operators, which already provides an appropriate Operator group. If the Operator uses SingleNamespace installation mode, ensure that this namespace has only one OperatorGroup object.

      Ensure that the source setting matches the Catalog Source value that was displayed when you verified the available channels for the Operator (for example, redhat-operators).

    2. Create the Subscription object from the YAML file:

      $ oc apply -f <filename>.yaml

      In the preceding example, replace <filename>.yaml with the name of the YAML file that you have created for the Subscription object.

Verification

  • To verify that the JWS Operator is installed successfully, enter the following command:

    $ oc get csv -n <project_name>

    In the preceding example, replace <project_name> with the name of the project where you have installed the Operator.

    The preceding command displays details of the installed Operator.

    For example:

    NAME                  DISPLAY        VERSION   REPLACES              PHASE
    jws-operator.v2.3.x   JWS Operator   2.3.x     jws-operator.v2.2.y   Succeeded

    In the preceding output, 2.3.x represents the current Operator version (for example, 2.3.0), and 2.2.y represents the previous Operator version that the current version replaces (for example, 2.2.4).

Chapter 4. Deploying an existing JWS image

You can use the JWS Operator to facilitate the deployment of an existing image for a web server application that you want to deploy in an OpenShift cluster. In this situation, you must create a custom resource WebServer file for the web server application that you want to deploy. The JWS Operator uses the custom resource WebServer file to handle the application deployment.

Prerequisites

  • You have installed the JWS Operator from OperatorHub.

    To ensure that the JWS Operator is installed, enter the following command:

    $ oc get deployment.apps/jws-operator-controller-manager

    The preceding command displays the name and status details of the Operator.

    For example:

    NAME           READY  UP-TO-DATE  AVAILABLE  AGE
    jws-operator   1/1    1           1          15h
    Note

    If you want to view more detailed output, you can use the following command:

    oc describe deployment.apps/jws-operator-controller-manager

Procedure

  1. Prepare your image and push it to the registry location where you want to store the image (for example, quay.io/<USERNAME>/tomcat-demo:latest).
  2. To create a custom resource file for your web server application, perform the following steps:

    1. Create a YAML file named, for example, webservers_cr.yaml.
    2. Enter details in the following format:

      apiVersion: web.servers.org/v1alpha1
      kind: WebServer
      metadata:
        name: <image name>
      spec:
        # Add fields here
        applicationName: <application name>
        replicas: 2
        webImage:
          applicationImage: <URL of the image>

      For example:

      apiVersion: web.servers.org/v1alpha1
      kind: WebServer
      metadata:
        name: example-image-webserver
      spec:
        # Add fields here
        applicationName: jws-app
        replicas: 2
        webImage:
          applicationImage: quay.io/<USERNAME>/tomcat-demo:latest
  3. To deploy your web application, perform the following steps:

    1. Go to the directory where you have created the web application.
    2. Enter the following command:

      $ oc apply -f webservers_cr.yaml

      The preceding command displays a message to confirm that the web application is deployed.

      For example:

      webserver/example-image-webserver created

      When you run the preceding command, the Operator also creates a route automatically.

  4. Verify the route that the Operator has automatically created:

    $ oc get routes
  5. Optional: Delete the webserver that you created in Step 3:

    $ oc delete webserver example-image-webserver
    Note

    Alternatively, you can delete the webserver by deleting the YAML file. For example:

    oc delete -f webservers_cr.yaml

Chapter 5. JWS Operator deletion from a cluster

If you no longer need to use the JWS Operator, you can subsequently delete the JWS Operator from a cluster.

You can delete the JWS Operator from a cluster in either of the following ways:

If you want to delete the JWS Operator by using a graphical user interface, you can use the OpenShift web console to delete the JWS Operator.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster by using an account with cluster admin permissions.

    Note

    If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators.

Procedure

  1. Open the web console and click Operators > Installed Operators.
  2. Select the Actions menu and click Uninstall Operator.

    Note

    The Uninstall Operator option automatically removes the Operator, any Operator deployments, and Pods.

    Deleting the Operator does not remove its custom resource definitions (CRDs) or custom resources (CRs). If the Operator has deployed applications on the cluster, or if the Operator has configured resources outside the cluster, you must clean up these applications and resources manually.

If you want to delete the JWS Operator by using a command-line interface, you can use the oc command-line tool to delete the JWS Operator.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster by using an account with cluster admin permissions.

    Note

    If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators.

  • You have installed the oc tool on your local system.

Procedure

  1. Check the current version of the subscribed Operator:

    $ oc get subscription jws-operator -n <project_name> -o yaml | grep currentCSV

    In the preceding example, replace <project_name> with the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, replace <project_name> with openshift-operators.

    The preceding command displays the following output, where v2.1.x refers to the Operator version (for example, v2.1.0):

    f:currentCSV: {}
    currentCSV: jws-operator.v2.1.x
  2. Delete the subscription for the Operator:

    $ oc delete subscription jws-operator -n <project_name>

    In the preceding example, replace <project_name> with the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, replace <project_name> with openshift-operators.

  3. Delete the CSV for the Operator in the target namespace:

    $ oc delete clusterserviceversion <currentCSV> -n <project_name>

    In the preceding example, replace <currentCSV> with the currentCSV value that you obtained in Step 1 (for example, jws-operator.v2.1.0). Replace <project_name> with the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, replace <project_name> with openshift-operators.

    The preceding command displays a message to confirm that the CSV is deleted.

    For example:

    clusterserviceversion.operators.coreos.com "jws-operator.v2.1.x" deleted

Chapter 6. Enabling level-2 seamless integration for new images

You can create an image stream in your project, create a custom resource (CR) for the web application that you want the Operator to deploy, and trigger an update to the image stream.

Procedure

  1. In your project namespace, create an image stream by using the oc import-image command to import the tag and other information for an image.

    For example:

    oc import-image <my-image>-imagestream:latest \
    --from=quay.io/$user/<my-image>:latest \
    --confirm

    In the preceding example, replace each occurrence of <my-image> with the name of the image that you want to import.

    The preceding command creates an image stream named <my-image>-imagestream by importing information for the quay.io/$user/<my-image> image. For more information about the format and management of image streams, see Managing image streams.

  2. Create a custom resource of the WebServer kind for the web application that you want the JWS Operator to deploy whenever the image stream is updated. You can define the custom resource in YAML file format.

    For example:

    apiVersion: web.servers.org/v1alpha1
    kind: WebServer
    metadata:
      name: <my-image>
    spec:
      # Add fields here
      applicationName: my-app
      useSessionClustering: true
      replicas: 2
      webImageStream:
        imageStreamNamespace: <project-name>
        imageStreamName: <my-image>-imagestream
  3. Trigger an update to the image stream by using the oc tag command.

    For example:

    oc tag quay.io/$user/<my-image> <my-image>-imagestream:latest --scheduled

    The preceding command causes OpenShift Container Platform to update the specified image stream tag periodically. This period is a cluster-wide setting that is set to 15 minutes by default.

Chapter 7. Creating a secret for a webhook

You can create a secret that you can use with a generic, GitHub, or GitLab webhook to trigger application builds in a Git repository. Depending on the type of Git hosting platform that you use for your application code, the JWS Operator provides webhookSecrets:generic, webhookSecrets:github, and webhookSecrets:gitlab parameters that you can use to specify the secret in the custom resource file for a web application.

Procedure

  1. Create a Base64-encoded secret string.

    For example:

    echo -n "qwerty" | base64

    The preceding command encodes a plain-text string, qwerty, and displays the encoded string.

    For example:

    cXdlcnR5
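    As an optional sanity check, you can decode the string again to confirm the round trip; the qwerty value is the example string from the previous step:

```shell
# Encode the plain-text secret string
encoded=$(echo -n "qwerty" | base64)
echo "$encoded"    # prints cXdlcnR5

# Decode the string again to verify the round trip
decoded=$(echo -n "$encoded" | base64 --decode)
echo "$decoded"    # prints qwerty
```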
  2. Create a secret.yaml file that defines an object of kind Secret.

    For example:

    kind: Secret
    apiVersion: v1
    metadata:
      name: jws-secret
    data:
      WebHookSecretKey: cXdlcnR5

    In the preceding example, jws-secret is the name of the secret and cXdlcnR5 is the encoded secret string.

  3. To create the secret, enter the following command:

    oc create -f secret.yaml

    The preceding command displays a message to confirm that the secret is created.

    For example:

    secret/jws-secret created

    Based on the preceding example, you can set the webhookSecrets:generic parameter to jws-secret.

Verification

  1. Get the URL for the webhook:

    oc describe BuildConfig | grep webhooks

    The preceding command generates the webhook URL in the following format:

    https://<host>:<port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
  2. To send a request to the webhook, enter the following curl command:

    curl -k -X POST https://<host>:<port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic

    In the preceding command, replace <host>, <port>, <namespace>, and <name> in the URL string with values that are appropriate for your environment. Replace <secret> with the plain-text secret string (for example, qwerty).

    The preceding command generates the following type of webhook response in JSON format and the build is triggered:

    {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-2","namespace":"jfc","selfLink":"/apis/build.openshift.io/v1/namespaces/jfc/buildconfigs/test-2/instantiate","uid":"a72dd529-edc6-4e1c-898e-7c0dbbea176e","resourceVersion":"846159","creationTimestamp":"2020-10-30T12:29:30Z","labels":{"application":"test","buildconfig":"test","openshift.io/build-config.name":"test","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test","openshift.io/build.number":"2"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test","uid":"1f78fa3f-2f3b-421b-9f49-192184cc2280","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2020-10-30T12:29:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:application":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f78fa3f-2f3b-421b-9f49-192184cc2280\"}":{".":{},"f:apiVersion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:output":{"f:to":{".":{},"f:kind":{},"f:name":{}}},"f:serviceAccount":{},"f:source":{"f:contextDir":{},"f:git":{".":{},"f:ref":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:forcePull":{},"f:from":{".":{},"f:kind":{},"f:name":{}},"f:pullSecret":{".":{},"f:name":{}}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{".":{},"f:kind":{},"f:name":{},"f:namespace":{}},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/jfclere/demo-webapp.git","ref":"master"},"contextDir":"/"},"strategy"
:{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/jfc/jboss-webserver54-tomcat9-openshift@sha256:75dcdf81011e113b8c8d0a40af32dc705851243baa13b68352706154174319e7"},"pullSecret":{"name":"builder-dockercfg-rvbh8"},"env":[{"name":"MAVEN_MIRROR_URL"},{"name":"ARTIFACT_DIR"}],"forcePull":true}},"output":{"to":{"kind":"ImageStreamTag","name":"test:latest"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Generic WebHook","genericWebHook":{"secret":"\u003csecret\u003e"}}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"jfc","name":"test"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2020-10-30T12:29:30Z","lastTransitionTime":"2020-10-30T12:29:30Z"}]}}
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Success",
      "message": "invalid Content-Type on payload, ignoring payload and continuing with build",
      "code": 200
    }
    Note

    If a User "system:anonymous" cannot create resource error occurs, you can resolve this error either by adding unauthenticated users to the system:webhook role binding or by creating a token and running the curl command with that token.

    For example, to create a token and run the curl command:

    TOKEN=`oc create token builder`
    
    curl -H "Authorization: Bearer $TOKEN" -k -X POST https://<host>:<port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
  3. If you want to use the webhook in GitHub:

    1. In your GitHub project, select Settings > Webhooks > Add webhook.
    2. In the Payload URL field, add the URL.
    3. Set the content type to application/json.
    4. Disable SSL verification, if necessary.
    5. Click Add webhook.

    For more information, see https://docs.openshift.com/container-platform/4.6/builds/triggering-builds-build-hooks.html.

Chapter 8. Enabling monitoring for user-defined projects

Before the JWS Operator 2.3 release, the Operator automatically managed the cluster-wide configuration that the monitoring system requires. However, from JWS Operator 2.3 onward, users are responsible for managing the cluster-wide configuration for monitoring.

When using JWS Operator 2.3 or later, you must create a cluster-monitoring-config ConfigMap. To enable monitoring for user-defined projects in the cluster, you must edit the ConfigMap to include an enableUserWorkload entry that is set to true, as shown in the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

For more information about the prerequisites and steps for completing this task, see Configuring user workload monitoring: Enabling monitoring for user-defined projects in the Red Hat OpenShift Container Platform documentation.

Chapter 9. JWS Operator CRD parameters

The JWS Operator provides a set of custom resource definition (CRD) parameters. When you create a custom resource WebServer file for a web application, you can specify parameter values in a <key>: <value> format. The JWS Operator uses the information that you specify in the custom resource WebServer file to deploy the web application.

9.1. CRD parameter hierarchy

The JWS Operator provides CRD parameters in the following hierarchical format:

applicationName: <value>
replicas: <value>
useSessionClustering: <value>
webImage:
   applicationImage: <value>
   imagePullSecret: <value>
   webApp:
      name: <value>
      sourceRepositoryURL: <value>
      sourceRepositoryRef: <value>
      contextDir: <value>
      webAppWarImage: <value>
      webAppWarImagePushSecret: <value>
      builder:
        image: <value>
        imagePullSecret: <value>
        applicationBuildScript: <value>
   webServerHealthCheck:
      serverReadinessScript: <value>
      serverLivenessScript: <value>
webImageStream:
   imageStreamName: <value>
   imageStreamNamespace: <value>
   webSources:
      sourceRepositoryUrl: <value>
      sourceRepositorySecret: <value>
      sourceRepositoryRef: <value>
      contextDir: <value>
      webhookSecrets:
         generic: <value>
         github: <value>
         gitlab: <value>
      webSourcesParams:
         mavenMirrorUrl: <value>
         artifactDir: <value>
         genericWebhookSecret: <value>  [Deprecated in 2.1 release]
         githubWebhookSecret: <value>  [Deprecated in 2.1 release]
   webServerHealthCheck:
      serverReadinessScript: <value>
      serverLivenessScript: <value>
tlsConfig:
   routeHostname: <value>
   certificateVerification: <value>
   tlsSecret: <value>
   tlsPassword: <value>
environmentVariables
persistentLogs:
   catalinaLogs: <value>
   enableAccessLogs: <value>
   volumeName: <value>
   storageClass: <value>
podResources
securityContext
volumeSpec:
   persistentVolumeClaims:
      - <value1>
      - <value2>
      - ...
   secrets:
      - <value1>
      - <value2>
      - ...
   configMaps:
      - <value1>
      - <value2>
      - ...
   volumeClaimTemplates
useInsightsClient [Technology Preview only]
Note

When you create a custom resource WebServer file, specify parameter names and values in the same hierarchical format that the preceding example outlines. For more information about creating a custom resource WebServer file, see Deploying an existing JWS image.

The genericWebhookSecret and githubWebhookSecret parameters are deprecated in the JWS Operator 2.1 release. The useInsightsClient parameter is a Technology Preview feature only.
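
For illustration only, the following sketch combines several of the parameters from this hierarchy into one custom resource WebServer file; the application name, namespace, and image stream name are placeholder values:

```yaml
apiVersion: web.servers.org/v1alpha1
kind: WebServer
metadata:
  name: my-webserver
spec:
  applicationName: my-app
  replicas: 2
  useSessionClustering: false
  webImageStream:
    imageStreamNamespace: my-namespace
    imageStreamName: my-image-imagestream
```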

9.2. CRD parameter details

The following table describes the CRD parameters that the JWS Operator provides. This table shows each parameter name in the context of any higher-level parameters that are above it in the hierarchy.


replicas

The number of pods of the JBoss Web Server image that you want to run

For example:

replicas: 2

applicationName

The name of the web application that you want the JWS Operator to deploy

The application name must be a unique value in the OpenShift namespace or project. The JWS Operator uses the application name that you specify to create the route to access the web application.

For example:

applicationName: my-app

useSessionClustering

Enables DNS-based session clustering

This parameter is set to false by default. If you set this parameter to true, the image must be based on JBoss Web Server images, because session clustering uses the ENV_FILES environment variable and a shell script to add the clustering configuration to the server.xml file.

Note: In this release, the session clustering functionality is available as a Technology Preview feature only. The current Operator version uses the DNS Membership Provider, which is limited because of DNS restrictions: InetAddress.getAllByName() results are cached, which means session replication might not work while scaling up.

For example:

useSessionClustering: true

webImage

A set of parameters that controls how the JWS Operator deploys pods from existing images

This parameter contains applicationImage, imagePullSecret, webApp, and webServerHealthCheck fields.

webImage:

      applicationImage

The full path to the name of the application image that you want to deploy

For example:

applicationImage: quay.io/$user/my-image-name

webImage:

      imagePullSecret

The name of the secret that the JWS Operator uses to pull images from the repository

The secret must contain the key .dockerconfigjson. The JWS Operator mounts the secret and uses it in the same way as --authfile /mount_point/.dockerconfigjson to pull images from the repository.

The Secret object definition file might contain several username and password values or tokens to allow access to images in the image stream, the builder image, and images built by the JWS Operator.

For example:

imagePullSecret: mysecret

webImage:

      webApp

A set of parameters that describe how the JWS Operator builds the web application that you want to add to the application image

If you do not specify the webApp parameter, the JWS Operator deploys the web application without building the application.

This parameter contains name, sourceRepositoryURL, sourceRepositoryRef, contextDir, webAppWarImage, webAppWarImagePushSecret, and builder fields.

webImage:

      webApp:

           name

The name of the web application file

The default name is ROOT.war.

For example:

name: my-app.war

webImage:

      webApp:

           sourceRepositoryURL

The URL where the application source files are located

The source should contain a Maven pom.xml file to support a Maven build. When Maven generates a .war file for the application, the .war file is copied to the webapps directory of the image that the JWS Operator uses to deploy the application (for example, /opt/jws-5.x/tomcat/webapps).

For example:

sourceRepositoryURL: 'https://github.com/$user/demo-webapp.git'

webImage:

      webApp:

           sourceRepositoryRef

The branch of the source repository that the JWS Operator uses

For example:

sourceRepositoryRef: main

webImage:

      webApp:

           contextDir

The subdirectory in the source repository where the pom.xml file is located and the mvn install command is run

For example:

contextDir: /

webImage:

      webApp:

           webAppWarImage

The URL of the image to which the JWS Operator pushes the built application image

webImage:

      webApp:

           webAppWarImagePushSecret

The name of the secret that the JWS Operator uses to push images to the repository

The secret must contain the key .dockerconfigjson. The JWS Operator mounts the secret and uses it in the same way as --authfile /mount_point/.dockerconfigjson to push the image to the repository.

If the JWS Operator uses a pull secret to pull images from the repository, you must specify the name of the pull secret as the value for the webAppWarImagePushSecret parameter. See imagePullSecret for more information.

For example:

webAppWarImagePushSecret: mysecret

webImage:

      webApp:

           builder

A set of parameters that describe how the JWS Operator builds the web application and creates and pushes the image to the image repository

To ensure that the builder can operate successfully and run commands with different user IDs, the builder must have access to the anyuid SCC (security context constraint). To grant the builder access to the anyuid SCC, enter the following command:

oc adm policy add-scc-to-user anyuid -z builder

This parameter contains image, imagePullSecret, and applicationBuildScript fields.

webImage:

      webApp:

           builder:

                image

The image of the container where the JWS Operator builds the web application

For example:

image: quay.io/$user/tomcat10-buildah

webImage:

      webApp:

           builder:

                imagePullSecret

The name of the secret (if specified) that the JWS Operator uses to pull the builder image from the repository

The secret must contain the key .dockerconfigjson. The JWS Operator mounts the secret and uses it in the same way as --authfile /mount_point/.dockerconfigjson to pull the builder image from the repository.

The Secret object definition file might contain several username and password values or tokens to allow access to images in the image stream, the builder image, and images built by the JWS Operator.

For example:

imagePullSecret: mysecret

webImage:

      webApp:

           builder:

                applicationBuildScript

The script that the builder image uses to build the application .war file and move it to the /mnt directory

If you do not specify a value for this parameter, the builder image uses a default script that uses Maven and Buildah.

webImage:

      webServerHealthCheck

The health check that the JWS Operator uses

The default behavior is to use the health valve, which does not require any parameters.

This parameter contains serverReadinessScript and serverLivenessScript fields.

webImage:

      webServerHealthCheck:

           serverReadinessScript

A string that specifies the logic for the pod readiness health check

If this parameter is not specified, the JWS Operator uses the default health check, which checks http://localhost:8080/health.

For example:

serverReadinessScript: /bin/bash -c " /usr/bin/curl --noproxy '*' -s 'http://localhost:8080/health' | /usr/bin/grep -i 'status.*UP'"

webImage:

      webServerHealthCheck:

           serverLivenessScript

A string that specifies the logic for the pod liveness health check

This parameter is optional.

webImageStream

A set of parameters that control how the JWS Operator uses an image stream that provides images to run or to build upon

The JWS Operator uses the latest image in the image stream.

This parameter contains imageStreamName, imageStreamNamespace, webSources, and webServerHealthCheck fields.

webImageStream:

      imageStreamName

The name of the image stream that you have created to allow the JWS Operator to find the base images

For example:

imageStreamName: my-image-name-imagestream:latest

webImageStream:

      imageStreamNamespace

The namespace or project where you have created the image stream

For example:

imageStreamNamespace: my-namespace

webImageStream:

      webSources

A set of parameters that describe where the application source files are located and how to build them

If you do not specify the webSources parameter, the JWS Operator deploys the latest image in the image stream.

This parameter contains sourceRepositoryUrl, sourceRepositorySecret, sourceRepositoryRef, contextDir, webhookSecrets, and webSourcesParams fields.

webImageStream:

      webSources:

           sourceRepositoryUrl

The URL where the application source files are located

The source should contain a Maven pom.xml file to support a Maven build. When Maven generates a .war file for the application, the .war file is copied to the webapps directory of the image that the JWS Operator uses to deploy the application (for example, /opt/jws-5.x/tomcat/webapps).

For example:

sourceRepositoryUrl: 'https://github.com/$user/demo-webapp.git'

webImageStream:

      webSources:

           sourceRepositorySecret

The secret for the repository of the application source files

You can use this parameter to grant access to application source files that are contained in a private Git repository.

For example:

sourceRepositorySecret: <secret>

For more information, see Builds using BuildConfig: Source clone secrets in the Red Hat OpenShift Container Platform documentation.

webImageStream:

      webSources:

           sourceRepositoryRef

The branch of the source repository that the JWS Operator uses

For example:

sourceRepositoryRef: main

webImageStream:

      webSources:

           contextDir

The subdirectory in the source repository where the pom.xml file is located and the mvn install command is run

For example:

contextDir: /

webImageStream:

      webSources:

           webhookSecrets

A set of parameters that specify secret names for triggering a build through a webhook

This parameter contains generic, github, and gitlab fields.

webImageStream:

      webSources:

           webhookSecrets:

                generic

The name of a secret for a generic webhook that can trigger a build

For more information about creating a secret, see Creating a secret for a webhook.

For more information about using generic webhooks, see Webhook Triggers.

For example:

generic: jws-secret

webImageStream:

      webSources:

           webhookSecrets:

                github

The name of a secret for a GitHub webhook that can trigger a build

For more information about creating a secret, see Creating a secret for a webhook.

For more information about using GitHub webhooks, see Webhook Triggers.

For example:

github: jws-secret

webImageStream:

      webSources:

           webhookSecrets:

                gitlab

The name of a secret for a GitLab webhook that can trigger a build

For more information about creating a secret, see Creating a secret for a webhook.

For more information about using GitLab webhooks, see Webhook Triggers.

For example:

gitlab: jws-secret

webImageStream:

      webSources:

           webSourcesParams

A set of parameters that describe how to build the application images

This parameter is optional.

This parameter contains mavenMirrorUrl, artifactDir, genericWebhookSecret, and githubWebhookSecret fields.

Note: The genericWebhookSecret and githubWebhookSecret fields are deprecated in the JWS Operator 2.1 release.

webImageStream:

      webSources:

           webSourcesParams:

                mavenMirrorUrl

The Maven proxy URL that Maven uses to build the web application

This parameter is required if the cluster does not have internet access.

webImageStream:

      webSources:

           webSourcesParams:

                artifactDir

The directory where Maven stores the .war file that Maven generates for the web application

The contents of this directory are copied to the webapps directory of the image that the JWS Operator uses to deploy the application (for example, /opt/jws-5.x/tomcat/webapps).

The default value is target.

webImageStream:

      webSources:

           webSourcesParams:

                genericWebhookSecret

Important: This parameter is deprecated in the 2.1 release. Use the webhookSecrets:generic parameter instead.

A webhook secret string

For more information about creating a secret, see Creating a secret for a webhook.

For more information about using generic webhooks, see Webhook Triggers.

For example:

genericWebhookSecret: qwerty

webImageStream:

      webSources:

           webSourcesParams:

                githubWebhookSecret

Important: This parameter is deprecated in the 2.1 release. Use the webhookSecrets:github parameter instead.

A webhook secret string specific to GitHub

For more information about creating a secret, see Creating a secret for a webhook.

For more information about using GitHub webhooks, see Webhook Triggers.

Note: You cannot test a GitHub webhook manually, because GitHub generates the payload, which is not empty.

webImageStream:

      webServerHealthCheck

The health check that the JWS Operator uses

The default behavior is to use the health valve, which does not require any parameters.

This parameter contains serverReadinessScript and serverLivenessScript fields.

webImageStream:

      webServerHealthCheck:

           serverReadinessScript

A string that specifies the logic for the pod readiness health check

If this parameter is not specified, the JWS Operator uses the default health check, which checks http://localhost:8080/health.

For example:

serverReadinessScript: /bin/bash -c " /usr/bin/curl --noproxy '*' -s 'http://localhost:8080/health' | /usr/bin/grep -i 'status.*UP'"

webImageStream:

      webServerHealthCheck:

           serverLivenessScript

A string that specifies the logic for the pod liveness health check

This parameter is optional.

tlsConfig

A set of parameters that specify the TLS configuration for a web server

This parameter contains routeHostname, certificateVerification, tlsSecret, and tlsPassword fields.

tlsConfig:

      routeHostname

Indicates whether the Operator creates a route and whether that route uses TLS

Supported values are NONE and tls:

  • If you specify NONE, the Operator does not create the route. In this situation, you must create the route manually.
  • If you specify tls, the Operator creates a passthrough route to the web server.

For example:

routeHostname: NONE

tlsConfig:

      certificateVerification

Indicates whether the Operator should use the TLS connector with a client certificate

Supported values are required, optional, or empty.

For more information, see the Apache Tomcat HTTP Connector documentation about the certificateVerification attribute.

For example:

certificateVerification: required

tlsConfig:

      tlsSecret

The secret that contains the server certificate (server.cert), the server key (server.key), and, optionally, the CA certificate for client certificates (ca.cert)

For example:

tlsSecret: tlssecret

tlsConfig:

      tlsPassword

The passphrase used to protect the server key (server.key)

For example:

tlsPassword: changeit
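
Taken together, a tlsConfig block that creates a passthrough TLS route with required client certificate verification might look like the following sketch; the secret name and password reuse the example values above:

```yaml
tlsConfig:
  routeHostname: tls
  certificateVerification: required
  tlsSecret: tlssecret
  tlsPassword: changeit
```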

environmentVariables

Environment variables for deployment
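
The table does not show an example for this parameter. Assuming that environmentVariables follows the standard Kubernetes EnvVar list format, an entry might look like the following sketch; the variable name and value are placeholders:

```yaml
environmentVariables:
  - name: MAVEN_MIRROR_URL
    value: 'http://mirror.example.com/repository/maven-public/'
```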

persistentLogs

A set of parameters that specify persistent volume and logging configuration

This parameter contains catalinaLogs, enableAccessLogs, volumeName, and storageClass fields.

persistentLogs:

      catalinaLogs

Indicates whether the catalina.out log file for every pod is saved in a persistent volume, to remain available after a possible pod failure.

Supported values are true or false.

For example:

catalinaLogs: true

persistentLogs:

      enableAccessLogs

Indicates whether the access_log log file for every pod is saved in a persistent volume, to remain available after a possible pod failure.

Supported values are true or false.

For example:

enableAccessLogs: true

persistentLogs:

      volumeName

Name of the persistent volume that is used to store the log files

For example:

volumeName: pv0000

persistentLogs:

      storageClass

Name of the storage class of the persistent volume that is used to store the log files

For example:

storageClass: nfs-client
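
Combining the fields above, a persistentLogs block that stores both log types on a named persistent volume might look as follows; the volume name and storage class reuse the example values above:

```yaml
persistentLogs:
  catalinaLogs: true
  enableAccessLogs: true
  volumeName: pv0000
  storageClass: nfs-client
```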

podResources

Specifies the configuration of the CPU and memory resources that the web server uses

These values must be categorized under limits and requests.

For example:

podResources:
   limits:
      cpu: 500m
   requests:
      cpu: 200m

These values are used for autoscaling. For more information about autoscaling, see Automatically scaling pods with the horizontal pod autoscaler.

securityContext

Defines the security capabilities that are required to run the application

volumeSpec

A set of parameters that specify the volumes to be mounted

This parameter contains persistentVolumeClaims, secrets, configMaps, and volumeClaimTemplates fields.

volumeSpec:

      persistentVolumeClaims

A list of the names of persistent volume claims (PVCs) that are to be mounted to the /volumes directory

For example:

persistentVolumeClaims:
   - <pvc1>
   - <pvc2>

volumeSpec:

      secrets

A list of the names of secrets that are to be mounted to the /secrets directory

For example:

secrets:
   - <secret1>
   - <secret2>

volumeSpec:

      configMaps

A list of the names of config maps that are to be mounted to the /configmaps directory

For example:

configMaps:
   - <configmap1>
   - <configmap2>

volumeSpec:

      volumeClaimTemplates

A list of PersistentVolumeClaimSpec properties for stateful applications

For more information about PersistentVolumeClaimSpec, see Storage APIs: PersistentVolumeClaim [v1].
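
For illustration, assuming that each entry follows the standard PersistentVolumeClaimSpec format, a single volume claim template might look like the following sketch; the claim name and storage size are placeholders:

```yaml
volumeSpec:
  volumeClaimTemplates:
    - metadata:
        name: app-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```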

useInsightsClient

Indicates whether to create a connection with the runtimes inventory operator that Red Hat provides

Supported values are true or false.

For example:

useInsightsClient: true

You can enable debug logging for the Insights client by setting the INSIGHTS_DEBUG environment variable to true.

Note: The useInsightsClient parameter requires use of a Red Hat JBoss Web Server 6.1 or later image. This parameter is a Technology Preview feature only.

Support contact information

For general support queries, see https://access.redhat.com/support.

To report a potential security issue with Red Hat JBoss Web Server, see https://access.redhat.com/security/team/contact.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the OpenStack Foundation, used under license.
All other trademarks are the property of their respective owners.