
Chapter 12. Using images


12.1. Using images overview

Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for OpenShift Container Platform users.

Red Hat official container images are provided in the Red Hat Registry at registry.redhat.io. OpenShift Container Platform’s supported S2I, database, and Jenkins images are provided in the openshift4 repository in the Red Hat Quay Registry. For example, quay.io/openshift-release-dev/ocp-v4.0-<address> is the name of the OpenShift Application Platform image.

The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry, but are suffixed with -openshift. For example, registry.redhat.io/jboss-eap-6/eap64-openshift is the name of the JBoss EAP image.

All Red Hat supported images covered in this section are described in the Container images section of the Red Hat Ecosystem Catalog. For every version of each image, you can find details on its contents and usage. Browse or search for the image that interests you.

Important

The newer versions of container images are not compatible with earlier versions of OpenShift Container Platform. Verify and use the correct version of container images, based on your version of OpenShift Container Platform.

12.2. Configuring Jenkins images

OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery.

The image is based on the Red Hat Universal Base Images (UBI).

OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x.

The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io.

For example:

$ podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>

To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream.

For convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image, as well as for the example agent images provided for OpenShift Container Platform integration with Jenkins.
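For example, you can list the Jenkins-related image streams in the openshift namespace with the following command. This is a sketch; the exact image stream names depend on your cluster and version.

$ oc get imagestreams -n openshift | grep jenkins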

12.2.1. Configuration and customization

You can manage Jenkins authentication in two ways:

  • OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plug-in.
  • Standard authentication provided by Jenkins.

12.2.1.1. OpenShift Container Platform OAuth authentication

OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false. This activates the OpenShift Container Platform Login plug-in, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server.
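For example, assuming your Jenkins deployment configuration is named jenkins, you might enable OAuth with a command such as the following sketch:

$ oc set env dc/jenkins OPENSHIFT_ENABLE_OAUTH=true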

Valid credentials are controlled by the OpenShift Container Platform identity provider.

Jenkins supports both browser and non-browser access.

Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin, edit, and view. The login plug-in executes self-SAR requests against those roles in the project or namespace that Jenkins is running in.

Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions.

The default OpenShift Container Platform admin, edit, and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable.

When running Jenkins in an OpenShift Container Platform pod, the login plug-in looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in.

If the plug-in finds and can read that config map, you can define the role to Jenkins permission mappings. Specifically:

  • The login plug-in treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings.
  • The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character.
  • If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer.
  • To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and note the IDs for the groups and individual permissions in the table it provides.
  • The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma.
  • If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins. A sample config map that illustrates this mapping follows this list.
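A minimal sketch of such a config map, based on the example mapping above. The jenkins role is assumed to be a custom role that you have created; create the config map in the namespace that Jenkins is running in.

kind: ConfigMap
apiVersion: v1
metadata:
  name: openshift-jenkins-login-plugin-config
data:
  Overall-Administer: admin,edit,jenkins  # "jenkins" is a custom role you created; remove it if you did not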
Note

The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions, the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user.

The permissions stored in Jenkins for a user can be changed after the user is initially established. The OpenShift Container Platform Login plug-in polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plug-in polls OpenShift Container Platform.

You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes.

The easiest way to create a new Jenkins service using OAuth authentication is to use a template.
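For example, the following sketch instantiates the persistent template with OAuth explicitly enabled, assuming the template exposes an ENABLE_OAUTH parameter (true by default in the provided templates):

$ oc new-app jenkins-persistent -p ENABLE_OAUTH=true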

12.2.1.2. Jenkins authentication

Jenkins authentication is used by default if the image is run directly, without using a template.

The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password. Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication.

Procedure

  • Create a Jenkins application that uses standard Jenkins authentication:

    $ oc new-app -e \
        JENKINS_PASSWORD=<password> \
        openshift4/ose-jenkins

12.2.2. Jenkins environment variables

The Jenkins server can be configured with the following environment variables:

For each of the following variables, the definition is followed by its default or example values and settings.

OPENSHIFT_ENABLE_OAUTH

Determines whether the OpenShift Container Platform Login plug-in manages authentication when logging in to Jenkins. To enable, set to true.

Default: false

JENKINS_PASSWORD

The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true.

Default: password

JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT, JENKINS_MAX_HEAP_UPPER_BOUND_MB

These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB.

By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap.

JAVA_MAX_HEAP_PARAM example setting: -Xmx512m

CONTAINER_HEAP_PERCENT default: 0.5, or 50%

JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB

JAVA_INITIAL_HEAP_PARAM, CONTAINER_INITIAL_PERCENT

These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size.

By default, the JVM sets the initial heap size.

JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m

CONTAINER_INITIAL_PERCENT example setting: 0.1, or 10%

CONTAINER_CORE_LIMIT

If set, specifies an integer number of cores used for sizing numbers of internal JVM threads.

Example setting: 2

JAVA_TOOL_OPTIONS

Specifies options to apply to all JVMs running in this container. It is not recommended to override this value.

Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true

JAVA_GC_OPTS

Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value.

Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90

JENKINS_JAVA_OVERRIDES

Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash.

Example settings: -Dfoo -Dbar; -Dfoo=first\ value -Dbar=second\ value.

JENKINS_OPTS

Specifies arguments to Jenkins.

INSTALL_PLUGINS

Specifies additional Jenkins plug-ins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true. Plug-ins are specified as a comma-delimited list of name:version pairs.

Example setting: git:3.7.0,subversion:2.10.2.

OPENSHIFT_PERMISSIONS_POLL_INTERVAL

Specifies the interval in milliseconds that the OpenShift Container Platform Login plug-in polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins.

Default: 300000 milliseconds (5 minutes)

OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG

When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true.

Default: false

OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS

When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plug-ins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plug-ins in the custom image after the initial startup, the plug-ins are not copied over unless you set this environment variable to true.

Default: false

ENABLE_FATAL_ERROR_LOG_FILE

When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. The fatal error file is saved at /var/lib/jenkins/logs.

Default: false

NODEJS_SLAVE_IMAGE

Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named jenkins-agent-nodejs is in the project. This variable must be set before Jenkins starts the first time for it to have an effect.

Default Node.js agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-nodejs:latest

MAVEN_SLAVE_IMAGE

Setting this value overrides the image used for the default maven agent pod configuration. A related image stream tag named jenkins-agent-maven is in the project. This variable must be set before Jenkins starts the first time for it to have an effect.

Default Maven agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven:latest
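You can set these variables on the Jenkins deployment configuration; they take effect the next time the container starts. A sketch, assuming the deployment configuration is named jenkins and reusing the example plug-in versions from the table above:

$ oc set env dc/jenkins INSTALL_PLUGINS=git:3.7.0,subversion:2.10.2

Variables such as NODEJS_SLAVE_IMAGE and MAVEN_SLAVE_IMAGE only have an effect if they are set before Jenkins starts for the first time, as noted above.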

12.2.3. Providing Jenkins cross project access

If you run Jenkins in a project other than the one it needs to access, you must provide Jenkins with an access token for that project.

Procedure

  1. Identify the secret for the service account that has appropriate permissions to access the project Jenkins must access:

    $ oc describe serviceaccount jenkins

    Example output

    Name:       default
    Labels:     <none>
    Secrets:    {  jenkins-token-uyswp    }
                {  jenkins-dockercfg-xcr3d    }
    Tokens:     jenkins-token-izv1u
                jenkins-token-uyswp

    In this case the secret is named jenkins-token-uyswp.

  2. Retrieve the token from the secret:

    $ oc describe secret <secret name from above>

    Example output

    Name:       jenkins-token-uyswp
    Labels:     <none>
    Annotations:    kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7
    Type:   kubernetes.io/service-account-token
    Data
    ====
    ca.crt: 1066 bytes
    token:  eyJhbGc..<content cut>....wRA

    The token parameter contains the token value Jenkins requires to access the project.
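    You can then configure this token in Jenkins, for example as a credential for the OpenShift Client plug-in. For the token to be accepted, the service account also needs a role in the target project. A sketch, assuming the service account is named jenkins:

    $ oc policy add-role-to-user edit \
        system:serviceaccount:<jenkins-project>:jenkins \
        -n <target-project>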

12.2.4. Jenkins cross volume mount points

The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration:

  • /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions.
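If you run the image directly rather than from the jenkins-persistent template, you can mount a persistent volume claim at this path yourself. A sketch, assuming the deployment configuration is named jenkins and a claim named jenkins-data already exists:

$ oc set volume dc/jenkins --add \
    --name=jenkins-data \
    --type=persistentVolumeClaim \
    --claim-name=jenkins-data \
    --mount-path=/var/lib/jenkins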

12.2.5. Customizing the Jenkins image through source-to-image

To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder.

You can use S2I to copy your custom Jenkins jobs definitions, add additional plug-ins, or replace the provided config.xml file with your own, custom, configuration.

To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure:

plugins
This directory contains those binary Jenkins plug-ins you want to copy into Jenkins.
plugins.txt
This file lists the plug-ins you want to install using the following syntax:
pluginId:pluginVersion
configuration/jobs
This directory contains the Jenkins job definitions.
configuration/config.xml
This file contains your custom Jenkins configuration.

The contents of the configuration/ directory are copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml, there.
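A repository that follows this layout might look like the following sketch; the plug-in binary and job names are illustrative, and each Jenkins job definition lives in its own <job-name>/config.xml file:

plugins/
    my-plugin.jpi
plugins.txt
configuration/
    config.xml
    credentials.xml
    jobs/
        my-job/
            config.xml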

Sample build configuration that customizes the Jenkins image in OpenShift Container Platform

apiVersion: v1
kind: BuildConfig
metadata:
  name: custom-jenkins-build
spec:
  source:                       1
    git:
      uri: https://github.com/custom/repository
    type: Git
  strategy:                     2
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins:2
        namespace: openshift
    type: Source
  output:                       3
    to:
      kind: ImageStreamTag
      name: custom-jenkins:latest

1
The source parameter defines the source Git repository with the layout described above.
2
The strategy parameter defines the original Jenkins image to use as a source image for the build.
3
The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image.
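To run the customization, you can save the build configuration to a file and start a build. A sketch, assuming the file is named custom-jenkins-build.yaml:

$ oc create -f custom-jenkins-build.yaml
$ oc start-build custom-jenkins-build -F

The resulting custom-jenkins:latest image stream tag can then be referenced by your Jenkins deployment instead of the official image.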

12.2.6. Configuring the Jenkins Kubernetes plug-in

The OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plug-in that allows Jenkins agents to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform.

To use the Kubernetes plug-in, OpenShift Container Platform provides images that are suitable for use as Jenkins agents including the Base, Maven, and Node.js images.

Both the Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plug-in. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image.

The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plug-in.

With the OpenShift Container Platform sync plug-in, on Jenkins startup the Jenkins image searches for the following within the project that it is running in, or within the projects specifically listed in the plug-in’s configuration:

  • Image streams that have the label role set to jenkins-agent.
  • Image stream tags that have the annotation role set to jenkins-agent.
  • Config maps that have the label role set to jenkins-agent.

When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plug-in configuration so you can assign your Jenkins jobs to run in a pod that runs the container image that is provided by the image stream.

The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plug-in pod template. You can control the label field of the Kubernetes plug-in pod template by setting an annotation on the image stream or image stream tag object with the key agent-label. Otherwise, the name is used as the label.
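For example, you might label and annotate an existing image stream so that the sync plug-in generates a pod template for it. This is a sketch; my-agent and my-agent-label are placeholder names:

$ oc label imagestream my-agent role=jenkins-agent -n <project>
$ oc annotate imagestream my-agent agent-label=my-agent-label -n <project>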

Note

Do not log in to the Jenkins console and modify the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plug-in detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration.

Consider the config map approach if you have more complex configuration needs.

When it finds a config map with the appropriate label, it assumes that the values in the key-value data payload of the config map contain Extensible Markup Language (XML) that is consistent with the configuration format for Jenkins and the Kubernetes plug-in pod templates. A key differentiator to note when using config maps, instead of image streams or image stream tags, is that you can control all the parameters of the Kubernetes plug-in pod template.

Sample config map for jenkins-agent

kind: ConfigMap
apiVersion: v1
metadata:
  name: jenkins-agent
  labels:
    role: jenkins-agent
data:
  template1: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <inheritFrom></inheritFrom>
      <name>template1</name>
      <instanceCap>2147483647</instanceCap>
      <idleMinutes>0</idleMinutes>
      <label>template1</label>
      <serviceAccount>jenkins</serviceAccount>
      <nodeSelector></nodeSelector>
      <volumes/>
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image>
          <privileged>false</privileged>
          <alwaysPullImage>true</alwaysPullImage>
          <workingDir>/tmp</workingDir>
          <command></command>
          <args>${computer.jnlpmac} ${computer.name}</args>
          <ttyEnabled>false</ttyEnabled>
          <resourceRequestCpu></resourceRequestCpu>
          <resourceRequestMemory></resourceRequestMemory>
          <resourceLimitCpu></resourceLimitCpu>
          <resourceLimitMemory></resourceLimitMemory>
          <envVars/>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
      <envVars/>
      <annotations/>
      <imagePullSecrets/>
      <nodeProperties/>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

Note

If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container Platform Sync plug-in detects that the config map has changed, it will replace the pod template and overwrite those configuration changes. You cannot merge a new configuration with the existing configuration.

After it is installed, the OpenShift Container Platform Sync plug-in monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plug-in.

The following rules apply:

  • Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plug-in.
  • If those objects are removed, the corresponding configuration is removed from the Kubernetes plug-in.
  • Creating appropriately labeled or annotated ConfigMap, ImageStream, or ImageStreamTag objects, or adding the labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plug-in configuration.
  • For a PodTemplate created from a config map, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plug-in configuration, and override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map.

To use a container image as a Jenkins agent, the image must run the agent as an entrypoint. For more details about this, refer to the official Jenkins documentation.

12.2.7. Jenkins permissions

In the config map, the <serviceAccount> element of the pod template XML specifies the OpenShift Container Platform service account used for the resulting pod. The service account credentials are mounted into the pod, and the permissions associated with the service account control which operations against the OpenShift Container Platform master are allowed from the pod.

Consider the following scenario with service accounts used for pods that are launched by the Kubernetes plug-in running in the OpenShift Container Platform Jenkins image.

If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted.

The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master.

  • Any pod templates that are automatically discovered by the OpenShift Container Platform sync plug-in because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account.
  • For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plug-in, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plug-in, or labeling a config map whose data is the XML configuration for a pod template.
  • If you do not specify a value for the service account, the default service account is used.
  • Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod, for example by creating the service account and granting roles as sketched below.
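A sketch of creating a dedicated service account for agent pods and granting it the edit role in its project; the service account name is a placeholder, and the pod template would then reference it in its <serviceAccount> element:

$ oc create serviceaccount jenkins-agent -n <project>
$ oc policy add-role-to-user edit -z jenkins-agent -n <project>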

12.2.8. Creating a Jenkins service from a template

Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup.

The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart.

Note

A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment.

  • jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing.
  • jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart.

To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment.

After you select which template you want, you must instantiate the template to be able to use Jenkins.

Procedure

  1. Create a new Jenkins application using one of the following methods:

    • A PV:

      $ oc new-app jenkins-persistent
    • Or an emptyDir type volume where configuration does not persist across pod restarts:

      $ oc new-app jenkins-ephemeral
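To review which parameters a template accepts before instantiating it, you can inspect the template. A sketch, assuming the templates are registered in the openshift project:

$ oc process --parameters -n openshift jenkins-persistent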

12.2.9. Using the Jenkins Kubernetes plug-in

In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig, openshift-jee-sample-docker, to run. The second BuildConfig layers the new WAR file into a container image.

Sample BuildConfig that uses the Jenkins Kubernetes plug-in

kind: List
apiVersion: v1
items:
- kind: ImageStream
  apiVersion: v1
  metadata:
    name: openshift-jee-sample
- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: openshift-jee-sample-docker
  spec:
    strategy:
      type: Docker
    source:
      type: Docker
      dockerfile: |-
        FROM openshift/wildfly-101-centos7:latest
        COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
        CMD $STI_SCRIPTS_PATH/run
      binary:
        asFile: ROOT.war
    output:
      to:
        kind: ImageStreamTag
        name: openshift-jee-sample:latest
- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: openshift-jee-sample
  spec:
    strategy:
      type: JenkinsPipeline
      jenkinsPipelineStrategy:
        jenkinsfile: |-
          node("maven") {
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
    triggers:
    - type: ConfigChange

It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the previous example, which overrides the container memory and specifies an environment variable.

Sample BuildConfig that uses the Jenkins Kubernetes Plug-in, specifying memory limit and environment variable

kind: BuildConfig
apiVersion: v1
metadata:
  name: openshift-jee-sample
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        podTemplate(label: "mypod", 1
                    cloud: "openshift", 2
                    inheritFrom: "maven", 3
                    containers: [
            containerTemplate(name: "jnlp", 4
                              image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5
                              resourceRequestMemory: "512Mi", 6
                              resourceLimitMemory: "512Mi", 7
                              envVars: [
              envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8
            ])
          ]) {
          node("mypod") { 9
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
        }
  triggers:
  - type: ConfigChange

1
A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza.
2
The cloud value must be set to openshift.
3
The new pod template can inherit its configuration from an existing pod template. In this case, it inherits from the Maven pod template that is pre-defined by OpenShift Container Platform.
4
This example overrides values in the pre-existing container, which must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the container name jnlp.
5
Specify the container image name again. This is a known issue.
6
A memory request of 512 Mi is specified.
7
A memory limit of 512 Mi is specified.
8
An environment variable CONTAINER_HEAP_PERCENT, with value 0.25, is specified.
9
The node stanza references the name of the defined pod template.

By default, the pod is deleted when the build completes. This behavior can be modified with the plug-in or within a pipeline Jenkinsfile.

12.2.10. Jenkins memory requirements

When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi.

By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container is halted. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible.

If project quotas allow for it, see the recommendations in the Jenkins documentation for how much memory a Jenkins master should have. Those recommendations advise allocating even more memory for the Jenkins master.

It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plug-in. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis.

You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.
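For example, the following sketch instantiates the persistent template with a larger memory limit; the 2Gi value is a placeholder you should size for your workload:

$ oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi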

12.2.11. Additional resources

12.3. Jenkins agent

OpenShift Container Platform provides three images that are suitable for use as Jenkins agents: the Base, Maven, and Node.js images.

The first is a base image for Jenkins agents:

  • It pulls in both the required tools, headless Java and the Jenkins JNLP client, and useful ones, including git, tar, zip, and nss, among others.
  • It establishes the JNLP agent as the entrypoint.
  • It includes the oc client tooling for invoking command line operations from within Jenkins jobs.
  • It provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images.

Two more images that extend the base image are also provided:

  • Maven v3.5 image
  • Node.js v10 image and Node.js v12 image

The Maven and Node.js Jenkins agent images provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories. They allow for the insertion of configuration files and executable scripts for your image.
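A minimal sketch of a Dockerfile that extends the Maven agent image; the tag, package manager, and installed package are assumptions that depend on the base image you extend:

FROM registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>
# Switch to root to install additional packages; dnf is assumed to be available in the UBI base
USER root
RUN dnf install -y --setopt=tsflags=nodocs <extra-package> && dnf clean all
# Return to the non-root user that the agent images are assumed to run as
USER 1001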

Important

Use and extend an appropriate agent image version for your version of OpenShift Container Platform. If the oc client version that is embedded in the agent image is not compatible with the OpenShift Container Platform version, unexpected behavior can result.

12.3.1. Jenkins agent images

The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io.

Jenkins images are available through the Red Hat Registry:

$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>

To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry.

12.3.2. Jenkins agent environment variables

Each Jenkins agent container can be configured with the following environment variables.

For each of the following variables, the definition is followed by its default or example values and settings.

JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT, JENKINS_MAX_HEAP_UPPER_BOUND_MB

These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB.

By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap.

JAVA_MAX_HEAP_PARAM example setting: -Xmx512m

CONTAINER_HEAP_PERCENT default: 0.5, or 50%

JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB

JAVA_INITIAL_HEAP_PARAM, CONTAINER_INITIAL_PERCENT

These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size.

By default, the JVM sets the initial heap size.

JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m

CONTAINER_INITIAL_PERCENT example setting: 0.1, or 10%

CONTAINER_CORE_LIMIT

If set, specifies an integer number of cores used for sizing numbers of internal JVM threads.

Example setting: 2

JAVA_TOOL_OPTIONS

Specifies options to apply to all JVMs running in this container. It is not recommended to override this value.

Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true

JAVA_GC_OPTS

Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value.

Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90

JENKINS_JAVA_OVERRIDES

Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash.

Example settings: -Dfoo -Dbar; -Dfoo=first\ value -Dbar=second\ value

USE_JAVA_VERSION

Specifies the version of Java to use to run the agent in its container. The container base image has two versions of Java installed: java-11 and java-1.8.0. If you extend the container base image, you can specify any alternative version of Java using its associated suffix.

The default value is java-11.

Example setting: java-1.8.0

12.3.3. Jenkins agent memory requirements

A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac, Maven, or Gradle.

By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely.

By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill.

By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads.

12.3.4. Jenkins agent Gradle builds

Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified.

The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required.

  • Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file.
  • Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument.
  • To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file.
  • Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file.
  • Override the Gradle JVM memory parameters by setting the GRADLE_OPTS, JAVA_OPTS, or JAVA_TOOL_OPTIONS environment variables.
  • Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle, or through the -Dorg.gradle.jvmargs command line argument. A combined sketch of these settings follows this list.
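A combined sketch of the gradle.properties and build.gradle settings described above; the heap size is a placeholder value you should tune to fit within the container memory limit:

# gradle.properties (sketch)
org.gradle.daemon=false

// build.gradle (sketch)
java { options.fork = false }

test {
    maxParallelForks = 1
    maxHeapSize = "256m"   // placeholder value
}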

12.3.5. Jenkins agent pod retention

Jenkins agent pods are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plug-in pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported:

  • Always keeps the build pod regardless of build result.
  • Default uses the plug-in value, which is the pod template only.
  • Never always deletes the pod.
  • On Failure keeps the pod if it fails during the build.

You can override pod retention in the pipeline Jenkinsfile:

podTemplate(label: "mypod",
  cloud: "openshift",
  inheritFrom: "maven",
  podRetention: onFailure(), 1
  containers: [
    ...
  ]) {
  node("mypod") {
    ...
  }
}
1
Allowed values for podRetention are never(), onFailure(), always(), and default().
Warning

Pods that are kept might continue to run and count against resource quotas.

12.4. Source-to-image

You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code.

S2I images include:

  • .NET
  • Java
  • Go
  • Node.js
  • Perl
  • PHP
  • Python
  • Ruby

S2I images are available for you to use directly from the OpenShift Container Platform web console by following this procedure:

  1. Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective.
  2. Use the perspective switcher to switch to the Developer perspective.
  3. In the +Add view, select an existing project from the list or use the Project drop-down list to create a new project.
  4. Choose All services under the Developer Catalog tile.
  5. Select the Builder Images type to see the available S2I images.

S2I images are also available through the Cluster Samples Operator. For more information, see Configuring the Cluster Samples Operator.

12.4.1. Source-to-image build process overview

Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps:

  1. Runs the FROM <builder image> command
  2. Copies the source code to a defined location in the builder image
  3. Runs the assemble script in the builder image
  4. Sets the run script in the builder image as the default command

Buildah then creates the container image.
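For example, you can launch an S2I build from the command line by combining a builder image with a source repository; this is a sketch and the repository URL is illustrative:

$ oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git

This pulls the source into the nodejs builder image, runs its assemble script, and sets its run script as the default command, following the steps above.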

12.4.2. Additional resources

12.5. Customizing source-to-image images

Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users. You can customize the behavior of an S2I builder that includes default scripts.

12.5.1. Invoking scripts embedded in an image

Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use-cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory. However, by doing this, you are completely replacing the standard scripts. In some cases, replacing the scripts is acceptable, but, in other scenarios, you can run a few commands before or after the scripts while retaining the logic of the script provided in the image. To reuse the standard scripts, you can create a wrapper script that runs custom logic and delegates further work to the default scripts in the image.

Procedure

  1. Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside of the builder image:

    $ podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7

    Example output

    image:///usr/libexec/s2i

    You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory.

  2. Create a script that includes an invocation of one of the standard scripts wrapped in other commands:

    .s2i/bin/assemble script

    #!/bin/bash
    echo "Before assembling"
    
    /usr/libexec/s2i/assemble
    rc=$?
    
    if [ $rc -eq 0 ]; then
        echo "After successful assembling"
    else
        echo "After failed assembling"
    fi
    
    exit $rc

    This example shows a custom assemble script that prints the message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script.

    Important

    When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly. The use of exec also precludes the ability to run additional commands after invoking the default image run script.

    .s2i/bin/run script

    #!/bin/bash
    echo "Before running application"
    exec /usr/libexec/s2i/run
