Chapter 4. Other Images
4.1. Overview
This topic group includes information on other container images available for OpenShift Container Platform users.
4.2. Jenkins
4.2.1. Overview
OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery.
This image also includes a sample Jenkins job, which triggers a new build of a `BuildConfig` defined in OpenShift Container Platform, tests the output of that build, and then, on a successful build, retags the output to indicate that the build is ready for production.
4.2.2. Versions
OpenShift Container Platform follows the LTS releases of Jenkins. Currently, OpenShift Container Platform provides versions 1.x and 2.x.
4.2.3. Images
These images come in two flavors, depending on your needs:
- RHEL 7
- CentOS 7
RHEL 7 Based Images
The RHEL 7 images are available through the Red Hat Registry:
$ docker pull registry.access.redhat.com/openshift3/jenkins-1-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7
CentOS 7 Based Images
These images are available on Docker Hub:
$ docker pull openshift/jenkins-1-centos7
$ docker pull openshift/jenkins-2-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift Container Platform resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift Container Platform images.
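For example, an ImageStream that points at the CentOS 7 Jenkins image on Docker Hub might look like the following sketch (the stream name and tag shown here are illustrative choices, not product defaults):

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: jenkins-centos7
spec:
  tags:
  - name: latest
    from:
      # Reference the external image directly; OpenShift imports its metadata.
      kind: DockerImage
      name: openshift/jenkins-2-centos7:latest
```

Your builds and deployments can then reference the image stream tag, for example `jenkins-centos7:latest`, instead of the external registry location.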
4.2.4. Configuration and Usage
4.2.4.1. Initializing Jenkins
You can manage Jenkins authentication in two ways:
- OpenShift Container Platform OAuth authentication provided by the OpenShift Login plug-in.
- Standard authentication provided by Jenkins.
4.2.4.1.1. OpenShift Container Platform OAuth authentication
OAuth authentication is activated by configuring the Configure Global Security panel in the Jenkins UI, or by setting the `OPENSHIFT_ENABLE_OAUTH` environment variable on the Jenkins Deployment Config to anything other than `false`. This activates the OpenShift Login plug-in, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server.
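As a minimal sketch, the variable can be set in the container spec of the Jenkins deployment configuration (the container name `jenkins` is an assumption here, not a guaranteed default):

```yaml
spec:
  template:
    spec:
      containers:
      - name: jenkins
        env:
        # Any non-empty value other than "false" enables OAuth.
        - name: OPENSHIFT_ENABLE_OAUTH
          value: "true"
```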
Valid credentials are controlled by the OpenShift Container Platform identity provider. For example, if Allow All is the default identity provider, you can provide any non-empty string for both the user name and password.
Jenkins supports both browser and non-browser access.
Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform Roles dictate the specific Jenkins permissions the user will have. Users with the `admin` role have the traditional Jenkins administrative user permissions. Users with the `edit` or `view` role have progressively fewer permissions. See the Jenkins image source repository README for the specifics of the OpenShift roles to Jenkins permissions mappings.

The `admin` user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used, unless the OpenShift Container Platform cluster administrator explicitly defines that user in the OpenShift Container Platform identity provider and assigns the `admin` role to the user.
Jenkins users' permissions can be changed after the users are initially established. The OpenShift Login plug-in polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plug-in polls OpenShift Container Platform.
You can control how often the polling occurs with the `OPENSHIFT_PERMISSIONS_POLL_INTERVAL` environment variable. The default polling interval is five minutes.
The easiest way to create a new Jenkins service using OAuth authentication is to use a template as described below.
4.2.4.1.2. Jenkins Standard Authentication
Jenkins authentication is used by default if the image is run directly, without using a template.
The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are `admin` and `password`. Configure the default password by setting the `JENKINS_PASSWORD` environment variable when using (and only when using) standard Jenkins authentication.
To create a new Jenkins application using standard Jenkins authentication:
$ oc new-app -e \
    JENKINS_PASSWORD=<password> \
    openshift/jenkins-1-centos7
4.2.4.2. Environment Variables
The Jenkins server can be configured with the following environment variables:
| Variable name | Description |
|---|---|
| `JENKINS_PASSWORD` | The password for the `admin` user when using standard Jenkins authentication. |
| `OPENSHIFT_ENABLE_OAUTH` | Determines whether the OpenShift Login plug-in manages authentication when logging in to Jenkins. Enabled when set to any non-empty value other than `false`. |
| `OPENSHIFT_PERMISSIONS_POLL_INTERVAL` | Specifies in seconds how often the OpenShift Login plug-in polls OpenShift Container Platform for the permissions associated with each user defined in Jenkins. |
| `OVERRIDE_PV_CONFIG` | When running this image with an OpenShift Container Platform persistent volume for the Jenkins configuration directory, the configuration is transferred from the image to the volume only on the first startup of the image, because the volume is assigned when the persistent volume claim is created. If you create a custom image that extends this image and updates configuration in the custom image after the initial startup, by default it is not copied over unless you set this environment variable to a non-empty value. |
| `OVERRIDE_PV_PLUGINS` | When running this image with an OpenShift Container Platform persistent volume for the Jenkins configuration directory, the plug-ins are transferred from the image to the volume only on the first startup of the image, because the volume is assigned when the persistent volume claim is created. If you create a custom image that extends this image and updates plug-ins in the custom image after the initial startup, by default they are not copied over unless you set this environment variable to a non-empty value. |
4.2.4.3. Cross Project Access
If you run Jenkins somewhere other than as a deployment within the same project, you must provide an access token to Jenkins so that it can access your project.
Identify the secret for the service account that has appropriate permissions to access the project Jenkins needs to access:
$ oc describe serviceaccount jenkins
Name:       default
Labels:     <none>
Secrets:    {  jenkins-token-uyswp    }
            {  jenkins-dockercfg-xcr3d    }
Tokens:     jenkins-token-izv1u
            jenkins-token-uyswp
In this case the secret is named `jenkins-token-uyswp`.
Retrieve the token from the secret:
$ oc describe secret <secret name from above>    # for example, jenkins-token-uyswp
Name:           jenkins-token-uyswp
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7
Type:           kubernetes.io/service-account-token

Data
====
ca.crt: 1066 bytes
token:  eyJhbGc..<content cut>....wRA
The token field contains the token value Jenkins needs to access the project.
4.2.4.4. Volume Mount Points
The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration:
- /var/lib/jenkins - This is the data directory where Jenkins stores configuration files including job definitions.
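A sketch of a pod spec fragment that mounts a persistent volume claim at that path (the claim name `jenkins-data` and container name `jenkins` are illustrative, not defaults):

```yaml
spec:
  containers:
  - name: jenkins
    volumeMounts:
    # Jenkins configuration and job data live under this directory.
    - name: jenkins-data
      mountPath: /var/lib/jenkins
  volumes:
  - name: jenkins-data
    persistentVolumeClaim:
      claimName: jenkins-data
```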
4.2.5. Creating a Jenkins Service from a Template
Templates provide parameter fields to define all the environment variables, including the password, with predefined defaults. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should have been registered in the default openshift project by your cluster administrator during the initial cluster setup. See Loading the Default Image Streams and Templates for more details, if required.
The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart.
A pod may be restarted when it is moved to another node, or when an update of the deployment configuration triggers a redeployment.
- `jenkins-ephemeral` uses ephemeral storage. On pod restart, all data is lost. This template is useful for development or testing only.
- `jenkins-persistent` uses a persistent volume store. Data survives a pod restart. To use a persistent volume store, the cluster administrator must define a persistent volume pool in the OpenShift Container Platform deployment.
Once you have selected which template you want, you must instantiate the template to be able to use Jenkins:
Creating a New Jenkins Service
- Ensure that the default image streams and templates are already installed.
Create a new Jenkins application using:
- A persistent volume:
$ oc new-app jenkins-persistent
- Or an `emptyDir` type volume (where configuration does not persist across pod restarts):
$ oc new-app jenkins-ephemeral
If you instantiate the template against releases prior to v3.4 of OpenShift Container Platform, standard Jenkins authentication is used, and the default `admin` account will exist with password `password`. See Jenkins Standard Authentication for details about changing this password.
4.2.6. Using Jenkins as a Source-To-Image builder
To customize the official OpenShift Container Platform Jenkins image, you have two options:
- Use Docker layering.
- Use the image as a Source-To-Image builder, described here.
You can use S2I to copy your custom Jenkins job definitions, add plug-ins, or replace the provided config.xml file with your own custom configuration.
In order to include your modifications in the Jenkins image, you need to have a Git repository with the following directory structure:
- plugins - This directory contains binary Jenkins plug-ins you want to copy into Jenkins.
- plugins.txt - This file lists the plug-ins you want to install, one per line, in the format `pluginId:pluginVersion`.
- configuration/jobs - This directory contains the Jenkins job definitions.
- configuration/config.xml - This file contains your custom Jenkins configuration.
The contents of the configuration/ directory will be copied into the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml, there.
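For illustration, a plugins.txt in that format might contain entries like the following (the plug-in IDs and versions here are hypothetical examples, not recommendations):

```
credentials:2.1.13
durable-task:1.17
```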
The following is an example build configuration that customizes the Jenkins image in OpenShift Container Platform:
apiVersion: v1
kind: BuildConfig
metadata:
  name: custom-jenkins-build
spec:
  source: 1
    git:
      uri: https://github.com/custom/repository
    type: Git
  strategy: 2
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins:latest
        namespace: openshift
    type: Source
  output: 3
    to:
      kind: ImageStreamTag
      name: custom-jenkins:latest
1. The `source` field defines the source Git repository with the layout described above.
2. The `strategy` field defines the original Jenkins image to use as a source image for the build.
3. The `output` field defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image.
4.2.7. Using the Jenkins Kubernetes Plug-in to Run Jobs
The official OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plug-in that allows Jenkins slaves to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform.
To use the Kubernetes plug-in, OpenShift Container Platform provides three images suitable for use as Jenkins slaves: the Base, Maven, and Node.js images.
The first is a base image for Jenkins slaves:
- It pulls in both the required tools (headless Java, the Jenkins JNLP client) and the useful ones (including git, tar, zip, and nss among others).
- It establishes the JNLP slave agent as the entrypoint.
- It includes the oc client tooling for invoking command-line operations from within Jenkins jobs.
- It provides Dockerfiles for both CentOS and RHEL images.
Two additional images that extend the base image are also provided: a Maven image and a Node.js image.
Both the Maven and Node.js slave images are configured as Kubernetes Pod Template images within the OpenShift Container Platform Jenkins image’s configuration for the Kubernetes plug-in. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their "Restrict where this project can be run" setting. If the label is applied, execution of the given job will be done under an OpenShift Container Platform pod running the respective slave image.
The Maven and Node.js Jenkins slave images provide Dockerfiles for both CentOS and RHEL that you can reference when building new slave images. Also note the `contrib` and `contrib/bin` subdirectories. They allow for the insertion of configuration files and executable scripts for your image.
The Jenkins image also provides auto-discovery and auto-configuration of slave images for the Kubernetes plug-in. With the OpenShift Sync plug-in, on start-up the Jenkins image searches within the project in which it is running, or the projects specifically listed in the plug-in's configuration, for the following:
- Image streams that have the label `role` set to `jenkins-slave`.
- Image stream tags that have the annotation `role` set to `jenkins-slave`.
- ConfigMaps that have the label `role` set to `jenkins-slave`.
When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plug-in configuration so you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream.
The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plug-in pod template. You can control the label field of the Kubernetes plug-in pod template by setting an annotation on the image stream or image stream tag object with the key `slave-label`. Otherwise, the name is used as the label.
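As a sketch, an image stream labeled for auto-discovery, with a `slave-label` annotation overriding the default label, might look like this (the names are illustrative):

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: custom-slave
  labels:
    # Marks this image stream for discovery by the OpenShift Sync plug-in.
    role: jenkins-slave
  annotations:
    # Overrides the label field of the generated pod template.
    slave-label: my-custom-slave
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: openshift/jenkins-slave-maven-centos7:latest
```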
When it finds a ConfigMap with the appropriate label, it assumes that each value in the ConfigMap's key-value data payload contains XML consistent with the configuration format for Jenkins and the Kubernetes plug-in pod templates. A key differentiator to note when using ConfigMaps, instead of image streams or image stream tags, is that you can control all the various fields of the Kubernetes plug-in pod template.
The following is an example ConfigMap:
apiVersion: v1
items:
- apiVersion: v1
  data:
    template1: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>template1</name>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>template1</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <volumes/>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-slave-maven-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu></resourceRequestCpu>
            <resourceRequestMemory></resourceRequestMemory>
            <resourceLimitCpu></resourceLimitCpu>
            <resourceLimitMemory></resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
        <nodeProperties/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    template2: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>template2</name>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>template2</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <volumes/>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-slave-maven-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu></resourceRequestCpu>
            <resourceRequestMemory></resourceRequestMemory>
            <resourceLimitCpu></resourceLimitCpu>
            <resourceLimitMemory></resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
        <nodeProperties/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  kind: ConfigMap
  metadata:
    labels:
      role: jenkins-slave
    name: jenkins-slave
    namespace: myproject
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
After startup, the OpenShift Sync plug-in monitors the API server of OpenShift Container Platform for updates to `ImageStreams`, `ImageStreamTags`, and `ConfigMaps` and adjusts the configuration of the Kubernetes plug-in.
In particular, the following rules will apply:
- Removal of the label or annotation from the `ConfigMap`, `ImageStream`, or `ImageStreamTag` results in the deletion of any existing `PodTemplate` from the configuration of the Kubernetes plug-in.
- Similarly, if those objects are removed, the corresponding configuration is removed from the Kubernetes plug-in.
- Conversely, either the creation of appropriately labeled or annotated `ConfigMap`, `ImageStream`, or `ImageStreamTag` objects, or the adding of labels after their initial creation, leads to the creation of a `PodTemplate` in the Kubernetes plug-in configuration.
- In the case of the `PodTemplate` via `ConfigMap` form, changes to the `ConfigMap` data for the `PodTemplate` are applied to the `PodTemplate` settings in the Kubernetes plug-in configuration, and override any changes made to the `PodTemplate` through the Jenkins UI in the interim between changes to the `ConfigMap`.
To use a container image as a Jenkins slave, the image must run the slave agent as an entrypoint. For more details about this, refer to the official Jenkins documentation.
4.2.7.1. Permission Considerations
In the previous ConfigMap example, the `<serviceAccount>` element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod. The service account credentials mounted into the pod, along with the permissions associated with the service account, control which operations against the OpenShift Container Platform master are allowed from the pod.
Consider the following when working with the service accounts used by pods launched by the Kubernetes plug-in running in the OpenShift Container Platform Jenkins image:
- If you use the example template for Jenkins provided by OpenShift Container Platform, the `jenkins` service account is defined with the `edit` role for the project Jenkins is running in, and the master Jenkins pod has that service account mounted.
- The two default Maven and NodeJS pod templates injected into the Jenkins configuration are also set to use the same service account as the master.
- Any pod templates auto-discovered by the OpenShift Sync plug-in, as a result of image streams or image stream tags having the required label or annotations, have their service account set to the master's service account.
- For the other ways you can provide a pod template definition to Jenkins and the Kubernetes plug-in, you must explicitly specify the service account to use. Those other ways include the Jenkins console, the `podTemplate` pipeline DSL provided by the Kubernetes plug-in, or labeling a ConfigMap whose data is the XML configuration for a pod template.
- If you do not specify a value for the service account, the `default` service account is used.
- Ensure that whatever service account is used has the necessary permissions and roles defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod.
4.2.8. Tutorial
For more details on the sample job included in this image, see this tutorial.
4.2.9. OpenShift Container Platform Pipeline Plug-in
The Jenkins image’s list of pre-installed plug-ins includes the OpenShift Pipeline plug-in, which assists in the creation of CI/CD workflows in Jenkins that run against an OpenShift Container Platform server. A series of build steps, post-build actions, and SCM-style polling are provided, which equate to administrative and operational actions on the OpenShift Container Platform server and the API artifacts hosted there.
In addition to being accessible from the classic "freestyle" form of Jenkins job, the build steps as of version 1.0.14 of the OpenShift Container Platform Pipeline Plug-in are also available to Jenkins Pipeline jobs via the DSL extension points provided by the Jenkins Pipeline Plug-in. The OpenShift Jenkins Pipeline build strategy sample illustrates how to use the OpenShift Pipeline plugin DSL versions of its steps.
The sample Jenkins job that is pre-configured in the Jenkins image utilizes the OpenShift Container Platform pipeline plug-in and serves as an example of how to leverage the plug-in for creating CI/CD flows for OpenShift Container Platform in Jenkins.
See the plug-in's README for a detailed description of what is available.
4.2.10. OpenShift Container Platform Client Plug-in
The OpenShift Container Platform Client Plug-in aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with OpenShift Container Platform. The plug-in uses the oc
binary, which must be available on the nodes executing the script.
This plug-in is fully supported and is included in the Jenkins image. It provides:
- A Fluent-style syntax for use in Jenkins Pipelines.
-
Use of and exposure to any option available with
oc
. - Integration with Jenkins credentials and clusters.
- Continued support for classic Jenkins Freestyle jobs.
See the OpenShift Pipeline Builds tutorial and the plug-in’s README for more information.
4.2.11. OpenShift Container Platform Sync Plug-in
To facilitate OpenShift Container Platform Pipeline build strategy for integration between Jenkins and OpenShift Container Platform, the OpenShift Sync plug-in monitors the API server of OpenShift Container Platform for updates to BuildConfigs
and Builds
that employ the Pipeline strategy and either creates Jenkins Pipeline projects (when a BuildConfig
is created) or starts jobs in the resulting projects (when a Build
is started).
4.2.12. Kubernetes Plug-in
The Kubernetes plug-in is used to run Jenkins slaves as pods on your cluster. The auto-configuration of the Kubernetes plug-in is described in Using the Jenkins Kubernetes Plug-in to Run Jobs.
4.2.13. Memory Requirements
The default memory allocation for the Jenkins container is 512Mi, regardless of whether you are using the Jenkins Ephemeral or Jenkins Persistent template. While Jenkins can easily operate within this limit, be mindful of any `sh` invocations that you might make from your pipelines, such as shell scripts, invoking the `oc` command through the OpenShift DSL, or monitoring PIDs, as these can quickly use your memory allocation.
You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.
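In the Jenkins templates, the MEMORY_LIMIT parameter populates the container's memory limit, so the resulting deployment configuration contains a resources block along these lines (1Gi is an illustrative value, and the exact template wiring should be verified against your template version):

```yaml
resources:
  limits:
    memory: 1Gi
```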
4.3. Other Container Images
4.3.1. Overview
If you want to use container images not found in the Red Hat Container Catalog, you can use other arbitrary container images in your OpenShift Container Platform instance, for example, those found on Docker Hub.
For OpenShift Container Platform-specific guidelines on running containers using an arbitrarily assigned user ID, see Support Arbitrary User IDs in the Creating Images guide.
For supportability details, see the Production Support Scope of Coverage as defined in the OpenShift Container Platform Support Policy.
4.3.2. Security Warning
OpenShift Container Platform runs containers on hosts in the cluster, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access the hosts' Docker daemon and perform `docker build` and `docker push` operations. As such, cluster administrators should be aware of the inherent security risks associated with performing `docker run` operations on arbitrary images, as they effectively have root access. This is particularly relevant for `docker build` operations.
Exposure to harmful containers can be limited by assigning specific builds to nodes so that any exposure is limited to those nodes. To do this, see the Assigning Builds to Specific Nodes section of the Developer Guide. For cluster administrators, see the Configuring Global Build Defaults and Overrides section of the Installation and Configuration Guide.
You can also use security context constraints to control the actions that a pod can perform and what it has the ability to access. For instructions on how to enable images to run with USER in the Dockerfile, see Managing Security Context Constraints (requires a user with cluster-admin privileges).
For more information, see these articles: