
CI/CD

OpenShift Container Platform 4.10

Contains information on builds, pipelines and GitOps for OpenShift Container Platform

Red Hat OpenShift Documentation Team

Abstract

CI/CD for the OpenShift Container Platform

Chapter 1. OpenShift Container Platform CI/CD overview

OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, OpenShift Container Platform provides the following CI/CD solutions:

  • OpenShift Builds
  • OpenShift Pipelines
  • OpenShift GitOps

1.1. OpenShift Builds

With OpenShift Builds, you can create cloud-native apps by using a declarative build process. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object typically builds a runnable image and pushes it to a container image registry.

OpenShift Builds provides the following extensible support for build strategies:

  • Docker build
  • Source-to-image (S2I) build
  • Custom build

For more information, see Understanding image builds

1.2. OpenShift Pipelines

OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. Each step can scale independently to meet on-demand pipeline requirements, with predictable outcomes.

For more information, see Understanding OpenShift Pipelines

1.3. OpenShift GitOps

OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles.

For more information, see Understanding OpenShift GitOps

1.4. Jenkins

Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or a certified Helm chart.
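
For example, if the Samples Operator has made the Jenkins templates available in your cluster, the following is a minimal sketch of deploying an ephemeral Jenkins instance from the CLI (the jenkins-ephemeral template name comes from the default sample templates; the project is a placeholder):

$ oc new-app jenkins-ephemeral -n <project>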

Chapter 2. Builds

2.1. Understanding image builds

2.1.1. Builds

A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.

OpenShift Container Platform uses Kubernetes by creating containers from build images and pushing them to a container image registry.

Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.

The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are three primary build strategies available:

  • Docker build
  • Source-to-image (S2I) build
  • Custom build

By default, docker builds and S2I builds are supported.

The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified.

Additionally, the pipeline build strategy can be used to implement sophisticated workflows:

  • Continuous integration
  • Continuous deployment
2.1.1.1. Docker build

OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation.

Tip

If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation.
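For reference, a minimal sketch of setting buildArgs in a docker strategy (the argument name and value are illustrative):

strategy:
  dockerStrategy:
    buildArgs:
      - name: "version"
        value: "latest"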

2.1.1.2. Source-to-image build

Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on.

2.1.1.3. Custom build

The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process.

A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images.

Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds.

2.1.1.4. Pipeline build
Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a source control management (SCM) system.

The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type.

Pipeline workflows are defined in a jenkinsfile, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
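
For illustration, a minimal sketch of a BuildConfig that embeds a jenkinsfile directly in the jenkinsPipelineStrategy section (the pipeline steps are illustrative):

kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: sample-pipeline
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('nodejs') {
          stage('build') {
            sh 'echo "building the application"'
          }
        }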

2.2. Understanding build configurations

The following sections define the concepts of a build and a build configuration, and outline the primary build strategies available.

2.2.1. BuildConfigs

A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig, which is a REST object that can be used in a POST to the API server to create a new instance.

A build configuration, or BuildConfig, is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input.

Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later.
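
For example, a sketch of generating and then inspecting a BuildConfig with the CLI (the repository URL matches the example used later in this section; the generated name typically follows the repository name):

$ oc new-app https://github.com/openshift/ruby-hello-world

$ oc get buildconfigs

$ oc edit buildconfig <buildconfig_name>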

The following example BuildConfig results in a new build every time a container image tag or the source code changes:

BuildConfig object definition

kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: "ruby-sample-build" 1
spec:
  runPolicy: "Serial" 2
  triggers: 3
    -
      type: "GitHub"
      github:
        secret: "secret101"
    - type: "Generic"
      generic:
        secret: "secret101"
    -
      type: "ImageChange"
  source: 4
    git:
      uri: "https://github.com/openshift/ruby-hello-world"
  strategy: 5
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "ruby-20-centos7:latest"
  output: 6
    to:
      kind: "ImageStreamTag"
      name: "origin-ruby-sample:latest"
  postCommit: 7
      script: "bundle exec rake test"

1
This specification creates a new BuildConfig named ruby-sample-build.
2
The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial, which means new builds run sequentially, not simultaneously.
3
You can specify a list of triggers, which cause a new build to be created.
4
The source section defines the source of the build. The source type determines the primary source of input, and can be either Git, to point to a code repository location, Dockerfile, to build from an inline Dockerfile, or Binary, to accept binary payloads. It is possible to have multiple sources at once. For more information about each source type, see "Creating build inputs".
5
The strategy section describes the build strategy used to execute the build. You can specify a Source, Docker, or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build.
6
After the container image is successfully built, it is pushed into the repository described in the output section.
7
The postCommit section defines an optional build hook.

2.3. Creating build inputs

The following sections provide an overview of build inputs, explain how to use inputs to provide source content for builds to operate on, and describe how to use build environments and create secrets.

2.3.1. Build inputs

A build input provides source content for builds to operate on. You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence:

  • Inline Dockerfile definitions
  • Content extracted from existing images
  • Git repositories
  • Binary (Local) inputs
  • Input secrets
  • External artifacts

You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs.

You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types.

When you run a build:

  1. A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path.
  2. The build process changes directories into the contextDir, if one is defined.
  3. The inline Dockerfile, if any, is written to the current directory.
  4. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build.

The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type.

source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git 1
    ref: "master"
  images:
  - from:
      kind: ImageStreamTag
      name: myinputimage:latest
      namespace: mynamespace
    paths:
    - destinationDir: app/dir/injected/dir 2
      sourcePath: /usr/lib/somefile.jar
  contextDir: "app/dir" 3
  dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4
1
The repository to be cloned into the working directory for the build.
2
/usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir.
3
The working directory for the build becomes <original_workingdir>/app/dir.
4
A Dockerfile with this content is created in <original_workingdir>/app/dir, overwriting any existing file with that name.

2.3.2. Dockerfile source

When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile. This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content.

The source definition is part of the spec section in the BuildConfig:

source:
  dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1
1
The dockerfile field contains an inline Dockerfile that is built.

The typical use for this field is to provide a Dockerfile to a docker strategy build.

2.3.3. Image source

You can add additional files to the build process with images. Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy out of the image and the destination to place them in the build context.

The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /., then the content of the directory is copied, but the directory itself is not created at the destination.

Image inputs are specified in the source definition of the BuildConfig:

source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git
    ref: "master"
  images: 1
  - from: 2
      kind: ImageStreamTag
      name: myinputimage:latest
      namespace: mynamespace
    paths: 3
    - destinationDir: injected/dir 4
      sourcePath: /usr/lib/somefile.jar 5
  - from:
      kind: ImageStreamTag
      name: myotherinputimage:latest
      namespace: myothernamespace
    pullSecret: mysecret 6
    paths:
    - destinationDir: injected/dir
      sourcePath: /usr/lib/somefile.jar
1
An array of one or more input images and files.
2
A reference to the image containing the files to be copied.
3
An array of source/destination paths.
4
The directory relative to the build root where the build process can access the file.
5
The location of the file to be copied out of the referenced image.
6
An optional secret provided if credentials are needed to access the input image.
Note

If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project.

Optionally, if an input image requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run:

$ oc secrets link builder dockerhub
Note

This feature is not supported for builds using the custom strategy.

2.3.4. Git source

When specified, source code is fetched from the supplied location.

If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository.

The source definition is part of the spec section in the BuildConfig:

source:
  git: 1
    uri: "https://github.com/openshift/ruby-hello-world"
    ref: "master"
  contextDir: "app/dir" 2
  dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3
1
The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1, a tag, or a branch name. The default value of the ref field is master.
2
The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field.
3
If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository.

If the ref field denotes a pull request, the system uses a git fetch operation and then checks out FETCH_HEAD.

When no ref value is provided, OpenShift Container Platform performs a shallow clone (--depth=1). In this case, only the files associated with the most recent commit on the default branch (typically master) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main).

Warning

Git clone operations that go through a proxy that is performing man-in-the-middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work.

2.3.4.1. Using a proxy

If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field.

Note

Your source URI must use the HTTP or HTTPS protocol for this to work.

source:
  git:
    uri: "https://github.com/openshift/ruby-hello-world"
    ref: "master"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
    noProxy: somedomain.com, otherdomain.com
Note

For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig. The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs.

Additional resources

  • You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy.
2.3.4.2. Source Clone Secrets

Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access that it would not normally have, for example to private repositories or repositories with self-signed or untrusted SSL certificates.

The following source clone secret configurations are supported:

  • .gitconfig File
  • Basic Authentication
  • SSH Key Authentication
  • Trusted Certificate Authorities
Note

You can also use combinations of these configurations to meet your specific needs.

2.3.4.2.1. Automatically adding a source clone secret to a build configuration

When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration.

To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secret must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri-. The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig.

Prerequisites

A URI pattern must consist of:

  • A valid scheme: *://, git://, http://, https://, or ssh://
  • A host: * or a valid hostname or IP address, optionally preceded by *.
  • A path: /* or / followed by any characters, optionally including * characters

In all of the above, a * character is interpreted as a wildcard.

Important

URI patterns must match Git source URIs which are conformant to RFC3986. Do not include a username (or password) component in a URI pattern.

For example, if you use ssh://git@bitbucket.atlassian.com:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://git@bitbucket.atlassian.com:7999/*).

$ oc annotate secret mysecret \
    'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'

Procedure

If multiple secrets match the Git URI of a particular BuildConfig, OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example.

The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com:

kind: Secret
apiVersion: v1
metadata:
  name: matches-all-corporate-servers-https-only
  annotations:
    build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/*
data:
  ...
---
kind: Secret
apiVersion: v1
metadata:
  name: override-for-my-dev-servers-https-only
  annotations:
    build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/*
    build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/*
data:
  ...
  • Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using:

    $ oc annotate secret mysecret \
        'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'
2.3.4.2.2. Manually adding a source clone secret

Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, the secret is named basicsecret.

apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-image:latest"
  source:
    git:
      uri: "https://github.com/user/app.git"
    sourceSecret:
      name: "basicsecret"
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "python-33-centos7:latest"

Procedure

You can also use the oc set build-secret command to set the source clone secret on an existing build configuration.

  • To set the source clone secret on an existing build configuration, enter the following command:

    $ oc set build-secret --source bc/sample-build basicsecret
2.3.4.2.3. Creating a secret from a .gitconfig file

If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then to your BuildConfig.

Procedure

  • To create a secret from a .gitconfig file:

    $ oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>
Note

SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file:

[http]
        sslVerify=false
2.3.4.2.4. Creating a secret from a .gitconfig file for secured Git

If your Git server is secured with two-way SSL and a user name and password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file.

Prerequisites

  • You must have Git credentials.

Procedure

Add the certificate files to your source build and add references to the certificate files in the .gitconfig file.

  1. Add the client.crt, cacert.crt, and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code.
  2. In the .gitconfig file for the server, add the [http] section shown in the following example:

    # cat .gitconfig

    Example output

    [user]
            name = <name>
            email = <email>
    [http]
            sslVerify = false
            sslCert = /var/run/secrets/openshift.io/source/client.crt
            sslKey = /var/run/secrets/openshift.io/source/client.key
            sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt

  3. Create the secret:

    $ oc create secret generic <secret_name> \
    --from-literal=username=<user_name> \ 1
    --from-literal=password=<password> \ 2
    --from-file=.gitconfig=.gitconfig \
    --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \
    --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \
    --from-file=client.key=/var/run/secrets/openshift.io/source/client.key
    1
    The user’s Git user name.
    2
    The password for this user.
Important

To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build.

Additional resources

  • /var/run/secrets/openshift.io/source/ folder in the application source code.
2.3.4.2.5. Creating a secret from source code basic authentication

Basic authentication requires either a combination of --username and --password, or a token to authenticate against the software configuration management (SCM) server.

Prerequisites

  • User name and password to access the private repository.

Procedure

Create the secret before using your user name and password to access the private repository:

    $ oc create secret generic <secret_name> \
        --from-literal=username=<user_name> \
        --from-literal=password=<password> \
        --type=kubernetes.io/basic-auth
  2. Create a basic authentication secret with a token:

    $ oc create secret generic <secret_name> \
        --from-literal=password=<token> \
        --type=kubernetes.io/basic-auth
2.3.4.2.6. Creating a secret from source code SSH key authentication

SSH key based authentication requires a private SSH key.

The repository keys are usually located in the $HOME/.ssh/ directory, and are named id_dsa.pub, id_ecdsa.pub, id_ed25519.pub, or id_rsa.pub by default.

Procedure

  1. Generate SSH key credentials:

    $ ssh-keygen -t ed25519 -C "your_email@example.com"
    Note

    Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank.

    Two files are created: the public key and a corresponding private key (one of id_dsa, id_ecdsa, id_ed25519, or id_rsa). With both of these in place, consult your source control management (SCM) system’s manual on how to upload the public key. The private key is used to access your private repository.

  2. Before using the SSH key to access the private repository, create the secret:

    $ oc create secret generic <secret_name> \
        --from-file=ssh-privatekey=<path/to/ssh/private/key> \
        --from-file=<path/to/known_hosts> \ 1
        --type=kubernetes.io/ssh-auth
    1
Optional: Adding this field enables strict server host key checking.
    Warning

    Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack.

    Note

    Ensure that the known_hosts file includes an entry for the host of your source code.

2.3.4.2.7. Creating a secret from source code trusted certificate authorities

The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification.

If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented.

Procedure

Create a secret with a CA certificate file.

  1. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command:

    $ cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt
  2. Create the secret:

    $ oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1
    1
    You must use the key name ca.crt.
2.3.4.2.8. Source secret combinations

You can combine the different methods for creating source clone secrets for your specific needs.

2.3.4.2.8.1. Creating an SSH-based authentication secret with a .gitconfig file

You can combine the different methods for creating source clone secrets for your specific needs, such as an SSH-based authentication secret with a .gitconfig file.

Prerequisites

  • SSH authentication
  • .gitconfig file

Procedure

  • To create an SSH-based authentication secret with a .gitconfig file, run:

    $ oc create secret generic <secret_name> \
        --from-file=ssh-privatekey=<path/to/ssh/private/key> \
        --from-file=<path/to/.gitconfig> \
        --type=kubernetes.io/ssh-auth
2.3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate

You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate.

Prerequisites

  • .gitconfig file
  • CA certificate

Procedure

  • To create a secret that combines a .gitconfig file and CA certificate, run:

    $ oc create secret generic <secret_name> \
        --from-file=ca.crt=<path/to/certificate> \
        --from-file=<path/to/.gitconfig>
2.3.4.2.8.3. Creating a basic authentication secret with a CA certificate

You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate.

Prerequisites

  • Basic authentication credentials
  • CA certificate

Procedure

  • To create a basic authentication secret with a CA certificate, run:

    $ oc create secret generic <secret_name> \
        --from-literal=username=<user_name> \
        --from-literal=password=<password> \
        --from-file=ca-cert=</path/to/file> \
        --type=kubernetes.io/basic-auth
2.3.4.2.8.4. Creating a basic authentication secret with a .gitconfig file

You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and .gitconfig file.

Prerequisites

  • Basic authentication credentials
  • .gitconfig file

Procedure

  • To create a basic authentication secret with a .gitconfig file, run:

    $ oc create secret generic <secret_name> \
        --from-literal=username=<user_name> \
        --from-literal=password=<password> \
        --from-file=</path/to/.gitconfig> \
        --type=kubernetes.io/basic-auth
2.3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate

You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate.

Prerequisites

  • Basic authentication credentials
  • .gitconfig file
  • CA certificate

Procedure

  • To create a basic authentication secret with a .gitconfig file and CA certificate, run:

    $ oc create secret generic <secret_name> \
        --from-literal=username=<user_name> \
        --from-literal=password=<password> \
        --from-file=</path/to/.gitconfig> \
        --from-file=ca-cert=</path/to/file> \
        --type=kubernetes.io/basic-auth

2.3.5. Binary (local) source

Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds.

This source type is unique in that it is leveraged solely based on your use of the oc start-build command.

Note

Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console.

To utilize binary builds, invoke oc start-build with one of these options:

  • --from-file: The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context.
  • --from-dir and --from-repo: The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir, you can also specify a URL to an archive, which is extracted.
  • --from-archive: The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir; an archive is created on your host first, whenever the argument to these options is a directory.

In each of the previously listed cases:

  • If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends.
  • If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence.

Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive. When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported, and it is not possible to use a custom TLS certificate or disable certificate validation.

When using oc new-build --binary=true, the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary, meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data.
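
As a minimal sketch (the ruby builder image stream and the BuildConfig name are assumptions), you can create a binary build configuration and then stream the contents of the current directory to it:

$ oc new-build ruby --binary=true --name=myrubyapp

$ oc start-build myrubyapp --from-dir=. --follow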

The Dockerfile and contextDir source options have special meaning with binary builds.

Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile for any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream.

In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build.

2.3.6. Input secrets and config maps

Important

To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies.

In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose.

For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following:

  1. A settings.xml file configured with the mirror’s URL and connection settings.
  2. A private key referenced in the settings file, such as ~/.ssh/id_rsa.

For security reasons, you do not want to expose your credentials in the application image.

This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more.

2.3.6.1. What is a secret?

The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod.

YAML Secret Object Definition

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: my-namespace
type: Opaque 1
data: 2
  username: <username> 3
  password: <password>
stringData: 4
  hostname: myapp.mydomain.com 5

1
Indicates the structure of the secret’s key names and values.
2
The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary.
3
The value associated with keys in the data map must be base64 encoded.
4
Entries in the stringData map are converted to base64 and then moved to the data map automatically. This field is write-only. The values are only returned by the data field.
5
The value associated with keys in the stringData map is made up of plain text strings.
2.3.6.1.1. Properties of secrets

Key properties include:

  • Secret data can be referenced independently from its definition.
  • Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node.
  • Secret data can be shared within a namespace.
2.3.6.1.2. Types of Secrets

The value in the type field indicates the structure of the secret’s key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default.

Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data:

  • kubernetes.io/service-account-token. Uses a service account token.
  • kubernetes.io/dockercfg. Uses the .dockercfg file for required Docker credentials.
  • kubernetes.io/dockerconfigjson. Uses the .docker/config.json file for required Docker credentials.
  • kubernetes.io/basic-auth. Use with basic authentication.
  • kubernetes.io/ssh-auth. Use with SSH key authentication.
  • kubernetes.io/tls. Use with TLS certificate authorities.

Specify type=Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret allows for unstructured key:value pairs that can contain arbitrary values.

Note

You can specify other arbitrary types, such as example.com/my-secret-type. These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type.

2.3.6.1.3. Updates to secrets

When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec.

Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command.

The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined.

Note

Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion. In the interim, do not update the data of existing secrets, but create new ones with distinct names.

2.3.6.2. Creating secrets

You must create a secret before creating the pods that depend on that secret.

When creating secrets:

  • Create a secret object with secret data.
  • Update the pod service account to allow the reference to the secret.
  • Create a pod, which consumes the secret as an environment variable or as a file using a secret volume.

Procedure

  • Use the create command to create a secret object from a JSON or YAML file:

    $ oc create -f <filename>

    For example, you can create a secret from your local .docker/config.json file:

    $ oc create secret generic dockerhub \
        --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
        --type=kubernetes.io/dockerconfigjson

    This command generates a JSON specification of the secret named dockerhub and creates the object.

    YAML Opaque Secret Object Definition

    apiVersion: v1
    kind: Secret
    metadata:
      name: mysecret
    type: Opaque 1
    data:
      username: <username>
      password: <password>

    1
    Specifies an opaque secret.

    Docker Configuration JSON File Secret Object Definition

    apiVersion: v1
    kind: Secret
    metadata:
      name: aregistrykey
      namespace: myapps
    type: kubernetes.io/dockerconfigjson 1
    data:
      .dockerconfigjson: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2

    1
    Specifies that the secret is using a docker configuration JSON file.
    2
    The base64-encoded content of the docker configuration JSON file.
2.3.6.3. Using secrets

After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod.

Procedure

  1. Create the pod to reference your secret:

    $ oc create -f <your_yaml_file>.yaml
  2. Get the logs:

    $ oc logs secret-example-pod
  3. Delete the pod:

    $ oc delete pod secret-example-pod

Additional resources

  • Example YAML files with secret data:

    YAML Secret That Will Create Four Files

    apiVersion: v1
    kind: Secret
    metadata:
      name: test-secret
    data:
      username: <username> 1
      password: <password> 2
    stringData:
      hostname: myapp.mydomain.com 3
      secret.properties: |-     4
        property1=valueA
        property2=valueB

    1
    File contains decoded values.
    2
    File contains decoded values.
    3
    File contains the provided string.
    4
    File contains the provided data.

    YAML of a pod populating files in a volume with secret data

    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-example-pod
    spec:
      containers:
        - name: secret-test-container
          image: busybox
          command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ]
          volumeMounts:
              # name must match the volume name below
              - name: secret-volume
                mountPath: /etc/secret-volume
                readOnly: true
      volumes:
        - name: secret-volume
          secret:
            secretName: test-secret
      restartPolicy: Never

    YAML of a pod populating environment variables with secret data

    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-example-pod
    spec:
      containers:
        - name: secret-test-container
          image: busybox
          command: [ "/bin/sh", "-c", "export" ]
          env:
            - name: TEST_SECRET_USERNAME_ENV_VAR
              valueFrom:
                secretKeyRef:
                  name: test-secret
                  key: username
      restartPolicy: Never

    YAML of a Build Config Populating Environment Variables with Secret Data

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: secret-example-bc
    spec:
      strategy:
        sourceStrategy:
          env:
          - name: TEST_SECRET_USERNAME_ENV_VAR
            valueFrom:
              secretKeyRef:
                name: test-secret
                key: username

2.3.6.4. Adding input secrets and config maps

In some scenarios, build operations require credentials or other configuration data to access dependent resources. To make that information available without placing it in source control, you can define input secrets and input config maps.

Procedure

To add an input secret, config maps, or both to an existing BuildConfig object:

  1. Create the ConfigMap object, if it does not exist:

    $ oc create configmap settings-mvn \
        --from-file=settings.xml=<path/to/settings.xml>

    This creates a new config map named settings-mvn, which contains the plain text content of the settings.xml file.

    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: settings-mvn
    data:
      settings.xml: |
        <settings>
        … # Insert maven settings here
        </settings>
  2. Create the Secret object, if it does not exist:

    $ oc create secret generic secret-mvn \
        --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> \
        --type=kubernetes.io/ssh-auth

    This creates a new secret named secret-mvn, which contains the base64 encoded content of the id_rsa private key.

    Tip

    You can alternatively apply the following YAML to create the input secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-mvn
    type: kubernetes.io/ssh-auth
    data:
      ssh-privatekey: |
        # Insert ssh private key, base64 encoded
  3. Add the config map and secret to the source section in the existing BuildConfig object:

    source:
      git:
        uri: https://github.com/wildfly/quickstart.git
      contextDir: helloworld
      configMaps:
        - configMap:
            name: settings-mvn
      secrets:
        - secret:
            name: secret-mvn

To include the secret and config map in a new BuildConfig object, run the following command:

$ oc new-build \
    openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \
    --context-dir helloworld --build-secret "secret-mvn" \
    --build-config-map "settings-mvn"

During the build, the settings.xml and id_rsa files are copied into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile. If you want to specify another directory, add a destinationDir to the definition:

source:
  git:
    uri: https://github.com/wildfly/quickstart.git
  contextDir: helloworld
  configMaps:
    - configMap:
        name: settings-mvn
      destinationDir: ".m2"
  secrets:
    - secret:
        name: secret-mvn
      destinationDir: ".ssh"

You can also specify the destination directory when creating a new BuildConfig object:

$ oc new-build \
    openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \
    --context-dir helloworld --build-secret "secret-mvn:.ssh" \
    --build-config-map "settings-mvn:.m2"

In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory.

2.3.6.5. Source-to-image strategy

When using a Source strategy, all defined input secrets are copied to their respective destinationDir. If you leave destinationDir empty, then the secrets are placed in the working directory of the builder image.

The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur.

Note

Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons.

Input config maps are not truncated after the assemble script completes.

2.3.6.6. Docker strategy

When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile.

If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir, then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build.

Example of a Dockerfile referencing secret and config map data

FROM centos/ruby-22-centos7

USER root
COPY ./secret-dir /secrets
COPY ./config /

# Create a shell script that will output secrets and ConfigMaps when the image is run
RUN echo '#!/bin/sh' > /input_report.sh
RUN echo '(test -f /secrets/secret1 && echo -n "secret1=" && cat /secrets/secret1)' >> /input_report.sh
RUN echo '(test -f /config && echo -n "relative-configMap=" && cat /config)' >> /input_report.sh
RUN chmod 755 /input_report.sh

CMD ["/bin/sh", "-c", "/input_report.sh"]

Important

Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. This removal is part of the Dockerfile itself.

To prevent the contents of input secrets and config maps from appearing in the build output container images and avoid this removal process altogether, use build volumes in your Docker build strategy instead.

2.3.6.7. Custom strategy

When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options.

There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case.

The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the $BUILD environment variable, which includes the full build object.
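
As a sketch (the secret name and mount path are illustrative), secrets for a Custom strategy build can be declared in the strategy section of the BuildConfig and mounted at a path of your choosing:

strategy:
  customStrategy:
    secrets:
      - secretSource:
          name: "secret1"
        mountPath: "/tmp/secret1"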

Important

If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace.

2.3.7. External artifacts

It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using.

For a Source build strategy, you must put appropriate shell commands into the assemble script:

.s2i/bin/assemble File

#!/bin/sh
APP_VERSION=1.0
wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar

.s2i/bin/run File

#!/bin/sh
exec java -jar app.jar

For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction:

Excerpt of Dockerfile

FROM jboss/base-jdk:8

ENV APP_VERSION 1.0
RUN wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar

EXPOSE 8080
CMD [ "java", "-jar", "app.jar" ]

In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig, rather than updating the Dockerfile or assemble script.

You can choose between different methods of defining environment variables:

  • Using the .s2i/environment file (only for a Source build strategy)
  • Setting in BuildConfig
  • Providing explicitly using oc start-build --env (only for builds that are triggered manually)
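
For example, a sketch that parameterizes the artifact location through a build environment variable (the ARTIFACT_URL name is illustrative); the assemble script or Dockerfile can then reference the variable instead of a hard-coded URL:

spec:
  strategy:
    sourceStrategy:
      env:
      - name: "ARTIFACT_URL"
        value: "http://repository.example.com/app/app-1.0.jar"

For a manually triggered build, the value can be overridden on the command line:

$ oc start-build sample-build --env=ARTIFACT_URL=http://repository.example.com/app/app-2.0.jar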

2.3.8. Using docker credentials for private registries

You can supply builds with a .docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication.

You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path.

Note

For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform.

The .docker/config.json file is found in your home directory by default and has the following format:

auths:
  index.docker.io/v1/: 1
    auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2
    email: "user@example.com" 3
  docker.io/my-namespace/my-user/my-image: 4
    auth: "GzhYWRGU6R2fbclabnRgbkSp=""
    email: "user@example.com"
  docker.io/my-namespace: 5
    auth: "GzhYWRGU6R2deesfrRgbkSp=""
    email: "user@example.com"
1
URL of the registry.
2
Encrypted password.
3
Email address for the login.
4
URL and credentials for a specific image in a namespace.
5
URL and credentials for a registry namespace.

You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist.

Kubernetes provides Secret objects, which can be used to store configuration and passwords.

Prerequisites

  • You must have a .docker/config.json file.

Procedure

  1. Create the secret from your local .docker/config.json file:

    $ oc create secret generic dockerhub \
        --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
        --type=kubernetes.io/dockerconfigjson

    This generates a JSON specification of the secret named dockerhub and creates the object.

  2. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the previous example is dockerhub:

    spec:
      output:
        to:
          kind: "DockerImage"
          name: "private.registry.com/org/private-image:latest"
        pushSecret:
          name: "dockerhub"

    You can use the oc set build-secret command to set the push secret on the build configuration:

    $ oc set build-secret --push bc/sample-build dockerhub

    You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build’s output image.

    $ oc secrets link builder dockerhub
  3. Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition:

    strategy:
      sourceStrategy:
        from:
          kind: "DockerImage"
          name: "docker.io/user/private_repository"
        pullSecret:
          name: "dockerhub"

    You can use the oc set build-secret command to set the pull secret on the build configuration:

    $ oc set build-secret --pull bc/sample-build dockerhub
    Note

    This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds.

    You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build’s input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, run:

    $ oc secrets link builder dockerhub
    Note

    You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations.

2.3.9. Build environments

As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted.

You can also manage environment variables defined in the BuildConfig with the oc set env command.
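
For example, a sketch of adding a variable to a build configuration with oc set env (the build configuration name and the variable are illustrative):

$ oc set env bc/sample-build MY_BUILD_VAR=example-value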

Note

Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created.

2.3.9.1. Using build fields as environment variables

You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field whose value you want to obtain.

Note

Jenkins Pipeline strategy does not support valueFrom syntax for environment variables.

Procedure

  • Set the fieldPath environment variable source to the JsonPath of the field whose value you want to obtain:

    env:
      - name: FIELDREF_ENV
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
2.3.9.2. Using secrets as environment variables

You can make key values from secrets available as environment variables using the valueFrom syntax.

Important

This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead.

Procedure

  • To use a secret as an environment variable, set the valueFrom syntax:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: secret-example-bc
    spec:
      strategy:
        sourceStrategy:
          env:
          - name: MYVAL
            valueFrom:
              secretKeyRef:
                key: myval
                name: mysecret

2.3.10. Service serving certificate secrets

Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. These certificates have the same settings as the server certificates generated by the administrator tooling for nodes and masters.

Procedure

To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace.

  • Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret, as shown in the sketch at the end of this procedure.

    Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc.

    The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format.
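
    The following is a minimal sketch of a Service that requests a serving certificate through the service.beta.openshift.io/serving-cert-secret-name annotation; the service name, port, and secret name are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        service.beta.openshift.io/serving-cert-secret-name: my-service-tls
    spec:
      selector:
        app: my-app
      ports:
      - port: 443
        targetPort: 8443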

Note

In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes.

Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod.

The signature algorithm for this feature is x509.SHA256WithRSA. To manually rotate, delete the generated secret. A new certificate is created.
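
For example, you can trigger rotation by deleting the generated secret:

$ oc delete secret <secret_name>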

2.3.11. Secrets restrictions

To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways:

  • To populate environment variables for containers.
  • As files in a volume mounted on one or more of its containers.
  • By kubelet when pulling images for the pod.

Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespace.
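
The following minimal pod sketch illustrates all three ways; the pod name, image, and mount path are illustrative, and the secret names reuse examples from earlier sections:

apiVersion: v1
kind: Pod
metadata:
  name: secret-example-pod
spec:
  containers:
  - name: app
    image: registry.example.com/myorg/app:latest
    env:
    - name: MYVAL                 # secret key exposed as an environment variable
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: myval
    volumeMounts:
    - name: secret-volume         # secret keys exposed as files under /etc/secret
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
  imagePullSecrets:
  - name: dockerhub               # used by the kubelet when pulling the image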

When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret. Therefore, a secret must be created before any pods that depend on it. The most effective way to ensure this is to have the secret injected automatically through the use of a service account.

Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.

Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory.

2.4. Managing build output

Use the following sections for an overview of and instructions for managing build output.

2.4.1. Build output

Builds that use the docker or source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification.

If the output kind is ImageStreamTag, then the image will be pushed to the integrated OpenShift image registry and tagged in the specified image stream. If the output is of type DockerImage, then the name of the output reference will be used as a docker push specification. The specification may contain a registry, or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build.

Output to an ImageStreamTag

spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-image:latest"

Output to a docker Push Specification

spec:
  output:
    to:
      kind: "DockerImage"
      name: "my-registry.mycompany.com:5000/myimages/myimage:tag"

2.4.2. Output image environment variables

docker and source-to-image (S2I) strategy builds set the following environment variables on output images:

Variable | Description

OPENSHIFT_BUILD_NAME

Name of the build

OPENSHIFT_BUILD_NAMESPACE

Namespace of the build

OPENSHIFT_BUILD_SOURCE

The source URL of the build

OPENSHIFT_BUILD_REFERENCE

The Git reference used in the build

OPENSHIFT_BUILD_COMMIT

Source commit used in the build

Additionally, any user-defined environment variable, for example those configured with S2I or docker strategy options, will also be part of the output image environment variable list.

2.4.3. Output image labels

docker and source-to-image (S2I) builds set the following labels on output images:

Label | Description

io.openshift.build.commit.author

Author of the source commit used in the build

io.openshift.build.commit.date

Date of the source commit used in the build

io.openshift.build.commit.id

Hash of the source commit used in the build

io.openshift.build.commit.message

Message of the source commit used in the build

io.openshift.build.commit.ref

Branch or reference specified in the source

io.openshift.build.source-location

Source URL for the build

You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration.

Custom Labels to be Applied to Built Images

spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "my-image:latest"
    imageLabels:
    - name: "vendor"
      value: "MyCompany"
    - name: "authoritative-source-url"
      value: "registry.mycompany.com"

2.5. Using build strategies

The following sections define the primary supported build strategies, and how to use them.

2.5.1. Docker build

OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation.

Tip

If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation.

2.5.1.1. Replacing Dockerfile FROM image

You can replace the FROM instruction of the Dockerfile with the from of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced.

Procedure

To replace the FROM instruction of the Dockerfile with the from of the BuildConfig object, add the following settings to the strategy definition:

strategy:
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"
2.5.1.2. Using Dockerfile path

By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field.

The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile, or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile.

Procedure

To use the dockerfilePath field for the build to use a different path to locate your Dockerfile, set:

strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
2.5.1.3. Using docker environment variables

To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration.

The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that they can be referenced later within the Dockerfile.

Procedure

The variables are defined during the build and persist in the output image, so they are also present in any container that runs that image.

For example, defining a custom HTTP proxy to be used during build and runtime:

dockerStrategy:
...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"

You can also manage environment variables defined in the build configuration with the oc set env command.

2.5.1.4. Adding docker build arguments

You can set docker build arguments using the buildArgs array. The build arguments are passed to docker when a build is started.

Tip

See Understand how ARG and FROM interact in the Dockerfile reference documentation.

Procedure

To set docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example:

dockerStrategy:
...
  buildArgs:
    - name: "foo"
      value: "bar"
Note

Only the name and value fields are supported. Any settings on the valueFrom field are ignored.

2.5.1.5. Squashing layers with docker builds

Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image.

Procedure

  • Set the imageOptimizationPolicy to SkipLayers:

    strategy:
      dockerStrategy:
        imageOptimizationPolicy: SkipLayers
2.5.1.6. Using build volumes

You can mount build volumes to give running builds access to information that you don’t want to persist in the output container image.

Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.

The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.

Procedure

  • In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example:

    spec:
      dockerStrategy:
        volumes:
          - name: secret-mvn 1
            mounts:
            - destinationPath: /opt/app-root/src/.ssh 2
            source:
              type: Secret 3
              secret:
                secretName: my-secret 4
          - name: settings-mvn 5
            mounts:
            - destinationPath: /opt/app-root/src/.m2  6
            source:
              type: ConfigMap 7
              configMap:
                name: my-config 8
          - name: my-csi-volume 9
            mounts:
            - destinationPath: /opt/app-root/src/some_path  10
            source:
              type: CSI 11
              csi:
                driver: csi.sharedresource.openshift.io 12
                readOnly: true 13
                volumeAttributes: 14
                  attribute: value
    1 5 9
    Required. A unique name.
    2 6 10
    Required. The absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images.
    3 7 11
    Required. The type of source, ConfigMap, Secret, or CSI.
    4 8
    Required. The name of the source.
    12
    Required. The driver that provides the ephemeral CSI volume.
    13
    Optional. If true, this instructs the driver to provide a read-only volume.
    14
    Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver’s documentation for supported attribute keys and values.
Note

The Shared Resource CSI Driver is supported as a Technology Preview feature.

2.5.2. Source-to-image build

Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on.

2.5.2.1. Performing source-to-image incremental builds

Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images.

Procedure

  • To create an incremental build, create a build configuration with the following modification to the strategy definition:

    strategy:
      sourceStrategy:
        from:
          kind: "ImageStreamTag"
          name: "incremental-image:latest" 1
        incremental: true 2
    1
    Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior.
    2
    This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script.

Additional resources

  • See S2I Requirements for information on how to create a builder image supporting incremental builds.
2.5.2.2. Overriding source-to-image builder image scripts

You can override the assemble, run, and save-artifacts source-to-image (S2I) scripts provided by the builder image.

Procedure

To override the assemble, run, and save-artifacts S2I scripts provided by the builder image, either:

  • Provide an assemble, run, or save-artifacts script in the .s2i/bin directory of your application source repository.
  • Provide a URL of a directory containing the scripts as part of the strategy definition. For example:

    strategy:
      sourceStrategy:
        from:
          kind: "ImageStreamTag"
          name: "builder-image:latest"
        scripts: "http://somehost.com/scripts_directory" 1
    1
    This path will have run, assemble, and save-artifacts appended to it. If any or all scripts are found they will be used in place of the same named scripts provided in the image.
Note

Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository.

2.5.2.3. Source-to-image environment variables

There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. Variables provided will be present during the build process and in the output image.

2.5.2.3.1. Using source-to-image environment files

Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image.

If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables.

Procedure

For example, to disable assets compilation for your Rails application during the build:

  • Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file.

In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production:

  • Add RAILS_ENV=development to the .s2i/environment file.

The complete list of supported environment variables is available in the using images section for each image.
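
For example, a .s2i/environment file that combines the two settings shown above contains:

DISABLE_ASSET_COMPILATION=true
RAILS_ENV=development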

2.5.2.3.2. Using source-to-image build configuration environment

You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code.

Procedure

  • For example, to disable assets compilation for your Rails application:

    sourceStrategy:
    ...
      env:
        - name: "DISABLE_ASSET_COMPILATION"
          value: "true"

Additional resources

  • The build environment section provides more advanced instructions.
  • You can also manage environment variables defined in the build configuration with the oc set env command.
2.5.2.4. Ignoring source-to-image source files

Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script.
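
For example, a minimal .s2iignore file with illustrative patterns that excludes a docs directory and Markdown files from the assemble step might contain:

docs/*
*.md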

2.5.2.5. Creating images from source code with source-to-image

Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.

The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance: the build process and S2I scripts.

2.5.2.5.1. Understanding the source-to-image build process

The build process consists of the following three fundamental elements, which are combined into a final container image:

  • Sources
  • Source-to-image (S2I) scripts
  • Builder image

S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah.

2.5.2.5.2. How to write source-to-image scripts

You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble/run/save-artifacts scripts. All of these locations are checked on each build in the following order:

  1. A script specified in the build configuration.
  2. A script found in the application source .s2i/bin directory.
  3. A script found at the default image URL with the io.openshift.s2i.scripts-url label.

Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms:

  • image:///path_to_scripts_dir: absolute path inside the image to a directory where the S2I scripts are located.
  • file:///path_to_scripts_dir: relative or absolute path to a directory on the host where the S2I scripts are located.
  • http(s)://path_to_scripts_dir: URL to a directory where the S2I scripts are located.
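For example, a builder image can advertise the location of its scripts by setting the label in its Dockerfile; the directory path shown here is illustrative:

LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"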
Table 2.1. S2I scripts
Script | Description

assemble

The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is:

  1. Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well.
  2. Place the application source in the desired location.
  3. Build the application artifacts.
  4. Install the artifacts into locations appropriate for them to run.

run

The run script executes your application. This script is required.

save-artifacts

The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example:

  • For Ruby, gems installed by Bundler.
  • For Java, .m2 contents.

These dependencies are gathered into a tar file and streamed to the standard output.

usage

The usage script allows you to inform the user how to properly use your image. This script is optional.

test/run

The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is:

  1. Build the image.
  2. Run the image to verify the usage script.
  3. Run s2i build to verify the assemble script.
  4. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality.
  5. Run the image to verify the test application is working.
Note

The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository.

Example S2I scripts

The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory.

assemble script:

#!/bin/bash

# restore build artifacts
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
    mv /tmp/s2i/artifacts/* $HOME/.
fi

# move the application source
mv /tmp/s2i/src $HOME/src

# build application artifacts
pushd ${HOME}
make all

# install the artifacts
make install
popd

run script:

#!/bin/bash

# run the application
/opt/application/run.sh

save-artifacts script:

#!/bin/bash

pushd ${HOME}
if [ -d deps ]; then
    # all deps contents to tar stream
    tar cf - deps
fi
popd

usage script:

#!/bin/bash

# inform the user how to use the image
cat <<EOF
This is an S2I sample builder image. To use it, install
https://github.com/openshift/source-to-image
EOF

2.5.2.6. Using build volumes

You can mount build volumes to give running builds access to information that you don’t want to persist in the output container image.

Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.

The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.

Procedure

  • In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example:

    spec:
      sourceStrategy:
        volumes:
          - name: secret-mvn 1
            mounts:
            - destinationPath: /opt/app-root/src/.ssh 2
            source:
              type: Secret 3
              secret:
                secretName: my-secret 4
          - name: settings-mvn 5
            mounts:
            - destinationPath: /opt/app-root/src/.m2 6
            source:
              type: ConfigMap 7
              configMap:
                name: my-config 8
          - name: my-csi-volume 9
            mounts:
            - destinationPath: /opt/app-root/src/some_path  10
            source:
              type: CSI 11
              csi:
                driver: csi.sharedresource.openshift.io 12
                readOnly: true 13
                volumeAttributes: 14
                  attribute: value
1 5 9
Required. A unique name.
2 6 10
Required. The absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images.
3 7 11
Required. The type of source, ConfigMap, Secret, or CSI.
4 8
Required. The name of the source.
12
Required. The driver that provides the ephemeral CSI volume.
13
Optional. If true, this instructs the driver to provide a read-only volume.
14
Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver’s documentation for supported attribute keys and values.
Note

The Shared Resource CSI Driver is supported as a Technology Preview feature.

2.5.3. Custom build

The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process.

A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images.

Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds.

2.5.3.1. Using FROM image for custom builds

You can use the customStrategy.from section to indicate the image to use for the custom build.

Procedure

  • Set the customStrategy.from section:

    strategy:
      customStrategy:
        from:
          kind: "DockerImage"
          name: "openshift/sti-image-builder"
2.5.3.2. Using secrets in custom builds

In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod.

Procedure

  • To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file:

    strategy:
      customStrategy:
        secrets:
          - secretSource: 1
              name: "secret1"
            mountPath: "/tmp/secret1" 2
          - secretSource:
              name: "secret2"
            mountPath: "/tmp/secret2"
    1
    secretSource is a reference to a secret in the same namespace as the build.
    2
    mountPath is the path inside the custom builder where the secret should be mounted.
2.5.3.3. Using environment variables for custom builds

To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration.

The environment variables defined there are passed to the pod that runs the custom build.

Procedure

  1. Define a custom HTTP proxy to be used during build:

    customStrategy:
    ...
      env:
        - name: "HTTP_PROXY"
          value: "http://myproxy.net:5187/"
  2. To manage environment variables defined in the build configuration, enter the following command:

    $ oc set env <enter_variables>
2.5.3.4. Using custom builder images

OpenShift Container Platform’s custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy.

A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images.

Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests.

2.5.3.4.1. Custom builder image

Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build:

Table 2.2. Custom Builder Environment Variables
Variable Name | Description

BUILD

The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration.

SOURCE_REPOSITORY

The URL of a Git repository with source to be built.

SOURCE_URI

Uses the same value as SOURCE_REPOSITORY. Either can be used.

SOURCE_CONTEXT_DIR

Specifies the subdirectory of the Git repository to be used when building. Only present if defined.

SOURCE_REF

The Git reference to be built.

ORIGIN_VERSION

The version of the OpenShift Container Platform master that created this build object.

OUTPUT_REGISTRY

The container image registry to push the image to.

OUTPUT_IMAGE

The container image tag name for the image being built.

PUSH_DOCKERCFG_PATH

The path to the container registry credentials for running a podman push operation.

2.5.3.4.2. Custom builder workflow

Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following steps necessary for running a build inside of OpenShift Container Platform:

  1. The Build object definition contains all the necessary information about input parameters for the build.
  2. Run the build process.
  3. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables.

2.5.4. Pipeline build

Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a Source Control Management system.

The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type.

Pipeline workflows are defined in a jenkinsfile, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.

2.5.4.1. Understanding OpenShift Container Platform pipelines
Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a Source Control Management system.

Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles, and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario.

OpenShift Container Platform Jenkins Sync Plugin

The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:

  • Dynamic job and run creation in Jenkins.
  • Dynamic creation of agent pod templates from image streams, image stream tags, or config maps.
  • Injection of environment variables.
  • Pipeline visualization in the OpenShift Container Platform web console.
  • Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin.
  • Synchronization of secrets into Jenkins credential entries.

OpenShift Container Platform Jenkins Client Plugin

The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc, which must be available on the nodes executing the script.

The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image.

For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options:

  • An inline jenkinsfile field within your build configuration.
  • A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir.
Note

The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile.

2.5.4.2. Providing the Jenkins file for pipeline builds
Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a Source Control Management system.

The jenkinsfile uses the standard groovy language syntax to allow fine grained control over the configuration, build, and deployment of your application.

You can supply the jenkinsfile in one of the following ways:

  • A file located within your source code repository.
  • Embedded as part of your build configuration using the jenkinsfile field.

When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations:

  • A file named jenkinsfile at the root of your repository.
  • A file named jenkinsfile at the root of the source contextDir of your repository.
  • A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository.

The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL.

Procedure

To provide the Jenkins file, you can either:

  • Embed the Jenkins file in the build configuration.
  • Include in the build configuration a reference to the Git repository that contains the Jenkins file.

Embedded Definition

kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "sample-pipeline"
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('agent') {
          stage 'build'
          openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'frontend')
        }

Reference to Git Repository

kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "sample-pipeline"
spec:
  source:
    git:
      uri: "https://github.com/openshift/ruby-hello-world"
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: some/repo/dir/filename 1

1
The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile.
2.5.4.3. Using environment variables for pipeline builds
Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a Source Control Management system.

To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration.

Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration.

Procedure

  • To define environment variables to be used during build, edit the YAML file:

    jenkinsPipelineStrategy:
    ...
      env:
        - name: "FOO"
          value: "BAR"

You can also manage environment variables defined in the build configuration with the oc set env command.

2.5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters

When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameter definitions, where the default values for the Jenkins job parameter definitions are the current values of the associated environment variables.

After the Jenkins job’s initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs.

How you start builds for the Jenkins job dictates how the parameters are set.

  • If you start with oc start-build, the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence.
  • If you start with oc start-build -e, the values for the environment variables specified in the -e option take precedence.

    • If you specify an environment variable that is not listed in the build configuration, it is added as a Jenkins job parameter definition.
    • Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e takes precedence.
  • If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job.
Note

It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing.

2.5.4.4. Pipeline build tutorial
Important

The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton.

Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or storing it in a Source Control Management system.

This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template.

Procedure

  1. Create the Jenkins master:

      $ oc project <project_name>

    Select the project that you want to use or create a new project with oc new-project <project_name>.

      $ oc new-app jenkins-ephemeral 1

    1
    If you want to use persistent storage, use jenkins-persistent instead.

  2. Create a file named nodejs-sample-pipeline.yaml with the following content:

    Note

    This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application.

    kind: "BuildConfig"
    apiVersion: "v1"
    metadata:
      name: "nodejs-sample-pipeline"
    spec:
      strategy:
        jenkinsPipelineStrategy:
          jenkinsfile: <pipeline content from below>
        type: JenkinsPipeline
  3. After you create a BuildConfig object with a jenkinsPipelineStrategy, tell the pipeline what to do by using an inline jenkinsfile:

    Note

    This example does not set up a Git repository for the application.

    The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method.

    def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1
    def templateName = 'nodejs-mongodb-example' 2
    pipeline {
      agent {
        node {
          label 'nodejs' 3
        }
      }
      options {
        timeout(time: 20, unit: 'MINUTES') 4
      }
      stages {
        stage('preamble') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            echo "Using project: ${openshift.project()}"
                        }
                    }
                }
            }
        }
        stage('cleanup') {
          steps {
            script {
                openshift.withCluster() {
                    openshift.withProject() {
                      openshift.selector("all", [ template : templateName ]).delete() 5
                      if (openshift.selector("secrets", templateName).exists()) { 6
                        openshift.selector("secrets", templateName).delete()
                      }
                    }
                }
            }
          }
        }
        stage('create') {
          steps {
            script {
                openshift.withCluster() {
                    openshift.withProject() {
                      openshift.newApp(templatePath) 7
                    }
                }
            }
          }
        }
        stage('build') {
          steps {
            script {
                openshift.withCluster() {
                    openshift.withProject() {
                      def builds = openshift.selector("bc", templateName).related('builds')
                      timeout(5) { 8
                        builds.untilEach(1) {
                          return (it.object().status.phase == "Complete")
                        }
                      }
                    }
                }
            }
          }
        }
        stage('deploy') {
          steps {
            script {
                openshift.withCluster() {
                    openshift.withProject() {
                      def rm = openshift.selector("dc", templateName).rollout()
                      timeout(5) { 9
                        openshift.selector("dc", templateName).related('pods').untilEach(1) {
                          return (it.object().status.phase == "Running")
                        }
                      }
                    }
                }
            }
          }
        }
        stage('tag') {
          steps {
            script {
                openshift.withCluster() {
                    openshift.withProject() {
                      openshift.tag("${templateName}:latest", "${templateName}-staging:latest") 10
                    }
                }
            }
          }
        }
      }
    }
    1
    Path of the template to use.
    2
    Name of the template that will be created.
    3
    Spin up a node.js agent pod on which to run this build.
    4
    Set a timeout of 20 minutes for this pipeline.
    5
    Delete everything with this template label.
    6
    Delete any secrets with this template label.
    7
    Create a new application from the templatePath.
    8
    Wait up to five minutes for the build to complete.
    9
    Wait up to five minutes for the deployment to complete.
    10
    If everything else succeeded, tag the ${templateName}:latest image as ${templateName}-staging:latest. A pipeline build configuration for the staging environment can watch for the ${templateName}-staging:latest image to change and then deploy it to the staging environment.
    Note

    The previous example was written using the declarative pipeline style, but the older scripted pipeline style is also supported.

  4. Create the Pipeline BuildConfig in your OpenShift Container Platform cluster:

    $ oc create -f nodejs-sample-pipeline.yaml
    1. If you do not want to create your own file, you can use the sample from the Origin repository by running:

      $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml
  5. Start the Pipeline:

    $ oc start-build nodejs-sample-pipeline
    Note

    Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds → Pipeline section and clicking Start Pipeline, or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now.

    Once the pipeline is started, you should see the following actions performed within your project:

    • A job instance is created on the Jenkins server.
    • An agent pod is launched, if your pipeline requires one.
    • The pipeline runs on the agent pod, or the master if no agent is required.

      • Any previously created resources with the template=nodejs-mongodb-example label will be deleted.
      • A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template.
      • A build will be started using the nodejs-mongodb-example BuildConfig.

        • The pipeline will wait until the build has completed to trigger the next stage.
      • A deployment will be started using the nodejs-mongodb-example deployment configuration.

        • The pipeline will wait until the deployment has completed to trigger the next stage.
      • If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest.
    • The agent pod is deleted, if one was required for the pipeline.

      Note

      The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds → Pipelines.

2.5.5. Adding secrets with web console

You can add a secret to your build configuration so that it can access a private repository.

Procedure

To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console:

  1. Create a new OpenShift Container Platform project.
  2. Create a secret that contains credentials for accessing a private source code repository.
  3. Create a build configuration.
  4. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret.
  5. Click Save.

2.5.6. Enabling pulling and pushing

You can enable pulling from a private registry by setting the pull secret and enable pushing to a private registry by setting the push secret in the build configuration, as shown in the sketch at the end of this section.

Procedure

To enable pulling from a private registry:

  • Set the pull secret in the build configuration.

To enable pushing:

  • Set the push secret in the build configuration.
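
A minimal sketch that combines both settings, reusing the illustrative dockerhub secret from the earlier examples:

spec:
  strategy:
    sourceStrategy:
      from:
        kind: "DockerImage"
        name: "docker.io/user/private_repository"
      pullSecret:
        name: "dockerhub"
  output:
    to:
      kind: "DockerImage"
      name: "private.registry.com/org/private-image:latest"
    pushSecret:
      name: "dockerhub"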

2.6. Custom image builds with Buildah

With OpenShift Container Platform 4.10, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image.

If you require this capability in order to build and push images, add the Buildah tool to your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah.

Note

Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster.

2.6.1. Prerequisites

2.6.2. Creating custom build artifacts

You must create the image you want to use as your custom build image.

Procedure

  1. Starting with an empty directory, create a file named Dockerfile with the following content:

    FROM registry.redhat.io/rhel8/buildah
    # In this example, `/tmp/input` contains the inputs that build when this
    # custom builder image is run. Normally the custom builder image fetches
    # this content from some location at build time, by using git clone as an example.
    ADD dockerfile.sample /tmp/input/Dockerfile
    ADD build.sh /usr/bin
    RUN chmod a+x /usr/bin/build.sh
    # /usr/bin/build.sh contains the actual custom build logic that will be run when
    # this custom builder image is run.
    ENTRYPOINT ["/usr/bin/build.sh"]
  2. In the same directory, create a file named dockerfile.sample. This file is included in the custom build image and defines the image that is produced by the custom build:

    FROM registry.access.redhat.com/ubi8/ubi
    RUN touch /tmp/build
  3. In the same directory, create a file named build.sh. This file contains the logic that is run when the custom build runs:

    #!/bin/sh
    # Note that in this case the build inputs are part of the custom builder image, but normally this
    # is retrieved from an external source.
    cd /tmp/input
    # OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom
    # build framework
    TAG="${OUTPUT_REGISTRY}/${OUTPUT_IMAGE}"
    
    
    # performs the build of the new image defined by dockerfile.sample
    buildah --storage-driver vfs bud --isolation chroot -t ${TAG} .
    
    
    # buildah requires a slight modification to the push secret provided by the service
    # account to use it for pushing the image
    cp /var/run/secrets/openshift.io/push/.dockercfg /tmp
    (echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg
    
    
    # push the new image to the target for the build
    buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg ${TAG}

2.6.3. Build custom builder image

You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy.

Prerequisites

  • Define all the inputs that will go into creating your new custom builder image.

Procedure

  1. Define a BuildConfig object that will build your custom builder image:

    $ oc new-build --binary --strategy=docker --name custom-builder-image
  2. From the directory in which you created your custom build image, run the build:

    $ oc start-build custom-builder-image --from-dir . -F

    After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest.

2.6.4. Use custom builder image

You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic.

Prerequisites

  • Define all the required inputs for the new custom builder image.
  • Build your custom builder image.

Procedure

  1. Create a file named buildconfig.yaml. This file defines the BuildConfig object that is created in your project and executed:

    kind: BuildConfig
    apiVersion: build.openshift.io/v1
    metadata:
      name: sample-custom-build
      labels:
        name: sample-custom-build
      annotations:
        template.alpha.openshift.io/wait-for-ready: 'true'
    spec:
      strategy:
        type: Custom
        customStrategy:
          forcePull: true
          from:
            kind: ImageStreamTag
            name: custom-builder-image:latest
            namespace: <yourproject> 1
      output:
        to:
          kind: ImageStreamTag
          name: sample-custom:latest
    1
    Specify your project name.
  2. Create the BuildConfig:

    $ oc create -f buildconfig.yaml
  3. Create a file named imagestream.yaml. This file defines the image stream to which the build will push the image:

    kind: ImageStream
    apiVersion: image.openshift.io/v1
    metadata:
      name: sample-custom
    spec: {}
  4. Create the imagestream:

    $ oc create -f imagestream.yaml
  5. Run your custom build:

    $ oc start-build sample-custom-build -F

    When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream.

2.7. Performing and configuring basic builds

The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs, deleting BuildConfigs, viewing build details, and accessing build logs.

2.7.1. Starting a build

You can manually start a new build from an existing build configuration in your current project.

Procedure

To manually start a build, enter the following command:

$ oc start-build <buildconfig_name>
2.7.1.1. Re-running a build

You can manually re-run a build using the --from-build flag.

Procedure

  • To manually re-run a build, enter the following command:

    $ oc start-build --from-build=<build_name>
2.7.1.2. Streaming build logs

You can specify the --follow flag to stream the build’s logs in stdout.

Procedure

  • To manually stream a build’s logs in stdout, enter the following command:

    $ oc start-build <buildconfig_name> --follow
2.7.1.3. Setting environment variables when starting a build

You can specify the --env flag to set any desired environment variable for the build.

Procedure

  • To specify a desired environment variable, enter the following command:

    $ oc start-build <buildconfig_name> --env=<key>=<value>
2.7.1.4. Starting a build with source

Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command:

Option | Description

--from-dir=<directory>

Specifies a directory that will be archived and used as a binary input for the build.

--from-file=<file>

Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided.

--from-repo=<local_source_repo>

Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build.

When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings.

Note

Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration.

Procedure

  • Start a build from a source using the following command to send the contents of a local Git repository as an archive from the tag v2:

    $ oc start-build hello-world --from-repo=../hello-world --commit=v2
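
    Similarly, a minimal sketch of starting a build from a local directory of binary artifacts; the directory path is illustrative:

    $ oc start-build hello-world --from-dir=./binary-artifacts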

2.7.2. Canceling a build

You can cancel a build using the web console, or with the following CLI command.

Procedure

  • To manually cancel a build, enter the following command:

    $ oc cancel-build <build_name>
2.7.2.1. Canceling multiple builds

You can cancel multiple builds with the following CLI command.

Procedure

  • To manually cancel multiple builds, enter the following command:

    $ oc cancel-build <build1_name> <build2_name> <build3_name>
2.7.2.2. Canceling all builds

You can cancel all builds from the build configuration with the following CLI command.

Procedure

  • To cancel all builds, enter the following command:

    $ oc cancel-build bc/<buildconfig_name>
2.7.2.3. Canceling all builds in a given state

You can cancel all builds in a given state, such as new or pending, while ignoring the builds in other states.

Procedure

  • To cancel all builds in a given state, enter the following command:

    $ oc cancel-build bc/<buildconfig_name> --state=<state>

2.7.3. Editing a BuildConfig

To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective.

You can use either of the following views to edit a BuildConfig:

  • The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes.
  • The YAML view enables you to edit your BuildConfig with full control over the operations.

You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa.

Procedure

  1. In the Builds view of the Developer perspective, click the kebab menu to see the Edit BuildConfig option.
  2. Click Edit BuildConfig to see the Form view option.
  3. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated.

    • Optional: Click Show Advanced Git Options to add details such as:

      • Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application.
      • Context Dir to specify the subdirectory that contains code you want to use to build the application.
      • Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
  4. In the Build from section, select the option that you would like to build from. You can use the following options:

    • Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to.
    • Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to.
    • Docker image: The Docker image is referenced through a Docker image repository. You must also enter the project, image stream, and tag to indicate where you would like to push the built image.
  5. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value, or Add from ConfigMap and Secret.
  6. Optional: To further customize your application, use the following advanced options:

    Trigger
    Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret.
    Secrets
    Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point.
    Policy
    Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run.
    Hooks
    Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type, Command, and Arguments to append to the command.
  7. Click Save to save the BuildConfig.
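
Alternatively, if you prefer the CLI to the web console, you can open the BuildConfig in your default editor and change the same fields in its YAML; the <buildconfig_name> placeholder follows the convention used elsewhere in this section:

$ oc edit bc/<buildconfig_name>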

2.7.4. Deleting a BuildConfig

You can delete a BuildConfig using the following command.

Procedure

  • To delete a BuildConfig, enter the following command:

    $ oc delete bc <BuildConfigName>

    This also deletes all builds that were instantiated from this BuildConfig.

  • To delete a BuildConfig and keep the builds instantiated from the BuildConfig, specify the --cascade=false flag when you enter the following command:

    $ oc delete --cascade=false bc <BuildConfigName>

2.7.5. Viewing build details

You can view build details with the web console or by using the oc describe CLI command.

This displays information including:

  • The build source.
  • The build strategy.
  • The output destination.
  • Digest of the image in the destination registry.
  • How the build was created.

If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message.

Procedure

  • To view build details, enter the following command:

    $ oc describe build <build_name>

2.7.6. Accessing build logs

You can access build logs using the web console or the CLI.

Procedure

  • To stream the logs of a build directly, enter the following command:

    $ oc logs -f build/<build_name>
2.7.6.1. Accessing BuildConfig logs

You can access BuildConfig logs using the web console or the CLI.

Procedure

  • To stream the logs of the latest build for a BuildConfig, enter the following command:

    $ oc logs -f bc/<buildconfig_name>
2.7.6.2. Accessing BuildConfig logs for a given version build

You can access logs for a given version build for a BuildConfig using the web console or the CLI.

Procedure

  • To stream the logs for a given version build for a BuildConfig, enter the following command:

    $ oc logs --version=<number> bc/<buildconfig_name>
2.7.6.3. Enabling log verbosity

You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig.

Note

An administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL. This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig. You can specify a higher-priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build.
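
For example, to raise the log level for a single run of a non-binary build from the command line, you can pass the flag directly; the sample-build name is illustrative:

$ oc start-build sample-build --build-loglevel=5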

Available log levels for source builds are as follows:

Level 0

Produces output from containers running the assemble script and all encountered errors. This is the default.

Level 1

Produces basic information about the executed process.

Level 2

Produces very detailed information about the executed process.

Level 3

Produces very detailed information about the executed process, and a listing of the archive contents.

Level 4

Currently produces the same information as level 3.

Level 5

Produces everything mentioned on previous levels and additionally provides docker push messages.

Procedure

  • To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig:

    sourceStrategy:
    ...
      env:
        - name: "BUILD_LOGLEVEL"
          value: "2" 1
    1
    Adjust this value to the desired log level.

2.8. Triggering and modifying builds

The following sections outline how to trigger builds and modify builds using build hooks.

2.8.1. Build triggers

When defining a BuildConfig, you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available:

  • Webhook
  • Image change
  • Configuration change
2.8.1.1. Webhook triggers

Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks.

Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored.

When a push event is processed, the OpenShift Container Platform control plane host confirms whether the branch reference inside the event matches the branch reference in the corresponding BuildConfig. If they match, the build checks out the exact commit reference noted in the webhook event. If they do not match, no build is triggered.

Note

oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but you must add any other needed webhook triggers manually. You can add triggers manually, as described in Setting triggers manually.

For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation.

For example, here is a GitHub webhook definition with a reference to a secret named mysecret:

type: "GitHub"
github:
  secretReference:
    name: "mysecret"

The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object.

- kind: Secret
  apiVersion: v1
  metadata:
    name: mysecret
    creationTimestamp:
  data:
    WebHookSecretKey: c2VjcmV0dmFsdWUx
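
If you prefer to create this secret from the command line rather than applying YAML, the following command produces an equivalent Secret object; the secretvalue1 value, which matches the base64-encoded value shown above, is only an example:

$ oc create secret generic mysecret --from-literal=WebHookSecretKey=secretvalue1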
2.8.1.1.1. Using GitHub webhooks

GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook.

Example GitHub webhook definition:

type: "GitHub"
github:
  secretReference:
    name: "mysecret"
Note

The secret used in the webhook trigger configuration is not the same as the secret field you encounter when configuring the webhook in the GitHub UI. The former makes the webhook URL unique and hard to predict; the latter is an optional string field used to create an HMAC hex digest of the body, which is sent as an X-Hub-Signature header.

The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows:

Example output

https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github

Prerequisites

  • Create a BuildConfig from a GitHub repository.

Procedure

  1. To configure a GitHub Webhook:

    1. After creating a BuildConfig from a GitHub repository, run:

      $ oc describe bc/<name-of-your-BuildConfig>

      This generates a GitHub webhook URL that looks like the following example:

      Example output

      https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github

    2. Copy this URL; you paste it into GitHub in the following steps.
    3. In your GitHub repository, select Add Webhook from Settings → Webhooks.
    4. Paste the URL output into the Payload URL field.
    5. Change the Content Type from GitHub’s default application/x-www-form-urlencoded to application/json.
    6. Click Add webhook.

      You should see a message from GitHub stating that your webhook was successfully configured.

      Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts.

      Note

      Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well.

  2. Given a file containing a valid JSON payload, such as payload.json, you can manually trigger the webhook with curl:

    $ curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github

    The -k argument is only necessary if your API server does not have a properly signed certificate.

Note

The build is triggered only if the ref value from the GitHub webhook event matches the ref value specified in the source.git field of the BuildConfig resource.

Additional resources

2.8.1.1.2. Using GitLab webhooks

GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig:

type: "GitLab"
gitlab:
  secretReference:
    name: "mysecret"

The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows:

Example output

https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab

Procedure

  1. To configure a GitLab Webhook:

    1. Describe the BuildConfig to get the webhook URL:

      $ oc describe bc <name>
    2. Copy the webhook URL, replacing <secret> with your secret value.
    3. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings.
  2. Given a file containing a valid JSON payload, such as payload.json, you can manually trigger the webhook with curl:

    $ curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab

    The -k argument is only necessary if your API server does not have a properly signed certificate.

2.8.1.1.3. Using Bitbucket webhooks

Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. Similar to the previous triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig:

type: "Bitbucket"
bitbucket:
  secretReference:
    name: "mysecret"

The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows:

Example output

https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket

Procedure

  1. To configure a Bitbucket Webhook:

    1. Describe the BuildConfig to get the webhook URL:

      $ oc describe bc <name>
    2. Copy the webhook URL, replacing <secret> with your secret value.
    3. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings.
  2. Given a file containing a valid JSON payload, such as payload.json, you can manually trigger the webhook with curl:

    $ curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket

    The -k argument is only necessary if your API server does not have a properly signed certificate.

2.8.1.1.4. Using generic webhooks

Generic webhooks are invoked from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig:

type: "Generic"
generic:
  secretReference:
    name: "mysecret"
  allowEnv: true 1
1
Set to true to allow a generic webhook to pass in environment variables.

Procedure

  1. To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build:

    Example output

    https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic

    The caller must invoke the webhook as a POST operation.

  2. To invoke the webhook manually you can use curl:

    $ curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic

    The HTTP verb must be set to POST. The insecure -k flag is specified to ignore certificate validation; it is not necessary if your cluster has properly signed certificates.

    The endpoint can accept an optional payload with the following format:

    git:
      uri: "<url to git repository>"
      ref: "<optional git reference>"
      commit: "<commit hash identifying a specific git commit>"
      author:
        name: "<author name>"
        email: "<author e-mail>"
      committer:
        name: "<committer name>"
        email: "<committer e-mail>"
      message: "<commit message>"
    env: 1
       - name: "<variable name>"
         value: "<variable value>"
    1
    Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior.
  3. To pass this payload using curl, define it in a file named payload_file.yaml and run:

    $ curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic

    The arguments are the same as the previous example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request.

Note

OpenShift Container Platform permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK response.

2.8.1.1.5. Displaying webhook URLs

You can use the following command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is defined for that build configuration.

Procedure

  • To display any webhook URLs associated with a BuildConfig, run:
$ oc describe bc <name>
2.8.1.2. Using image change triggers

As a developer, you can configure your build to run automatically every time a base image changes.

You can use image change triggers to automatically invoke your build when a new version of an upstream image is available. For example, if a build is based on a RHEL image, you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image.

Note

Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries.

Procedure

  1. Define an ImageStream that points to the upstream image you want to use as a trigger:

    kind: "ImageStream"
    apiVersion: "v1"
    metadata:
      name: "ruby-20-centos7"

    This defines the image stream that is tied to a container image repository located at <system-registry>/<namespace>/ruby-20-centos7. The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform.

  2. If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream:

    strategy:
      sourceStrategy:
        from:
          kind: "ImageStreamTag"
          name: "ruby-20-centos7:latest"

    In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace.

  3. Define a build with one or more triggers that point to ImageStreams:

    triggers:
    - type: "ImageChange" 1
      imageChange: {}
    - type: "ImageChange" 2
      imageChange:
        from:
          kind: "ImageStreamTag"
          name: "custom-image:latest"
    1
    An image change trigger that monitors the ImageStream and Tag as defined by the build strategy’s from field. The imageChange object here must be empty.
    2
    An image change trigger that monitors an arbitrary image stream. The imageChange part, in this case, must include a from field that references the ImageStreamTag to monitor.

When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build.

For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference.

Since this example has an image change trigger for the strategy, the resulting build is:

strategy:
  sourceStrategy:
    from:
      kind: "DockerImage"
      name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>"

This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs.

You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered.

type: "ImageChange"
imageChange:
  from:
    kind: "ImageStreamTag"
    name: "custom-image:latest"
  paused: true

In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist, then it is updated with the immutable image reference.

If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy. This ensures that builds are performed using consistent image tags for ease of reproduction.

Additional resources

2.8.1.3. Identifying the image change trigger of a build

As a developer, if you have image change triggers, you can identify which image change initiated the last build. This can be useful for debugging or troubleshooting builds.

Example BuildConfig

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: bc-ict-example
  namespace: bc-ict-example-namespace
spec:

# ...

  triggers:
  - imageChange:
      from:
        kind: ImageStreamTag
        name: input:latest
        namespace: bc-ict-example-namespace
  - imageChange:
      from:
        kind: ImageStreamTag
        name: input2:latest
        namespace: bc-ict-example-namespace
    type: ImageChange
status:
  imageChangeTriggers:
  - from:
      name: input:latest
      namespace: bc-ict-example-namespace
    lastTriggerTime: "2021-06-30T13:47:53Z"
    lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69
  - from:
      name: input2:latest
      namespace: bc-ict-example-namespace
    lastTriggeredImageID:  image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69

  lastVersion: 1

Note

This example omits elements that are not related to image change triggers.

Prerequisites

  • You have configured multiple image change triggers. These triggers have triggered one or more builds.

Procedure

  1. In buildConfig.status.imageChangeTriggers, identify the ImageChangeTriggerStatus element that has the most recent lastTriggerTime.
  2. Use the name and namespace from that ImageChangeTriggerStatus element to find the corresponding image change trigger in buildConfig.spec.triggers.
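
For example, you can inspect the trigger status fields directly from the CLI with a JSONPath query; this command uses the bc-ict-example configuration from the preceding example and is shown only as an illustration:

$ oc get bc/bc-ict-example -n bc-ict-example-namespace -o jsonpath='{.status.imageChangeTriggers}'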

Image change triggers

In your build configuration, buildConfig.spec.triggers is an array of build trigger policies, BuildTriggerPolicy.

Each BuildTriggerPolicy has a type field and a set of pointer fields. Each pointer field corresponds to one of the allowed values for the type field. As such, you can set only one pointer field on a given BuildTriggerPolicy.

For image change triggers, the value of type is ImageChange. Then, the imageChange field is the pointer to an ImageChangeTrigger object, which has the following fields:

  • lastTriggeredImageID: This field, which is not shown in the example, is deprecated in OpenShift Container Platform 4.8 and will be ignored in a future release. It contains the resolved image reference for the ImageStreamTag when the last build was triggered from this BuildConfig.
  • paused: You can use this field, which is not shown in the example, to temporarily disable this particular image change trigger.
  • from: You use this field to reference the ImageStreamTag that drives this image change trigger. Its type is the core Kubernetes type, OwnerReference.

The from field has the following fields of note:

  • kind: For image change triggers, the only supported value is ImageStreamTag.
  • namespace: You use this field to specify the namespace of the ImageStreamTag.
  • name: You use this field to specify the ImageStreamTag.

Image change trigger status

In your build configuration, buildConfig.status.imageChangeTriggers is an array of ImageChangeTriggerStatus elements. Each ImageChangeTriggerStatus element includes the from, lastTriggeredImageID, and lastTriggerTime elements shown in the preceding example.

The ImageChangeTriggerStatus element with the most recent lastTriggerTime triggered the most recent build. Its name and namespace identify the image change trigger in buildConfig.spec.triggers that triggered the build.

Additional resources

2.8.1.4. Configuration change triggers

A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created.

The following is an example trigger definition YAML within the BuildConfig:

  type: "ConfigChange"
Note

Configuration change triggers currently only work when creating a new BuildConfig. In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated.

2.8.1.4.1. Setting triggers manually

Triggers can be added to and removed from build configurations with oc set triggers.

Procedure

  • To set a GitHub webhook trigger on a build configuration, use:

    $ oc set triggers bc <name> --from-github
  • To set an image change trigger, use:

    $ oc set triggers bc <name> --from-image='<image>'
  • To remove a trigger, add --remove:

    $ oc set triggers bc <name> --from-bitbucket --remove
Note

When a webhook trigger already exists, adding it again regenerates the webhook secret.

For more information, consult the help documentation by running:

$ oc set triggers --help

2.8.2. Build hooks

Build hooks allow behavior to be injected into the build process.

The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry.

The current working directory is set to the image’s WORKDIR, which is the default working directory of the container image. For most images, this is where the source code is located.

The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs.

Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0, the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests.

The postCommit hook is not only limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image.

2.8.2.1. Configuring post commit build hooks

There are different ways to configure the post-commit build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose.

Procedure

  • Shell script:

    postCommit:
      script: "bundle exec rake test --verbose"

    The script value is a shell script to be run with /bin/sh -ic. Use this option when a shell script is appropriate to execute the build hook, for example, to run unit tests as shown above. To control the image entry point, or if the image does not have /bin/sh, use command, args, or both.

    Note

    The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release.

  • Command as the image entry point:

    postCommit:
      command: ["/bin/bash", "-c", "bundle exec rake test --verbose"]

    In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference. This is needed if the image does not have /bin/sh, or if you do not want to use a shell. In all other cases, using script might be more convenient.

  • Command with arguments:

    postCommit:
      command: ["bundle", "exec", "rake", "test"]
      args: ["--verbose"]

    This form is equivalent to appending the arguments to command.

Note

Providing both script and command simultaneously creates an invalid build hook.

2.8.2.2. Using the CLI to set post commit build hooks

The oc set build-hook command can be used to set the build hook for a build configuration.

Procedure

  1. To set a command as the post-commit build hook:

    $ oc set build-hook bc/mybc \
        --post-commit \
        --command \
        -- bundle exec rake test --verbose
  2. To set a script as the post-commit build hook:

    $ oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose"
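
To remove a previously configured post-commit hook, you can use the --remove flag with the same command; the mybc name matches the earlier examples:

$ oc set build-hook bc/mybc --post-commit --remove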

2.9. Performing advanced builds

The following sections provide instructions for advanced build operations including setting build resources and maximum duration, assigning builds to nodes, chaining builds, build pruning, and build run policies.

2.9.1. Setting build resources

By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited.

Procedure

You can limit resource use in two ways:

  • Limit resource use by specifying resource limits in the default container limits of a project.
  • Limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources, cpu, and memory parameters is optional:

    apiVersion: "v1"
    kind: "BuildConfig"
    metadata:
      name: "sample-build"
    spec:
      resources:
        limits:
          cpu: "100m" 1
          memory: "256Mi" 2
    1
    cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
    2
    memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).

    However, if a quota has been defined for your project, one of the following two items is required:

    • A resources section set with an explicit requests:

      resources:
        requests: 1
          cpu: "100m"
          memory: "256Mi"
      1
      The requests object contains the list of resources that correspond to the list of resources in the quota.
    • A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process.

      Otherwise, build pod creation will fail, citing a failure to satisfy quota.

2.9.2. Setting maximum duration

When defining a BuildConfig object, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced.

The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform.

Procedure

  • To set maximum duration, specify completionDeadlineSeconds in your BuildConfig. The following example shows the part of a BuildConfig specifying completionDeadlineSeconds field for 30 minutes:

    spec:
      completionDeadlineSeconds: 1800
Note

This setting is not supported with the Pipeline Strategy option.

2.9.3. Assigning builds to specific nodes

Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key-value pairs that are matched to Node labels when scheduling the build pod.

The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key-value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector:{}. Override values will replace values in the build configuration on a key by key basis.

Note

If the specified nodeSelector cannot be matched to a node with those labels, the build stays in the Pending state indefinitely.

Procedure

  • Assign builds to run on specific nodes by assigning labels in the nodeSelector field of the BuildConfig, for example:

    apiVersion: "v1"
    kind: "BuildConfig"
    metadata:
      name: "sample-build"
    spec:
      nodeSelector: 1
        key1: value1
        key2: value2
    1
    Builds associated with this build configuration run only on nodes with the key1=value1 and key2=value2 labels.

2.9.4. Chained builds

For compiled languages such as Go, C, C++, and Java, including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited.

To avoid these problems, two builds can be chained together. One build that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact.

In the following example, a source-to-image (S2I) build is combined with a docker build to compile an artifact that is then placed in a separate runtime image.

Note

Although this example chains an S2I build and a docker build, the first build can use any strategy that produces an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image.

The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the S2I builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: artifact-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: artifact-image:latest
  source:
    git:
      uri: https://github.com/openshift/openshift-jee-sample.git
      ref: "master"
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:10.1
        namespace: openshift

The second build uses image source with a path to the WAR file inside the output image from the first build. An inline dockerfile copies that WAR file into a runtime image.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: image-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: image-build:latest
  source:
    dockerfile: |-
      FROM jee-runtime:latest
      COPY ROOT.war /deployments/ROOT.war
    images:
    - from: 1
        kind: ImageStreamTag
        name: artifact-image:latest
      paths: 2
      - sourcePath: /wildfly/standalone/deployments/ROOT.war
        destinationDir: "."
  strategy:
    dockerStrategy:
      from: 3
        kind: ImageStreamTag
        name: jee-runtime:latest
  triggers:
  - imageChange: {}
    type: ImageChange
1
from specifies that the docker build should include the output of the image from the artifact-image image stream, which was the target of the previous build.
2
paths specifies which paths from the target image to include in the current docker build.
3
The runtime image is used as the source image for the docker build.

The result of this setup is that the output image of the second build does not have to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages.

2.9.5. Pruning builds

By default, builds that have completed their lifecycle are persisted indefinitely. You can limit the number of previous builds that are retained.

Procedure

  1. Limit the number of previous builds that are retained by supplying a positive integer value for successfulBuildsHistoryLimit or failedBuildsHistoryLimit in your BuildConfig, for example:

    apiVersion: "v1"
    kind: "BuildConfig"
    metadata:
      name: "sample-build"
    spec:
      successfulBuildsHistoryLimit: 2 1
      failedBuildsHistoryLimit: 2 2
    1
    successfulBuildsHistoryLimit will retain up to two builds with a status of completed.
    2
    failedBuildsHistoryLimit will retain up to two builds with a status of failed, canceled, or error.
  2. Trigger build pruning by one of the following actions:

    • Updating a build configuration.
    • Waiting for a build to complete its lifecycle.

Builds are sorted by their creation timestamp with the oldest builds being pruned first.

Note

Administrators can manually prune builds by using the oc adm prune builds command.
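
For example, an administrator might prune builds that are older than one hour while keeping up to five completed and one failed build for each build configuration; the flag values shown here are illustrative defaults:

$ oc adm prune builds --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm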

2.9.6. Build run policy

The build run policy describes the order in which the builds created from the build configuration should run. You can change this behavior by setting the value of the runPolicy field in the spec section of the BuildConfig.
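
For example, to run builds one at a time and always run only the most recently created pending build next, you might set runPolicy to SerialLatestOnly; the sample-build name is illustrative:

apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  runPolicy: "SerialLatestOnly"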

It is also possible to change the runPolicy value for existing build configurations. Note the following behavior:

  • Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete, because a serial build can only run alone.
  • Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in the queue, except the currently running build and the most recently created build. The newest build runs next.

2.10. Using Red Hat subscriptions in builds

Use the following sections to run entitled builds on OpenShift Container Platform.

2.10.1. Creating an image stream tag for the Red Hat Universal Base Image

To use Red Hat subscriptions within a build, you create an image stream tag to reference the Universal Base Image (UBI).

To make the UBI available in every project in the cluster, you add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project, you add the image stream tag to that project.

The benefit of using image stream tags this way is that doing so grants access to the UBI based on the registry.redhat.io credentials in the install pull secret without exposing the pull secret to other users. This is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project.

Procedure

  • To create an ImageStreamTag in the openshift namespace, so it is available to developers in all projects, enter:

    $ oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest -n openshift
    Tip

    You can alternatively apply the following YAML to create an ImageStreamTag in the openshift namespace:

    apiVersion: image.openshift.io/v1
    kind: ImageStream
    metadata:
      name: ubi
      namespace: openshift
    spec:
      tags:
      - from:
          kind: DockerImage
          name: registry.redhat.io/ubi8/ubi:latest
        name: latest
        referencePolicy:
          type: Source
  • To create an ImageStreamTag in a single project, enter:

    $ oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest
    Tip

    You can alternatively apply the following YAML to create an ImageStreamTag in a single project:

    apiVersion: image.openshift.io/v1
    kind: ImageStream
    metadata:
      name: ubi
    spec:
      tags:
      - from:
          kind: DockerImage
          name: registry.redhat.io/ubi8/ubi:latest
        name: latest
        referencePolicy:
          type: Source

2.10.2. Adding subscription entitlements as a build secret

Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret.

Prerequisites

You must have access to Red Hat entitlements through your subscription. The entitlement secret is automatically created by the Insights Operator.

Tip

When you perform an Entitlement Build using Red Hat Enterprise Linux (RHEL) 7, you must have the following instructions in your Dockerfile before you run any yum commands:

RUN rm /etc/rhsm-host

Procedure

  1. Add the etc-pki-entitlement secret as a build volume in the build configuration’s Docker strategy:

    strategy:
      dockerStrategy:
        from:
          kind: ImageStreamTag
          name: ubi:latest
        volumes:
        - name: etc-pki-entitlement
          mounts:
          - destinationPath: /etc/pki/entitlement
          source:
            type: Secret
            secret:
              secretName: etc-pki-entitlement

2.10.3. Running builds with Subscription Manager

2.10.3.1. Docker builds using Subscription Manager

Docker strategy builds can use the Subscription Manager to install subscription content.

Prerequisites

The entitlement keys must be added as build strategy volumes.

Procedure

Use the following as an example Dockerfile to install content with the Subscription Manager:

FROM registry.redhat.io/ubi8/ubi:latest
RUN dnf search kernel-devel --showduplicates && \
        dnf install -y kernel-devel

2.10.4. Running builds with Red Hat Satellite subscriptions

2.10.4.1. Adding Red Hat Satellite configurations to builds

Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories.

Prerequisites

  • You must provide or create a yum-compatible repository configuration file that downloads content from your Satellite instance.

    Sample repository configuration

    [test-<name>]
    name=test-<number>
    baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os
    enabled=1
    gpgcheck=0
    sslverify=0
    sslclientkey = /etc/pki/entitlement/...-key.pem
    sslclientcert = /etc/pki/entitlement/....pem

Procedure

  1. Create a ConfigMap containing the Satellite repository configuration file:

    $ oc create configmap yum-repos-d --from-file /path/to/satellite.repo
  2. Add the Satellite repository configuration and entitlement key as build volumes:

    strategy:
      dockerStrategy:
        from:
          kind: ImageStreamTag
          name: ubi:latest
        volumes:
        - name: yum-repos-d
          mounts:
          - destinationPath: /etc/yum.repos.d
          source:
            type: ConfigMap
            configMap:
              name: yum-repos-d
        - name: etc-pki-entitlement
          mounts:
          - destinationPath: /etc/pki/entitlement
          source:
            type: Secret
            secret:
              secretName: etc-pki-entitlement
2.10.4.2. Docker builds using Red Hat Satellite subscriptions

Docker strategy builds can use Red Hat Satellite repositories to install subscription content.

Prerequisites

  • You have added the entitlement keys and Satellite repository configurations as build volumes.

Procedure

Use the following as an example Dockerfile to install content with Satellite:

FROM registry.redhat.io/ubi8/ubi:latest
RUN dnf search kernel-devel --showduplicates && \
        dnf install -y kernel-devel

2.10.5. Running entitled builds using SharedSecret objects

You can configure and perform a build in one namespace that securely uses RHEL entitlements from a Secret object in another namespace.

You can still access RHEL entitlements from OpenShift Builds by creating a Secret object with your subscription credentials in the same namespace as your Build object. However, in OpenShift Container Platform 4.10 and later, you can also access your credentials and certificates from a Secret object in one of the OpenShift Container Platform system namespaces. You run entitled builds with a CSI volume mount of a SharedSecret custom resource (CR) instance that references the Secret object.

This procedure relies on the newly introduced Shared Resources CSI Driver feature, which you can use to declare CSI Volume mounts in OpenShift Container Platform Builds. It also relies on the OpenShift Container Platform Insights Operator.

Important

The Shared Resources CSI Driver and the Build CSI Volumes are both Technology Preview features, which are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Shared Resources CSI Driver and the Build CSI Volumes features also belong to the TechPreviewNoUpgrade feature set, which is a subset of the current Technology Preview features. You can enable the TechPreviewNoUpgrade feature set on test clusters, where you can fully test them while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. See "Enabling Technology Preview features using feature gates" in the following "Additional resources" section.

Prerequisites

  • You have enabled the TechPreviewNoUpgrade feature set by using the feature gates.
  • You have a SharedSecret custom resource (CR) instance that references the Secret object where the Insights Operator stores the subscription credentials.
  • You must have permission to perform the following actions:

    • Create build configs and start builds.
    • Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back.
    • Determine if the builder service account available to you in your namespace is allowed to use the given SharedSecret CR instance. In other words, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed.
Note

If neither of the last two prerequisites in this list is met, establish, or ask someone to establish, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances.

Procedure

  1. Grant the builder service account RBAC permissions to use the SharedSecret CR instance by using oc apply with YAML content:

    Note

    Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role …​ to create the role needed for consuming SharedSecret CR instances.

    Example oc apply -f command with YAML Role object definition

    $ oc apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: shared-resource-my-share
      namespace: my-namespace
    rules:
      - apiGroups:
          - sharedresource.openshift.io
        resources:
          - sharedsecrets
        resourceNames:
          - my-share
        verbs:
          - use
    EOF

  2. Create the RoleBinding associated with the role by using the oc command:

    Example oc create rolebinding command

    $ oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder

  3. Create a BuildConfig object that accesses the RHEL entitlements.

    Example YAML BuildConfig object definition

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: my-csi-bc
      namespace: my-csi-app-namespace
    spec:
      runPolicy: Serial
      source:
        dockerfile: |
          FROM registry.redhat.io/ubi8/ubi:latest
          RUN ls -la /etc/pki/entitlement
          RUN rm /etc/rhsm-host
          RUN yum repolist --disablerepo=*
          RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms
          RUN yum -y update
          RUN yum install -y openshift-clients.x86_64
      strategy:
        type: Docker
        dockerStrategy:
          volumes:
            - mounts:
                - destinationPath: "/etc/pki/entitlement"
              name: my-csi-shared-secret
              source:
                csi:
                  driver: csi.sharedresource.openshift.io
                  readOnly: true
                  volumeAttributes:
                    sharedSecret: my-share-bc
                type: CSI

  4. Start a build from the BuildConfig object and follow the logs with the oc command.

    Example oc start-build command

    $ oc start-build my-csi-bc -F

    Example 2.1. Example output from the oc start-build command

    Note

    Some sections of the following output have been replaced with an ellipsis (…).

    build.build.openshift.io/my-csi-bc-1 started
    Caching blobs under "/var/cache/blobs".
    
    Pulling image registry.redhat.io/ubi8/ubi:latest ...
    Trying to pull registry.redhat.io/ubi8/ubi:latest...
    Getting image source signatures
    Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e
    Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3
    Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a
    Writing manifest to image destination
    Storing signatures
    Adding transient rw bind mount for /run/secrets/rhsm
    STEP 1/9: FROM registry.redhat.io/ubi8/ubi:latest
    STEP 2/9: RUN ls -la /etc/pki/entitlement
    total 360
    drwxrwxrwt. 2 root root 	80 Feb  3 20:28 .
    drwxr-xr-x. 10 root root	154 Jan 27 15:53 ..
    -rw-r--r--. 1 root root   3243 Feb  3 20:28 entitlement-key.pem
    -rw-r--r--. 1 root root 362540 Feb  3 20:28 entitlement.pem
    time="2022-02-03T20:28:32Z" level=warning msg="Adding metacopy option, configured globally"
    --> 1ef7c6d8c1a
    STEP 3/9: RUN rm /etc/rhsm-host
    time="2022-02-03T20:28:33Z" level=warning msg="Adding metacopy option, configured globally"
    --> b1c61f88b39
    STEP 4/9: RUN yum repolist --disablerepo=*
    Updating Subscription Management repositories.
    
    
    ...
    
    --> b067f1d63eb
    STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms
    Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system.
    time="2022-02-03T20:28:40Z" level=warning msg="Adding metacopy option, configured globally"
    --> 03927607ebd
    STEP 6/9: RUN yum -y update
    Updating Subscription Management repositories.
    
    ...
    
    Upgraded:
      systemd-239-51.el8_5.3.x86_64      	systemd-libs-239-51.el8_5.3.x86_64
      systemd-pam-239-51.el8_5.3.x86_64
    Installed:
      diffutils-3.6-6.el8.x86_64           	libxkbcommon-0.9.1-1.el8.x86_64
      xkeyboard-config-2.28-1.el8.noarch
    
    Complete!
    time="2022-02-03T20:29:05Z" level=warning msg="Adding metacopy option, configured globally"
    --> db57e92ff63
    STEP 7/9: RUN yum install -y openshift-clients.x86_64
    Updating Subscription Management repositories.
    
    ...
    
    Installed:
      bash-completion-1:2.7-5.el8.noarch
      libpkgconf-1.4.2-1.el8.x86_64
      openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64
      pkgconf-1.4.2-1.el8.x86_64
      pkgconf-m4-1.4.2-1.el8.noarch
      pkgconf-pkg-config-1.4.2-1.el8.x86_64
    
    Complete!
    time="2022-02-03T20:29:19Z" level=warning msg="Adding metacopy option, configured globally"
    --> 609507b059e
    STEP 8/9: ENV "OPENSHIFT_BUILD_NAME"="my-csi-bc-1" "OPENSHIFT_BUILD_NAMESPACE"="my-csi-app-namespace"
    --> cab2da3efc4
    STEP 9/9: LABEL "io.openshift.build.name"="my-csi-bc-1" "io.openshift.build.namespace"="my-csi-app-namespace"
    COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca
    --> 821b582320b
    Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca
    821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91
    Build complete, no image push requested

2.10.6. Additional resources

2.11. Securing builds by strategy

Builds in OpenShift Container Platform are run in privileged containers. Depending on the build strategy used, builds can escalate their permissions on the cluster and host nodes. As a security measure, limit who can run builds and which build strategies those builds can use. Custom builds are inherently less safe than source builds, because they can execute any code within a privileged container, and they are disabled by default. Grant docker build permissions with caution, because a vulnerability in the Dockerfile processing logic could result in privileges being granted on the host node.

By default, all users that can create builds are granted permission to use the docker and Source-to-image (S2I) build strategies. Users with cluster administrator privileges can enable the custom build strategy, as referenced in the restricting build strategies to a user globally section.

You can control who can build and which build strategies they can use by using an authorization policy. Each build strategy has a corresponding build subresource. A user must have permission to create a build and permission to create on the build strategy subresource to create builds using that strategy. Default roles are provided that grant the create permission on the build strategy subresource.

Table 2.3. Build Strategy Subresources and Roles
Strategy            Subresource               Role

Docker              builds/docker             system:build-strategy-docker

Source-to-Image     builds/source             system:build-strategy-source

Custom              builds/custom             system:build-strategy-custom

JenkinsPipeline     builds/jenkinspipeline    system:build-strategy-jenkinspipeline

2.11.1. Disabling access to a build strategy globally

To prevent access to a particular build strategy globally, log in as a user with cluster administrator privileges, remove the corresponding role from the system:authenticated group, and apply the annotation rbac.authorization.kubernetes.io/autoupdate: "false" to prevent the role binding from being automatically restored when the API server restarts. The following example shows disabling the docker build strategy.

Procedure

  1. Apply the rbac.authorization.kubernetes.io/autoupdate annotation:

    $ oc edit clusterrolebinding system:build-strategy-docker-binding

    Example output

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "false" 1
      creationTimestamp: 2018-08-10T01:24:14Z
      name: system:build-strategy-docker-binding
      resourceVersion: "225"
      selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Abuild-strategy-docker-binding
      uid: 17b1f3d4-9c3c-11e8-be62-0800277d20bf
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:build-strategy-docker
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated

    1
    Change the rbac.authorization.kubernetes.io/autoupdate annotation’s value to "false".
  2. Remove the role:

    $ oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated
  3. Ensure the build strategy subresources are also removed from these roles:

    $ oc edit clusterrole admin
    $ oc edit clusterrole edit
  4. For each role, specify the subresources that correspond to the resource of the strategy to disable.

    1. Disable the docker Build Strategy for admin:

      kind: ClusterRole
      metadata:
        name: admin
      ...
      - apiGroups:
        - ""
        - build.openshift.io
        resources:
        - buildconfigs
        - buildconfigs/webhooks
        - builds/custom 1
        - builds/source
        verbs:
        - create
        - delete
        - deletecollection
        - get
        - list
        - patch
        - update
        - watch
      ...
      1
      Add builds/custom and builds/source to disable docker builds globally for users with the admin role.

2.11.2. Restricting build strategies to users globally

You can allow a set of specific users to create builds with a particular strategy.

Prerequisites

  • Disable global access to the build strategy.

Procedure

  • Assign the role that corresponds to the build strategy to a specific user. For example, to add the system:build-strategy-docker cluster role to the user devuser:

    $ oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser
    Warning

    Granting a user access at the cluster level to the builds/docker subresource means that the user can create builds with the docker strategy in any project in which they can create builds.

2.11.3. Restricting build strategies to a user within a project

Similar to granting the build strategy role to a user globally, you can allow a set of specific users within a project to create builds with a particular strategy.

Prerequisites

  • Disable global access to the build strategy.

Procedure

  • Assign the role that corresponds to the build strategy to a specific user within a project. For example, to add the system:build-strategy-docker role within the project devproject to the user devuser:

    $ oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject
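
To verify the result, you can check whether a specific user is allowed to create builds with a given strategy. The following is a verification sketch that uses the standard oc auth can-i command with the example user and project from this procedure:

$ oc auth can-i create builds.build.openshift.io --subresource=docker --as=devuser -n devproject

The command prints yes or no, depending on whether devuser can use the docker build strategy in devproject.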

2.12. Build configuration resources

Use the following procedure to configure build settings.

2.12.1. Build controller configuration parameters

The build.config.openshift.io/cluster resource offers the following configuration parameters.

Build

    Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster.

    spec: Holds user-settable values for the build controller configuration.

buildDefaults

    Controls the default information for builds.

    defaultProxy: Contains the default proxy settings for all build operations, including image pull or push and source download.

    You can override values by setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the BuildConfig strategy.

    gitProxy: Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone.

    Values that are not set here are inherited from DefaultProxy.

    env: A set of default environment variables that are applied to the build if the specified variables do not exist on the build.

    imageLabels: A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig.

    resources: Defines resource requirements to execute the build.

ImageLabel

    name: Defines the name of the label. It must have non-zero length.

buildOverrides

    Controls override settings for builds.

    imageLabels: A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten.

    nodeSelector: A selector which must be true for the build pod to fit on a node.

    tolerations: A list of tolerations that overrides any existing tolerations set on a build pod.

BuildList

    items: Standard object’s metadata.

2.12.2. Configuring build settings

You can configure build settings by editing the build.config.openshift.io/cluster resource.

Procedure

  • Edit the build.config.openshift.io/cluster resource:

    $ oc edit build.config.openshift.io/cluster

    The following is an example build.config.openshift.io/cluster resource:

    apiVersion: config.openshift.io/v1
    kind: Build 1
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2019-05-17T13:44:26Z"
      generation: 2
      name: cluster
      resourceVersion: "107233"
      selfLink: /apis/config.openshift.io/v1/builds/cluster
      uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc
    spec:
      buildDefaults: 2
        defaultProxy: 3
          httpProxy: http://proxy.com
          httpsProxy: https://proxy.com
          noProxy: internal.com
        env: 4
        - name: envkey
          value: envvalue
        gitProxy: 5
          httpProxy: http://gitproxy.com
          httpsProxy: https://gitproxy.com
          noProxy: internalgit.com
        imageLabels: 6
        - name: labelkey
          value: labelvalue
        resources: 7
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
      buildOverrides: 8
        imageLabels: 9
        - name: labelkey
          value: labelvalue
        nodeSelector: 10
          selectorkey: selectorvalue
        tolerations: 11
        - effect: NoSchedule
          key: node-role.kubernetes.io/builds
          operator: Exists
    1
    Build: Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster.
    2
    buildDefaults: Controls the default information for builds.
    3
    defaultProxy: Contains the default proxy settings for all build operations, including image pull or push and source download.
    4
    env: A set of default environment variables that are applied to the build if the specified variables do not exist on the build.
    5
    gitProxy: Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone.
    6
    imageLabels: A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig.
    7
    resources: Defines resource requirements to execute the build.
    8
    buildOverrides: Controls override settings for builds.
    9
    imageLabels: A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten.
    10
    nodeSelector: A selector which must be true for the build pod to fit on a node.
    11
    tolerations: A list of tolerations that overrides any existing tolerations set on a build pod.
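
If you prefer not to edit the resource interactively, you can apply an equivalent change with a patch. The following is a minimal sketch that sets a single default build environment variable; the variable name and value are placeholders:

$ oc patch build.config.openshift.io/cluster --patch '{"spec":{"buildDefaults":{"env":[{"name":"EXAMPLE_VAR","value":"example-value"}]}}}' --type=merge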

2.13. Troubleshooting builds

Use the following information to troubleshoot build issues.

2.13.1. Resolving denial for access to resources

If your request for access to resources is denied:

Issue
A build fails with:
requested access to the resource is denied
Resolution
You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use:
$ oc describe quota

2.13.2. Service certificate generation failure

If service certificate generation fails:

Issue
Service certificate generation fails, and the service’s service.beta.openshift.io/serving-cert-generation-error annotation contains an error similar to the following:

Example output

secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60

Resolution
The service that generated the certificate no longer exists, or has a different serviceUID. You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num:
$ oc delete secret <secret_name>
$ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-
$ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-
Note

The command that removes an annotation has a - after the annotation name to be removed.

2.14. Setting up additional trusted certificate authorities for builds

Use the following sections to set up additional certificate authorities (CA) to be trusted by builds when pulling images from an image registry.

The procedure requires a cluster administrator to create a ConfigMap and add additional CAs as keys in the ConfigMap.

  • The ConfigMap must be created in the openshift-config namespace.
  • domain is the key in the ConfigMap and value is the PEM-encoded certificate.

    • Each CA must be associated with a domain. The domain format is hostname[..port].
  • The ConfigMap name must be set in the image.config.openshift.io/cluster cluster scoped configuration resource’s spec.additionalTrustedCA field.
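
The following is a minimal sketch of such a ConfigMap; the registry hostname is a placeholder and the certificate body is truncated:

apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-cas
  namespace: openshift-config
data:
  registry.example.com..5000: | # key in the hostname[..port] format
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----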

2.14.1. Adding certificate authorities to the cluster

You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure.

Prerequisites

  • You must have cluster administrator privileges.
  • You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory.

Procedure

  1. Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format:

    $ oc create configmap registry-cas -n openshift-config \
    --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
    --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
  2. Update the cluster image configuration:

    $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
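
To confirm that the cluster image configuration references the ConfigMap, you can inspect the resource. This is a verification sketch that uses standard oc output options:

$ oc get image.config.openshift.io/cluster -o jsonpath='{.spec.additionalTrustedCA.name}'

The command prints the ConfigMap name, registry-cas in this example.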

2.14.2. Additional resources

Chapter 3. Migrating from Jenkins to Tekton

3.1. Migrating from Jenkins to Tekton

Jenkins and Tekton are extensively used to automate the process of building, testing, and deploying applications and projects. However, Tekton is a cloud-native CI/CD solution that works seamlessly with Kubernetes and OpenShift Container Platform. This document helps you migrate your Jenkins CI/CD workflows to Tekton.

3.1.1. Comparison of Jenkins and Tekton concepts

This section summarizes the basic terms used in Jenkins and Tekton, and compares the equivalent terms.

3.1.1.1. Jenkins terminology

Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows:

  • Pipeline: Automates the entire process of building, testing, and deploying applications, using the Groovy syntax.
  • Node: A machine capable of either orchestrating or executing a scripted pipeline.
  • Stage: A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display status or progress of tasks.
  • Step: A single task that specifies the exact action to be taken, either by using a command or a script.
3.1.1.2. Tekton terminology

Tekton uses the YAML syntax for declarative pipelines and consists of tasks. Some basic terms in Tekton are as follows:

  • Pipeline: A set of tasks in a series, in parallel, or both.
  • Task: A sequence of steps as commands, binaries, or scripts.
  • PipelineRun: Execution of a pipeline with one or more tasks.
  • TaskRun: Execution of a task with one or more steps.

    Note

    You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts.

  • Workspace: In Tekton, workspaces are conceptual blocks that serve the following purposes:

    • Storage of inputs, outputs, and build artifacts.
    • Common space to share data among tasks.
    • Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization.
    Note

    In Jenkins, there is no direct equivalent of Tekton workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. In situations where a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the build history is maintained by the control node.

3.1.1.3. Mapping of concepts

The building blocks of Jenkins and Tekton are not equivalent, and a comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and Tekton correlate in general:

Table 3.1. Jenkins and Tekton - basic comparison
Jenkins   Tekton
Pipeline  Pipeline and PipelineRun
Stage     Task
Step      A step in a task

3.1.2. Migrating a sample pipeline from Jenkins to Tekton

This section provides equivalent examples of pipelines in Jenkins and Tekton and helps you to migrate your build, test, and deploy pipelines from Jenkins to Tekton.

3.1.2.1. Jenkins pipeline

Consider a Jenkins pipeline written in Groovy for building, testing, and deploying:

pipeline {
   agent any
   stages {
       stage('Build') {
           steps {
               sh 'make'
           }
       }
       stage('Test'){
           steps {
               sh 'make check'
               junit 'reports/**/*.xml'
           }
       }
       stage('Deploy') {
           steps {
               sh 'make publish'
           }
       }
   }
}
3.1.2.2. Tekton pipeline

In Tekton, the equivalent example of the Jenkins pipeline consists of three tasks, each of which can be written declaratively using the YAML syntax:

Example build task

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: myproject-build
spec:
  workspaces:
  - name: source
  steps:
  - image: my-ci-image
    command: ["make"]
    workingDir: $(workspaces.source.path)

Example test task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: myproject-test
spec:
  workspaces:
  - name: source
  steps:
  - image: my-ci-image
    command: ["make check"]
    workingDir: $(workspaces.source.path)
  - image: junit-report-image
    script: |
      #!/usr/bin/env bash
      junit-report reports/**/*.xml
    workingDir: $(workspaces.source.path)

Example deploy task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: myproject-deploy
spec:
  workspaces:
  - name: source
  steps:
  - image: my-deploy-image
    command: ["make deploy"]
    workingDir: $(workspaces.source.path)

You can combine the three tasks sequentially to form a Tekton pipeline:

Example: Tekton pipeline for building, testing, and deployment

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: myproject-pipeline
spec:
  workspaces:
  - name: shared-dir
  tasks:
  - name: build
    taskRef:
      name: myproject-build
    workspaces:
    - name: source
      workspace: shared-dir
  - name: test
    taskRef:
      name: myproject-test
    workspaces:
    - name: source
      workspace: shared-dir
  - name: deploy
    taskRef:
      name: myproject-deploy
    workspaces:
    - name: source
      workspace: shared-dir
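
To execute this pipeline, create a PipelineRun that references it and binds the shared-dir workspace to storage. The following is a minimal sketch; the persistent volume claim name is a placeholder:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: myproject-pipeline-run
spec:
  pipelineRef:
    name: myproject-pipeline
  workspaces:
  - name: shared-dir
    persistentVolumeClaim:
      claimName: myproject-source-pvc # placeholder PVC that provides the shared workspace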

3.1.3. Migrating from Jenkins plugins to Tekton Hub tasks

You can extend the capability of Jenkins by using plugins. To achieve similar extensibility in Tekton, use any of the available tasks from Tekton Hub.

As an example, consider the git-clone task available in the Tekton Hub, which corresponds to the git plugin for Jenkins.

Example: git-clone task from Tekton Hub

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
 name: demo-pipeline
spec:
 params:
   - name: repo_url
   - name: revision
 workspaces:
   - name: source
 tasks:
   - name: fetch-from-git
     taskRef:
       name: git-clone
     params:
       - name: url
         value: $(params.repo_url)
       - name: revision
         value: $(params.revision)
     workspaces:
     - name: output
       workspace: source

3.1.4. Extending Tekton capabilities using custom tasks and scripts

In Tekton, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend Tekton’s capabilities.

Example: Custom task for running the maven test command

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-test
spec:
  workspaces:
  - name: source
  steps:
  - image: my-maven-image
    command: ["mvn test"]
    workingDir: $(workspaces.source.path)

Example: Execute a custom shell script by providing its path

...
steps:
- image: ubuntu
  script: |
      #!/usr/bin/env bash
      /workspace/my-script.sh
...

Example: Execute a custom Python script by writing it in the YAML file

...
steps:
- image: python
  script: |
      #!/usr/bin/env python3
      print("hello from python!")
...
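
For reference, the following is a minimal complete task that embeds a script step like the snippets above; the task name and script name are illustrative:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-custom-script
spec:
  workspaces:
  - name: source
  steps:
  - name: run-script
    image: ubuntu
    workingDir: $(workspaces.source.path)
    script: |
      #!/usr/bin/env bash
      # Run a script from the root of the bound source workspace (script name is illustrative)
      ./my-script.sh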

3.1.5. Comparison of Jenkins and Tekton execution models

Jenkins and Tekton offer similar functions but are different in architecture and execution. This section outlines a brief comparison of the two execution models.

Table 3.2. Comparison of execution models in Jenkins and Tekton
Jenkins: Jenkins has a control node. Jenkins executes pipelines and steps centrally, or orchestrates jobs running in other nodes.
Tekton: Tekton is serverless and distributed, and there is no central dependency for execution.

Jenkins: The containers are launched by the control node through the pipeline.
Tekton: Tekton adopts a 'container-first' approach, where every step is executed as a container running in a pod (equivalent to nodes in Jenkins).

Jenkins: Extensibility is achieved using plugins.
Tekton: Extensibility is achieved using tasks in Tekton Hub, or by creating custom tasks and scripts.

3.1.6. Examples of common use cases

Both Jenkins and Tekton offer capabilities for common CI/CD use cases, such as:

  • Compiling, building, and deploying images using maven
  • Extending the core capabilities by using plugins
  • Reusing shareable libraries and custom scripts
3.1.6.1. Running a maven pipeline in Jenkins and Tekton

You can use maven in both Jenkins and Tekton workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to Tekton, consider the following examples:

Example: Compile and build an image and deploy it to OpenShift using maven in Jenkins

#!/usr/bin/groovy
node('maven') {
    stage 'Checkout'
    checkout scm

    stage 'Build'
    sh 'cd helloworld && mvn clean'
    sh 'cd helloworld && mvn compile'

    stage 'Run Unit Tests'
    sh 'cd helloworld && mvn test'

    stage 'Package'
    sh 'cd helloworld && mvn package'

    stage 'Archive artifact'
    sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments'
    archive 'helloworld/target/*.war'

    stage 'Create Image'
    sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true'
    sh 'oc new-project helloworldproject'
    sh 'oc project helloworldproject'
    sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -'
    sh 'oc start-build eap-helloworld-app --from-dir=artifacts/'

    stage 'Deploy'
    sh 'oc new-app helloworld/jboss-eap70-deploy.json' }

Example: Compile and build an image and deploy it to OpenShift using maven in Tekton.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: maven-pipeline
spec:
  workspaces:
    - name: shared-workspace
    - name: maven-settings
    - name: kubeconfig-dir
      optional: true
  params:
    - name: repo-url
    - name: revision
    - name: context-path
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: "$(params.repo-url)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
        - name: revision
          value: $(params.revision)
    - name: mvn-build
      taskRef:
        name: maven
      runAfter:
        - fetch-repo
      workspaces:
        - name: source
          workspace: shared-workspace
        - name: maven-settings
          workspace: maven-settings
      params:
        - name: CONTEXT_DIR
          value: "$(params.context-path)"
        - name: GOALS
          value: ["-DskipTests", "clean", "compile"]
    - name: mvn-tests
      taskRef:
        name: maven
      runAfter:
        - mvn-build
      workspaces:
        - name: source
          workspace: shared-workspace
        - name: maven-settings
          workspace: maven-settings
      params:
        - name: CONTEXT_DIR
          value: "$(params.context-path)"
        - name: GOALS
          value: ["test"]
    - name: mvn-package
      taskRef:
        name: maven
      runAfter:
        - mvn-tests
      workspaces:
        - name: source
          workspace: shared-workspace
        - name: maven-settings
          workspace: maven-settings
      params:
        - name: CONTEXT_DIR
          value: "$(params.context-path)"
        - name: GOALS
          value: ["package"]
    - name: create-image-and-deploy
      taskRef:
        name: openshift-client
      runAfter:
        - mvn-package
      workspaces:
        - name: manifest-dir
          workspace: shared-workspace
        - name: kubeconfig-dir
          workspace: kubeconfig-dir
      params:
        - name: SCRIPT
          value: |
            cd "$(params.context-path)"
            mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments
            oc new-project helloworldproject
            oc project helloworldproject
            oc process -f jboss-eap70-binary-build.json | oc create -f -
            oc start-build eap-helloworld-app --from-dir=artifacts/
            oc new-app jboss-eap70-deploy.json
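
To run this pipeline, create a PipelineRun that supplies the parameters and binds the workspaces. The following is a minimal sketch; the repository URL and the persistent volume claim name are placeholders:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: maven-pipeline-run-
spec:
  pipelineRef:
    name: maven-pipeline
  params:
  - name: repo-url
    value: https://github.com/example/helloworld.git # placeholder repository URL
  - name: revision
    value: main
  - name: context-path
    value: helloworld
  workspaces:
  - name: shared-workspace
    persistentVolumeClaim:
      claimName: maven-shared-pvc # placeholder PVC
  - name: maven-settings
    emptyDir: {}

Because this resource uses generateName, create it with oc create -f rather than oc apply -f.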

3.1.6.2. Extending the core capabilities of Jenkins and Tekton by using plugins

Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index.

Tekton also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable Tekton tasks is available in the Tekton Hub.

In addition, Tekton incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and Tekton. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, Tekton uses OpenShift’s built-in Role-based Access Control system.

3.1.6.3. Sharing reusable code in Jenkins and Tekton

Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition.

Although there is no direct equivalent of Jenkins shared libraries in Tekton, you can achieve similar workflows by using tasks from the Tekton Hub, in combination with custom tasks and scripts.

3.1.7. Additional resources

Chapter 4. Pipelines

4.1. Red Hat OpenShift Pipelines release notes

Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:

  • Standard Kubernetes-native pipeline definitions (CRDs).
  • Serverless pipelines with no CI server management overhead.
  • Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
  • Portability across any Kubernetes distribution.
  • Powerful CLI for interacting with pipelines.
  • Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.

For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.

4.1.1. Compatibility and support matrix

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.

In the table, features are marked with the following statuses:

TP

Technology Preview

GA

General Availability

Table 4.1. Compatibility and support matrix
The Operator column lists the Red Hat OpenShift Pipelines version; the Pipelines, Triggers, CLI, Catalog, Chains, Hub, and Pipelines as Code columns list the corresponding component versions.

Operator  Pipelines    Triggers     CLI     Catalog  Chains       Hub          Pipelines as Code  OpenShift Version       Support Status
1.10      0.44.x       0.23.x       0.30.x  NA       0.15.x (TP)  1.12.x (TP)  0.17.x (GA)        4.10, 4.11, 4.12, 4.13  GA
1.9       0.41.x       0.22.x       0.28.x  NA       0.13.x (TP)  1.11.x (TP)  0.15.x (GA)        4.10, 4.11, 4.12, 4.13  GA
1.8       0.37.x       0.20.x       0.24.x  NA       0.9.0 (TP)   1.8.x (TP)   0.10.x (TP)        4.10, 4.11, 4.12        GA
1.7       0.33.x       0.19.x       0.23.x  0.33     0.8.0 (TP)   1.7.0 (TP)   0.5.x (TP)         4.9, 4.10, 4.11         GA
1.6       0.28.x       0.16.x       0.21.x  0.28     N/A          N/A          N/A                4.9                     GA
1.5       0.24.x       0.14.x (TP)  0.19.x  0.24     N/A          N/A          N/A                4.8                     GA
1.4       0.22.x       0.12.x (TP)  0.17.x  0.22     N/A          N/A          N/A                4.7                     GA

Additionally, support for running Red Hat OpenShift Pipelines on ARM hardware is in Technology Preview.

For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.

4.1.2. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

4.1.3. Release notes for Red Hat OpenShift Pipelines General Availability 1.10

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.3.1. New features

In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.10.

4.1.3.1.1. Pipelines
  • With this update, you can specify environment variables in a PipelineRun or TaskRun pod template to override or append the variables that are configured in a task or step. Also, you can specify environment variables in a default pod template to use those variables globally for all PipelineRuns and TaskRuns. This update also adds a new default configuration named forbidden-envs to filter environment variables while propagating from pod templates.
  • With this update, custom tasks in pipelines are enabled by default.

    Note

    To disable this update, set the enable-custom-tasks flag to false in the feature-flags config custom resource.

  • This update supports the v1beta1.CustomRun API version for custom tasks.
  • This update adds support for the PipelineRun reconciler to create a custom run. For example, custom TaskRuns created from PipelineRuns can now use the v1beta1.CustomRun API version instead of v1alpha1.Run, if the custom-task-version feature flag is set to v1beta1, instead of the default value v1alpha1.

    Note

    You need to update the custom task controller to listen for the *v1beta1.CustomRun API version instead of *v1alpha1.Run in order to respond to v1beta1.CustomRun requests.

  • This update adds a new retries field to the v1beta1.TaskRun and v1.TaskRun specifications.
4.1.3.1.2. Triggers
  • With this update, triggers support the creation of Pipelines, Tasks, PipelineRuns, and TaskRuns objects of the v1 API version along with CustomRun objects of the v1beta1 API version.
  • With this update, GitHub Interceptor blocks a pull request trigger from being executed unless invoked by an owner or with a configurable comment by an owner.

    Note

    To enable or disable this update, set the value of the githubOwners parameter to true or false in the GitHub Interceptor configuration file.

  • With this update, GitHub Interceptor has the ability to add a comma delimited list of all files that have changed for the push and pull request events. The list of changed files is added to the changed_files property of the event payload in the top-level extensions field.
  • This update changes the MinVersion of TLS to tls.VersionTLS12 so that triggers run on OpenShift Container Platform when the Federal Information Processing Standards (FIPS) mode is enabled.
4.1.3.1.3. CLI
  • This update adds support to pass a Container Storage Interface (CSI) file as a workspace at the time of starting a Task, ClusterTask or Pipeline.
  • This update adds v1 API support to all CLI commands associated with task, pipeline, pipeline run, and task run resources. Tekton CLI works with both v1beta1 and v1 APIs for these resources.
  • This update adds support for an object type parameter in the start and describe commands.
4.1.3.1.4. Operator
  • This update adds a default-forbidden-env parameter in optional pipeline properties. The parameter includes forbidden environment variables that should not be propagated if provided through pod templates.
  • This update adds support for custom logos in the Tekton Hub UI. To add a custom logo, set the value of the customLogo parameter in the Tekton Hub CR to the base64-encoded URI of the logo.
  • This update increments the version number of the git-clone task to 0.9.
4.1.3.1.5. Tekton Chains
Important

Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • This update adds annotations and labels to the PipelineRun and TaskRun attestations.
  • This update adds a new format named slsa/v1, which generates the same provenance as the one generated when requesting in the in-toto format.
  • With this update, Sigstore features are moved out from the experimental features.
  • With this update, the predicate.materials function includes image URI and digest information from all steps and sidecars for a TaskRun object.
4.1.3.1.6. Tekton Hub
Important

Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • This update supports installing, upgrading, or downgrading Tekton resources of the v1 API version on the cluster.
  • This update supports adding a custom logo in place of the Tekton Hub logo in UI.
  • This update extends the tkn hub install command functionality by adding a --type artifact flag, which fetches resources from the Artifact Hub and installs them on your cluster.
  • This update adds support-tier, catalog, and org information as labels to the resources being installed from Artifact Hub to your cluster.
4.1.3.1.7. Pipelines as Code
  • This update enhances incoming webhook support. For a GitHub application installed on the OpenShift Container Platform cluster, you do not need to provide the git_provider specification for an incoming webhook. Instead, Pipelines as Code detects the secret and uses it for the incoming webhook.
  • With this update, you can use the same token to fetch remote tasks from the same host on GitHub with a non-default branch.
  • With this update, Pipelines as Code supports Tekton v1 templates. You can have v1 and v1beta1 templates, which Pipelines as Code reads for PR generation. The PR is created as v1 on cluster.
  • Before this update, the OpenShift console UI would use a hardcoded pipeline run template as a fallback template when a runtime template was not found in the OpenShift namespace. With this update, the pipelines-as-code config map provides a new default pipeline run template, named pipelines-as-code-template-default, for the console to use.
  • With this update, Pipelines as Code supports Tekton Pipelines 0.44.0 minimal status.
  • With this update, Pipelines as Code supports Tekton v1 API, which means Pipelines as Code is now compatible with Tekton v0.44 and later.
  • With this update, you can configure custom console dashboards in addition to configuring a console for OpenShift and Tekton dashboards for k8s.
  • With this update, Pipelines as Code detects the installation of a GitHub application initiated using the tkn pac create repo command and does not require a GitHub webhook if it was installed globally.
  • Before this update, if there was an error on a PipelineRun execution and not on the tasks attached to PipelineRun, Pipelines as Code would not report the failure properly. With this update, Pipelines as Code reports the error properly on the GitHub checks when a PipelineRun could not be created.
  • With this update, Pipelines as Code includes a target_namespace variable, which expands to the currently running namespace where the PipelineRun is executed.
  • With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application.
  • With this update, Pipelines as Code does not report errors when the repository CR was not found.
  • With this update, Pipelines as Code reports an error if multiple pipeline runs with the same name were found.
4.1.3.2. Breaking changes
  • With this update, the prior version of the tkn command is not compatible with Red Hat OpenShift Pipelines 1.10.
  • This update removes support for Cluster and CloudEvent pipeline resources from Tekton CLI. You cannot create pipeline resources by using the tkn pipelineresource create command. Also, pipeline resources are no longer supported in the start command of a task, cluster task, or pipeline.
  • This update removes tekton as a provenance format from Tekton Chains.
4.1.3.3. Deprecated and removed features
  • In Red Hat OpenShift Pipelines 1.10, the ClusterTask commands are now deprecated and are planned to be removed in a future release. The tkn task create command is also deprecated with this update.
  • In Red Hat OpenShift Pipelines 1.10, the flags -i and -o that were used with the tkn task start command are now deprecated because the v1 API does not support pipeline resources.
  • In Red Hat OpenShift Pipelines 1.10, the flag -r that was used with the tkn pipeline start command is deprecated because the v1 API does not support pipeline resources.
  • The Red Hat OpenShift Pipelines 1.10 update sets the openshiftDefaultEmbeddedStatus parameter to both with full and minimal embedded status. The flag to change the default embedded status is also deprecated and will be removed. In addition, the pipeline default embedded status will be changed to minimal in a future release.
4.1.3.4. Known issues
  • This update includes the following backward incompatible changes:

    • Removal of the PipelineResources cluster
    • Removal of the PipelineResources cloud event
  • If the pipelines metrics feature does not work after a cluster upgrade, run the following command as a workaround:

    $ oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print $1}' | xargs oc delete tektoninstallersets
  • With this update, usage of external databases, such as Crunchy PostgreSQL, is not supported on IBM Power, IBM Z, and IBM LinuxONE. Instead, use the default Tekton Hub database.
4.1.3.5. Fixed issues
  • Before this update, the opc pac command generated a runtime error instead of showing any help. This update fixes the opc pac command to show the help message.
  • Before this update, running the tkn pac create repo command needed the webhook details for creating a repository. With this update, the tkn-pac create repo command does not configure a webhook when your GitHub application is installed.
  • Before this update, Pipelines as Code would not report a pipeline run creation error when Tekton Pipelines had issues creating the PipelineRun resource. For example, a non-existing task in a pipeline run would show no status. With this update, Pipelines as Code shows the proper error message coming from Tekton Pipelines along with the task that is missing.
  • This update fixes UI page redirection after a successful authentication. Now, you are redirected to the same page where you had attempted to log in to Tekton Hub.
  • This update fixes the list command with these flags, --all-namespaces and --output=yaml, for a cluster task, an individual task, and a pipeline.
  • This update removes the forward slash in the end of the repo.spec.url URL so that it matches the URL coming from GitHub.
  • Before this update, the marshalJSON function would not marshal a list of objects. With this update, the marshalJSON function marshals the list of objects.
  • With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application.
  • This update fixes the GitHub collaborator check when your repository has more than 100 users.
  • With this update, the sign and verify commands for a task or pipeline now work without a kubernetes configuration file.
  • With this update, Tekton Operator cleans leftover pruner cron jobs if pruner has been skipped on a namespace.
  • Before this update, the API ConfigMap object would not be updated with a user-configured value for the catalog refresh interval. This update fixes the CATALOG_REFRESH_INTERVAL API in the Tekton Hub CR.
  • This update fixes reconciling of PipelineRunStatus when changing the EmbeddedStatus feature flag. This update resets the following parameters:

    • The status.runs and status.taskruns parameters to nil with minimal EmbeddedStatus
    • The status.childReferences parameter to nil with full EmbeddedStatus
  • This update adds a conversion configuration to the ResolutionRequest CRD. This update properly configures conversion from the v1alpha1.ResolutionRequest request to the v1beta1.ResolutionRequest request.
  • This update checks for duplicate workspaces associated with a pipeline task.
  • This update fixes the default value for enabling resolvers in the code.
  • This update fixes TaskRef and PipelineRef names conversion by using a resolver.
4.1.3.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.3.6.1. Fixed issues for Pipelines as Code
  • Before this update, if the source branch information coming from payload included refs/heads/ but the user-configured target branch only included the branch name, main, in a CEL expression, the push request would fail. With this update, Pipelines as Code passes the push request and triggers a pipeline if either the base branch or target branch has refs/heads/ in the payload.
  • Before this update, when a PipelineRun object could not be created, the error received from the Tekton controller was not reported to the user. With this update, Pipelines as Code reports the error messages to the GitHub interface so that users can troubleshoot the errors. Pipelines as Code also reports the errors that occurred during pipeline execution.
  • With this update, Pipelines as Code does not echo a secret to the GitHub checks interface when it failed to create the secret on the OpenShift Container Platform cluster because of an infrastructure issue.
  • This update removes the deprecated APIs that are no longer in use from Red Hat OpenShift Pipelines.
4.1.3.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.3.7.1. Fixed issues

Before this update, an issue in the Tekton Operator prevented the user from setting the value of the enable-api-fields flag to beta. This update fixes the issue. Now, you can set the value of the enable-api-fields flag to beta in the TektonConfig CR.
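
For example, the following is a minimal sketch of setting this flag in the TektonConfig CR, assuming the default resource name config:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-api-fields: beta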

4.1.3.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.3

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.3 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.3.8.1. Fixed issues

Before this update, the Tekton Operator did not expose the performance configuration fields for any customizations. With this update, as a cluster administrator, you can customize the following performance configuration fields in the TektonConfig CR based on your needs:

  • disable-ha
  • buckets
  • kube-api-qps
  • kube-api-burst
  • threads-per-controller
4.1.3.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.4

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.4 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.3.9.1. Fixed issues
  • This update fixes the bundle resolver conversion issue for the PipelineRef field in a pipeline run. Now, the conversion feature sets the value of the kind field to Pipeline after conversion.
  • Before this update, the pipelinerun.timeouts field was reset to the timeouts.pipeline value, ignoring the timeouts.tasks and timeouts.finally values. This update fixes the issue and sets the correct default timeout value for a PipelineRun resource.
  • Before this update, the controller logs contained unnecessary data. This update fixes the issue.
4.1.3.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.5

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.5 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.

Important

Red Hat OpenShift Pipelines 1.10.5 is only available in the pipelines-1.10 channel on OpenShift Container Platform 4.10, 4.11, 4.12, and 4.13. It is not available in the latest channel for any OpenShift Container Platform version.

4.1.3.10.1. Fixed issues
  • Before this update, very large pipeline runs could not be listed or deleted by using the oc and tkn commands. This update mitigates the issue by compressing the large annotations that caused the problem. Note that if a pipeline run is still too large after compression, the same error recurs.
  • Before this update, only the pod template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object would be considered for a pipeline run. With this update, the pod template specified in the pipelineRun.spec.podTemplate object is also considered and merged with the template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object.

4.1.4. Release notes for Red Hat OpenShift Pipelines General Availability 1.9

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.4.1. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.9.

4.1.4.1.1. Pipelines
  • With this update, you can specify pipeline parameters and results in arrays and object dictionary forms.
  • This update provides support for Container Storage Interface (CSI) and projected volumes for your workspace.
  • With this update, you can specify the stdoutConfig and stderrConfig parameters when defining pipeline steps. Defining these parameters helps to capture standard output and standard error, associated with steps, to local files.
  • With this update, you can add variables in the steps[].onError event handler, for example, $(params.CONTINUE).
  • With this update, you can use the output from the finally task in the PipelineResults definition. For example, $(finally.<pipelinetask-name>.result.<result-name>), where <pipelinetask-name> denotes the pipeline task name and <result-name> denotes the result name.
  • This update supports task-level resource requirements for a task run.
  • With this update, you do not need to recreate parameters that are shared, based on their names, between a pipeline and the defined tasks. This update is part of a developer preview feature.
  • This update adds support for remote resolution, such as built-in git, cluster, bundle, and hub resolvers.
4.1.4.1.2. Triggers
  • This update adds the Interceptor CRD to define NamespacedInterceptor. You can use NamespacedInterceptor in the kind section of interceptors reference in triggers or in the EventListener specification.
  • This update enables CloudEvents.
  • With this update, you can configure the webhook port number when defining a trigger.
  • This update supports using trigger eventID as input to TriggerBinding.
  • This update supports validation and rotation of certificates for the ClusterInterceptor server. 

    • Triggers perform certificate validation for core interceptors and rotate a new certificate to ClusterInterceptor when its certificate expires.
4.1.4.1.3. CLI 
  • This update supports showing annotations in the describe command.
  • This update supports showing pipeline, tasks, and timeout in the pr describe command.
  • This update adds flags to provide pipeline, tasks, and timeout in the pipeline start command.
  • This update supports showing the presence of workspace, optional or mandatory, in the describe command of a task and pipeline.
  • This update adds the timestamps flag to show logs with a timestamp.
  • This update adds a new flag --ignore-running-pipelinerun, which ignores the deletion of TaskRun associated with PipelineRun.
  • This update adds support for experimental commands. This update also adds experimental subcommands, sign and verify to the tkn CLI tool.
  • This update makes the Z shell (Zsh) completion feature usable without generating any files.
  • This update introduces a new CLI tool called opc. It is anticipated that an upcoming release will replace the tkn CLI tool with opc.

    Important
    • The new CLI tool opc is a Technology Preview feature.
    • opc will be a replacement for tkn with additional Red Hat OpenShift Pipelines specific features, which do not necessarily fit in tkn.
4.1.4.1.4. Operator
  • With this update, Pipelines as Code is installed by default. You can disable Pipelines as Code by using the -p flag:

    $ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}'
  • With this update, you can also modify Pipelines as Code configurations in the TektonConfig CRD.
  • With this update, if you disable the developer perspective, the Operator does not install developer console related custom resources.
  • This update includes ClusterTriggerBinding support for Bitbucket Server and Bitbucket Cloud and helps you to reuse a TriggerBinding across your entire cluster.
4.1.4.1.5. Resolvers
Important

Resolvers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, you can configure pipeline resolvers in the TektonConfig CRD. You can enable or disable these pipeline resolvers:  enable-bundles-resolver, enable-cluster-resolver, enable-git-resolver, and enable-hub-resolver.

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        enable-bundles-resolver: true
        enable-cluster-resolver: true
        enable-git-resolver: true
        enable-hub-resolver: true
    ...

    You can also provide resolver specific configurations in TektonConfig. For example, you can define the following fields in the map[string]string format to set configurations for individual resolvers:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        bundles-resolver-config:
          default-service-account: pipelines
        cluster-resolver-config:
          default-namespace: test
        git-resolver-config:
          server-url: localhost.com
        hub-resolver-config:
          default-tekton-hub-catalog: tekton
    ...
4.1.4.1.6. Tekton Chains
Important

Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Before this update, only Open Container Initiative (OCI) images were supported as outputs of TaskRun in the in-toto provenance agent. This update adds in-toto provenance metadata as outputs with these suffixes, ARTIFACT_URI and ARTIFACT_DIGEST.
  • Before this update, only TaskRun attestations were supported. This update adds support for PipelineRun attestations as well.
  • This update adds support for Tekton Chains to get the imgPullSecret parameter from the pod template. This update helps you to configure repository authentication based on each pipeline run or task run without modifying the service account.
4.1.4.1.7. Tekton Hub
Important

Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, as an administrator, you can use an external database, such as Crunchy PostgreSQL with Tekton Hub, instead of using the default Tekton Hub database. This update helps you to perform the following actions:

    • Specify the coordinates of an external database to be used with Tekton Hub
    • Disable the default Tekton Hub database deployed by the Operator
  • This update removes the dependency of config.yaml from external Git repositories and moves the complete configuration data into the API ConfigMap. This update helps an administrator to perform the following actions:

    • Add the configuration data, such as categories, catalogs, scopes, and defaultScopes in the Tekton Hub custom resource.
    • Modify Tekton Hub configuration data on the cluster. All modifications are preserved upon Operator upgrades.
    • Update the list of catalogs for Tekton Hub
    • Change the categories for Tekton Hub

      Note

      If you do not add any configuration data, you can use the default data in the API ConfigMap for Tekton Hub configurations.

4.1.4.1.8. Pipelines as Code
  • This update adds support for concurrency limit in the Repository CRD to define the maximum number of PipelineRuns running for a repository at a time. The PipelineRuns from a pull request or a push event are queued in alphabetical order.
  • This update adds a new command tkn pac logs for showing the logs of the latest pipeline run for a repository.
  • This update supports advanced event matching on file path for push and pull requests to GitHub and GitLab. For example, you can use the Common Expression Language (CEL) to run a pipeline only if a path has changed for any markdown file in the docs directory.

      ...
      annotations:
         pipelinesascode.tekton.dev/on-cel-expression: |
          event == "pull_request" && "docs/*.md".pathChanged()
  • With this update, you can reference a remote pipeline in the pipelineRef: object using annotations.
  • With this update, you can auto-configure new GitHub repositories with Pipelines as Code, which sets up a namespace and creates a Repository CRD for your GitHub repository.
  • With this update, Pipelines as Code generates metrics for PipelineRuns with provider information.
  • This update provides the following enhancements for the tkn-pac plugin:

    • Detects running pipelines correctly
    • Fixes showing duration when there is no failure completion time
    • Shows an error snippet and highlights the error regular expression pattern in the tkn-pac describe command
    • Adds the use-real-time switch to the tkn-pac ls and tkn-pac describe commands
    • Imports the tkn-pac logs documentation
    • Shows pipelineruntimeout as a failure in the tkn-pac ls and tkn-pac describe commands.
    • Shows a specific pipeline run failure with the --target-pipelinerun option.
  • With this update, you can view the errors for your pipeline run in the form of a version control system (VCS) comment or a small snippet in the GitHub checks.
  • With this update, Pipelines as Code optionally can detect errors inside the tasks if they are of a simple format and add those tasks as annotations in GitHub. This update is part of a developer preview feature.
  • This update adds the following new commands:

    • tkn-pac webhook add: Adds a webhook to project repository settings and updates the webhook.secret key in the existing k8s Secret object without updating the repository.
    • tkn-pac webhook update-token: Updates provider token for an existing k8s Secret object without updating the repository.
  • This update enhances functionality of the tkn-pac create repo command, which creates and configures webhooks for GitHub, GitLab, and BitbucketCloud along with creating repositories.
  • With this update, the tkn-pac describe command shows the latest fifty events in a sorted order.
  • This update adds the --last option to the tkn-pac logs command.
  • With this update, the tkn-pac resolve command prompts for a token on detecting a git_auth_secret in the file template.
  • With this update, Pipelines as Code hides secrets from log snippets to avoid exposing secrets in the GitHub interface.
  • With this update, the secrets automatically generated for git_auth_secret have an owner reference to the PipelineRun. The secrets are cleaned up with the PipelineRun, not after the pipeline run execution.
  • This update adds support to cancel a pipeline run with the /cancel comment.
  • Before this update, the GitHub apps token scoping was not defined and tokens would be used on every repository installation. With this update, you can scope the GitHub apps token to the target repository using the following parameters:

    • secret-github-app-token-scoped: Scopes the app token to the target repository, not to every repository the app installation has access to.
    • secret-github-app-scope-extra-repos: Customizes the scoping of the app token with an additional owner or repository.
  • With this update, you can use Pipelines as Code with your own Git repositories that are hosted on GitLab.
  • With this update, you can access pipeline execution details in the form of kubernetes events in your namespace. These details help you to troubleshoot pipeline errors without needing access to admin namespaces.
  • This update supports authentication of URLs in the Pipelines as Code resolver with the Git provider.
  • With this update, you can set the name of the hub catalog by using a setting in the pipelines-as-code config map.
  • With this update, you can set the maximum and default limits for the max-keep-run parameter.
  • This update adds documents on how to inject custom Secure Sockets Layer (SSL) certificates in Pipelines as Code to let you connect to provider instance with custom certificates.
  • With this update, the PipelineRun resource definition has the log URL included as an annotation. For example, the tkn-pac describe command shows the log link when describing a PipelineRun.
  • With this update, tkn-pac logs show repository name, instead of PipelineRun name.
4.1.4.2. Breaking changes
  • With this update, the Conditions custom resource definition (CRD) type has been removed. As an alternative, use when expressions.
  • With this update, support for tekton.dev/v1alpha1 API pipeline resources, such as Pipeline, PipelineRun, Task, ClusterTask, and TaskRun, has been removed.
  • With this update, the tkn-pac setup command has been removed. Instead, use the tkn-pac webhook add command to re-add a webhook to an existing Git repository, and use the tkn-pac webhook update-token command to update the personal provider access token for an existing Secret object in the Git repository.
  • With this update, a namespace that runs a pipeline with default settings does not apply the pod-security.kubernetes.io/enforce:privileged label to a workload.
4.1.4.3. Deprecated and removed features
  • In the Red Hat OpenShift Pipelines 1.9.0 release, ClusterTasks are deprecated and planned to be removed in a future release. As an alternative, you can use Cluster Resolver.
  • In the Red Hat OpenShift Pipelines 1.9.0 release, the use of the triggers and the namespaceSelector fields in a single EventListener specification is deprecated and planned to be removed in a future release. You can still use these fields in separate EventListener specifications.
  • In the Red Hat OpenShift Pipelines 1.9.0 release, the tkn pipelinerun describe command does not display timeouts for the PipelineRun resource.
  • In the Red Hat OpenShift Pipelines 1.9.0 release, the PipelineResource custom resource (CR) is deprecated. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API.
  • In the Red Hat OpenShift Pipelines 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it.
4.1.4.4. Known issues
  • The chains-secret and chains-config config maps are removed after you uninstall the Red Hat OpenShift Pipelines Operator. As they contain user data, they should be preserved and not deleted.
  • When running the tkn pac set of commands on Windows, you may receive the following error message: Command finished with error: not supported by Windows.

    Workaround: Set the NO_COLOR environment variable to true.

  • Running the tkn pac resolve -f <filename> | oc create -f command might not provide the expected results if the tkn pac resolve command uses a templated parameter value.

    Workaround: To mitigate this issue, save the output of tkn pac resolve in a temporary file by running the tkn pac resolve -f <filename> -o tempfile.yaml command and then run the oc create -f tempfile.yaml command. For example, tkn pac resolve -f <filename> -o /tmp/pull-request-resolved.yaml && oc create -f /tmp/pull-request-resolved.yaml.

4.1.4.5. Fixed issues
  • Before this update, after replacing an empty array, the original array returned an empty string, rendering the parameters inside it invalid. With this update, this issue is resolved and the original array returns as empty.
  • Before this update, if duplicate secrets were present in a service account for a pipelines run, it resulted in failure in task pod creation. With this update, this issue is resolved and the task pod is created successfully even if duplicate secrets are present in a service account.
  • Before this update, by looking at the TaskRun’s spec.StatusMessage field, users could not distinguish whether the TaskRun had been cancelled by the user or by a PipelineRun that it was part of. With this update, this issue is resolved and users can distinguish the status of the TaskRun by looking at the TaskRun’s spec.StatusMessage field.
  • Before this update, webhook validation was removed on deletion of old versions of invalid objects. With this update, this issue is resolved.
  • Before this update, if you set the timeouts.pipeline parameter to 0, you could not set the timeouts.tasks or timeouts.finally parameters. This update resolves the issue. Now, when you set the timeouts.pipeline parameter value, you can also set the value of either the timeouts.tasks parameter or the timeouts.finally parameter. For example:

    kind: PipelineRun
    spec:
      timeouts:
        pipeline: "0"  # No timeout
        tasks: "0h3m0s"
  • Before this update, a race condition could occur if another tool updated labels or annotations on a PipelineRun or TaskRun. With this update, this issue is resolved and you can merge labels or annotations.
  • Before this update, the log keys did not match those of the pipeline controllers. With this update, the log keys have been updated to match the log stream of the pipeline controllers. The keys in logs have been changed from "ts" to "timestamp", from "level" to "severity", and from "message" to "msg".
  • Before this update, if a PipelineRun was deleted with an unknown status, an error message was not generated. With this update, this issue is resolved and an error message is generated.
  • Before this update, you had to use the kubeconfig file to access bundle commands such as list and push. With this update, this issue has been resolved and the kubeconfig file is not required to access bundle commands.
  • Before this update, TaskRuns were deleted even if the parent PipelineRun was still running. With this update, this issue is resolved and TaskRuns are not deleted if the parent PipelineRun is running.
  • Before this update, if the user attempted to build a bundle with more objects than the pipeline controller permitted, the Tekton CLI did not display an error message. With this update, this issue is resolved and the Tekton CLI displays an error message if the user attempts to build a bundle with more objects than the limit permitted in the pipeline controller.
  • Before this update, if namespaces were removed from the cluster, then the operator did not remove namespaces from the ClusterInterceptor ClusterRoleBinding subjects. With this update, this issue has been resolved, and the operator removes the namespaces from the ClusterInterceptor ClusterRoleBinding subjects.
  • Before this update, the default installation of the Red Hat OpenShift Pipelines Operator resulted in the pipelines-scc-rolebinding security context constraint (SCC) role binding resource remaining in the cluster. With this update, the default installation of the Red Hat OpenShift Pipelines Operator removes the pipelines-scc-rolebinding SCC role binding resource from the cluster.
  • Before this update, Pipelines as Code did not get updated values from the Pipelines as Code ConfigMap object. With this update, this issue is fixed and the Pipelines as Code ConfigMap object looks for any new changes.
  • Before this update, the Pipelines as Code controller did not wait for the tekton.dev/pipeline label to be updated before adding the checkrun id label, which caused race conditions. With this update, the Pipelines as Code controller waits for the tekton.dev/pipeline label to be updated and then adds the checkrun id label, which avoids race conditions.
  • Before this update, the tkn-pac create repo command did not override a PipelineRun if it already existed in the Git repository. With this update, the tkn-pac create repo command overrides an existing PipelineRun in the Git repository, which resolves the issue.
  • Before this update, the tkn pac describe command did not display reasons for every message. With this update, this issue is fixed and the tkn pac describe command displays reasons for every message.
  • Before this update, a pull request failed if the user provided annotation values in a regex form, for example, refs/head/rel-*. The pull request failed because its base branch was missing the refs/heads prefix. With this update, the prefix is added and checked for a match, which resolves the issue so that the pull request succeeds.
4.1.4.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.4.7. Fixed issues
  • Before this update, the tkn pac repo list command did not run on Microsoft Windows. This update fixes the issue, and now you can run the tkn pac repo list command on Microsoft Windows.
  • Before this update, the Pipelines as Code watcher did not receive all the configuration change events. With this update, the Pipelines as Code watcher is updated so that it no longer misses configuration change events.
  • Before this update, the pods created by Pipelines as Code, such as TaskRun or PipelineRun pods, could not access custom certificates exposed by the user in the cluster. This update fixes the issue, and you can now access custom certificates from the TaskRun or PipelineRun pods in the cluster.
  • Before this update, on a cluster enabled with FIPS, the tekton-triggers-core-interceptors core interceptor used in the Trigger resource did not function after the Pipelines Operator was upgraded to version 1.9. This update resolves the issue. Now, OpenShift uses MinTLS 1.2 for all its components. As a result, the tekton-triggers-core-interceptors core interceptor updates to TLS version 1.2 and its functionality runs accurately.
  • Before this update, when using a pipeline run with an internal OpenShift image registry, the URL to the image had to be hardcoded in the pipeline run definition. For example:

    ...
      - name: IMAGE_NAME
        value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'
    ...

    When using a pipeline run in the context of Pipelines as Code, such hardcoded values prevented the pipeline run definitions from being used in different clusters and namespaces.

    With this update, you can use the dynamic template variables instead of hardcoding the values for namespaces and pipeline run names to generalize pipeline run definitions. For example:

    ...
      - name: IMAGE_NAME
        value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/$(context.pipelineRun.name)'
    ...
  • Before this update, Pipelines as Code used the same GitHub token to fetch a remote task available in the same host only on the default GitHub branch. This update resolves the issue. Now Pipelines as Code uses the same GitHub token to fetch a remote task from any GitHub branch.
4.1.4.8. Known issues
  • The value for CATALOG_REFRESH_INTERVAL, a field in the Hub API ConfigMap object used in the Tekton Hub CR, is not updated with a custom value provided by the user.

    Workaround: None. You can track the issue SRVKP-2854.

4.1.4.9. Breaking changes
  • With this update, an OLM misconfiguration issue has been introduced, which prevents the upgrade of the OpenShift Container Platform. This issue will be fixed in a future release.
4.1.4.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.

4.1.4.11. Fixed issues
  • Before this update, an OLM misconfiguration issue had been introduced in the previous version of the release, which prevented the upgrade of OpenShift Container Platform. With this update, this misconfiguration issue has been fixed.
4.1.4.12. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.3

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.3 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.

4.1.4.13. Fixed issues
  • This update fixes the performance issues for large pipelines. Now, the CPU usage is reduced by 61% and the memory usage is reduced by 44%.
  • Before this update, a pipeline run would fail if a task did not run because of its when expression. This update fixes the issue by preventing the validation of a skipped task result in pipeline results. Now, the pipeline result is not emitted and the pipeline run does not fail because of a missing result.
  • This update fixes the pipelineref.bundle conversion to the bundle resolver for the v1beta1 API. Now, the conversion feature sets the value of the kind field to Pipeline after conversion.
  • Before this update, an issue in the Pipelines Operator prevented the user from setting the value of the spec.pipeline.enable-api-fields field to beta. This update fixes the issue. Now, you can set the value to beta along with alpha and stable in the TektonConfig custom resource.
  • Before this update, when Pipelines as Code could not create a secret due to a cluster error, it would show the temporary token on the GitHub check run, which is public. This update fixes the issue. Now, the token is no longer displayed on the GitHub checks interface when the creation of the secret fails.
4.1.4.14. Known issues
  • There is currently a known issue with the stop option for pipeline runs in the OpenShift Container Platform web console. The stop option in the Actions drop-down list is not working as expected and does not cancel the pipeline run.
  • There is currently a known issue with upgrading to Pipelines version 1.9.x due to a failing custom resource definition conversion.

    Workaround: Before upgrading to Pipelines version 1.9.x, perform the step mentioned in the solution on the Red Hat Customer Portal.

4.1.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.8

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.

4.1.5.1. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.8.

4.1.5.1.1. Pipelines
  • With this update, you can run Red Hat OpenShift Pipelines GA 1.8 and later on an OpenShift Container Platform cluster that is running on ARM hardware. This includes support for ClusterTask resources and the tkn CLI tool.
Important

Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • This update implements Step and Sidecar overrides for TaskRun resources.
  • This update adds minimal TaskRun and Run statuses within PipelineRun statuses.

    To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.
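
    For example, a minimal sketch of the relevant part of the TektonConfig custom resource; the resource created by the Operator is named config:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        enable-api-fields: alpha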

  • With this update, the graceful termination of pipeline runs feature is promoted from an alpha feature to a stable feature. As a result, the previously deprecated PipelineRunCancelled status remains deprecated and is planned to be removed in a future release.

    Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition.

  • With this update, you can specify the workspace for a pipeline task by using the name of the workspace. This change makes it easier to specify a shared workspace for a pair of Pipeline and PipelineTask resources. You can also continue to map workspaces explicitly.

    To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.
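
    The following sketch illustrates the shorthand, assuming both the pipeline and the referenced task declare a workspace named source; when the names match, you can omit the explicit workspace mapping:

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: example-pipeline
    spec:
      workspaces:
        - name: source
      tasks:
        - name: build
          taskRef:
            name: build-task        # hypothetical task that declares a workspace named source
          workspaces:
            - name: source          # binds by name; no explicit 'workspace:' field required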

  • With this update, parameters in embedded specifications are propagated without mutations.
  • With this update, you can specify the required metadata of a Task resource referenced by a PipelineRun resource by using annotations and labels. This way, Task metadata that depends on the execution context is available during the pipeline run.
  • This update adds support for object or dictionary types in params and results values. This change affects backward compatibility and sometimes breaks forward compatibility, such as using an earlier client with a later Red Hat OpenShift Pipelines version. This update changes the ArrayOrStruct structure, which affects projects that use the Go language API as a library.
  • This update adds a SkippingReason value to the SkippedTasks field of the PipelineRun status fields so that users know why a given PipelineTask was skipped.
  • This update supports an alpha feature in which you can use an array type for emitting results from a Task object. The result type is changed from string to ArrayOrString. For example, a task can specify a type to produce an array result:

    kind: Task
    apiVersion: tekton.dev/v1beta1
    metadata:
      name: write-array
      annotations:
        description: |
          A simple task that writes array
    spec:
      results:
        - name: array-results
          type: array
          description: The array results
    ...

    Additionally, you can run a task script to populate the results with an array:

    $ echo -n "[\"hello\",\"world\"]" | tee $(results.array-results.path)

    To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.

    This feature is in progress and is part of TEP-0076.

4.1.5.1.2. Triggers
  • This update transitions the TriggerGroups field in the EventListener specification from an alpha feature to a stable feature. Using this field, you can specify a set of interceptors before selecting and running a group of triggers.

    Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition.

  • With this update, the Trigger resource supports end-to-end secure connections by running the ClusterInterceptor server using HTTPS.
4.1.5.1.3. CLI
  • With this update, you can use the tkn taskrun export command to export a live task run from a cluster to a YAML file, which you can use to import the task run to another cluster.
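
    For example, to export a live task run to a YAML file that you can import into another cluster; the task run name and namespace are placeholders:

    $ tkn taskrun export <taskrun_name> -n <namespace> > exported-taskrun.yaml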
  • With this update, you can add the -o name flag to the tkn pipeline start command to print the name of the pipeline run right after it starts.
  • This update adds a list of available plug-ins to the output of the tkn --help command.
  • With this update, while deleting a pipeline run or task run, you can use both the --keep and --keep-since flags together.
  • With this update, you can use Cancelled as the value of the spec.status field rather than the deprecated PipelineRunCancelled value.
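
    For illustration, a sketch of setting the field by patching the run with oc; the pipeline run name is a placeholder:

    $ oc patch pipelinerun <pipelinerun_name> --type merge -p '{"spec":{"status":"Cancelled"}}'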
4.1.5.1.4. Operator
  • With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom database rather than the default database.
  • With this update, as a cluster administrator, if you enable your local Tekton Hub instance, it periodically refreshes the database so that changes in the catalog appear in the Tekton Hub web console. You can adjust the period between refreshes.

    Previously, to add the tasks and pipelines in the catalog to the database, you performed that task manually or set up a cron job to do it for you.

  • With this update, you can install and run a Tekton Hub instance with minimal configuration. This way, you can start working with your teams to decide which additional customizations they might want.
  • This update adds GIT_SSL_CAINFO to the git-clone task so you can clone secured repositories.
4.1.5.1.5. Tekton Chains
Important

Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, you can log in to a vault by using OIDC rather than a static token. This change means that Spire can generate the OIDC credential so that only trusted workloads are allowed to log in to the vault. Additionally, you can pass the vault address as a configuration value rather than inject it as an environment variable.
  • The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator because directly updating the config map is not supported when installed by using the Red Hat OpenShift Pipelines Operator. However, with this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades.
4.1.5.1.6. Tekton Hub
Important

Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, if you install a fresh instance of Tekton Hub by using the Operator, the Tekton Hub login is disabled by default. To enable the login and rating features, you must create the Hub API secret while installing Tekton Hub.

    Note

    Because Tekton Hub login was enabled by default in Red Hat OpenShift Pipelines 1.7, if you upgrade the Operator, the login is enabled by default in Red Hat OpenShift Pipelines 1.8. To disable this login, see Disabling Tekton Hub login after upgrading from OpenShift Pipelines 1.7.x to 1.8.x.

  • With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom PostgreSQL 13 database rather than the default database. To do so, create a Secret resource named tekton-hub-db. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: tekton-hub-db
      labels:
        app: tekton-hub-db
    type: Opaque
    stringData:
      POSTGRES_HOST: <hostname>
      POSTGRES_DB: <database_name>
      POSTGRES_USER: <username>
      POSTGRES_PASSWORD: <password>
      POSTGRES_PORT: <listening_port_number>
  • With this update, you no longer need to log in to the Tekton Hub web console to add resources from the catalog to the database. Now, these resources are automatically added when the Tekton Hub API starts running for the first time.
  • This update automatically refreshes the catalog every 30 minutes by calling the catalog refresh API job. This interval is user-configurable.
4.1.5.1.7. Pipelines as Code
Important

Pipelines as Code (PAC) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, as a developer, you get a notification from the tkn-pac CLI tool if you try to add a duplicate repository to a Pipelines as Code run. When you enter tkn pac create repository, each repository must have a unique URL. This notification also helps prevent hijacking exploits.
  • With this update, as a developer, you can use the new tkn-pac setup CLI command to add a Git repository to Pipelines as Code by using the webhook mechanism. This way, you can use Pipelines as Code even when using GitHub Apps is not feasible. This capability includes support for repositories on GitHub, GitLab, and Bitbucket.
  • With this update, Pipelines as Code supports GitLab integration with features such as the following:

    • ACL (Access Control List) on project or group
    • /ok-to-test support from allowed users
    • /retest support.
  • With this update, you can perform advanced pipeline filtering with Common Expression Language (CEL). With CEL, you can match pipeline runs with different Git provider events by using annotations in the PipelineRun resource. For example:

      ...
      annotations:
         pipelinesascode.tekton.dev/on-cel-expression: |
          event == "pull_request" && target_branch == "main" && source_branch == "wip"
  • Previously, as a developer, you could have only one pipeline run in your .tekton directory for each Git event, such as a pull request. With this update, you can have multiple pipeline runs in your .tekton directory. The web console displays the status and reports of the runs. The pipeline runs operate in parallel and report back to the Git provider interface.
  • With this update, you can test or retest a pipeline run by commenting /test or /retest on a pull request. You can also specify the pipeline run by name. For example, you can enter /test <pipelinerun_name> or /retest <pipelinerun_name>.
  • With this update, you can delete a repository custom resource and its associated secrets by using the new tkn-pac delete repository command.
4.1.5.2. Breaking changes
  • This update changes the default metrics level of TaskRun and PipelineRun resources to the following values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-observability
      namespace: tekton-pipelines
      labels:
        app.kubernetes.io/instance: default
        app.kubernetes.io/part-of: tekton-pipelines
    data:
      _example: |
      ...
        metrics.taskrun.level: "task"
        metrics.taskrun.duration-type: "histogram"
        metrics.pipelinerun.level: "pipeline"
        metrics.pipelinerun.duration-type: "histogram"
  • With this update, if an annotation or label is present in both Pipeline and PipelineRun resources, the value in the Run type takes precedence. The same is true if an annotation or label is present in Task and TaskRun resources.
  • In Red Hat OpenShift Pipelines 1.8, the previously deprecated PipelineRun.Spec.ServiceAccountNames field has been removed. Use the PipelineRun.Spec.TaskRunSpecs field instead.
  • In Red Hat OpenShift Pipelines 1.8, the previously deprecated TaskRun.Status.ResourceResults.ResourceRef field has been removed. Use the TaskRun.Status.ResourceResults.ResourceName field instead.
  • In Red Hat OpenShift Pipelines 1.8, the previously deprecated Conditions resource type has been removed. Remove the Conditions resource from Pipeline resource definitions that include it. Use when expressions in PipelineRun definitions instead.
  • For Tekton Chains, the tekton-provenance format has been removed in this release. Use the in-toto format by setting "artifacts.taskrun.format": "in-toto" in the TektonChain custom resource instead.
  • Red Hat OpenShift Pipelines 1.7.x shipped with Pipelines as Code 0.5.x. The current update ships with Pipelines as Code 0.10.x. This change creates a new route in the openshift-pipelines namespace for the new controller. You must update this route in GitHub Apps or webhooks that use Pipelines as Code. To fetch the route, use the following command:

    $ oc get route -n openshift-pipelines pipelines-as-code-controller \
      --template='https://{{ .spec.host }}'
  • With this update, Pipelines as Code renames the default secret keys for the Repository custom resource definition (CRD). In your CRD, replace token with provider.token, and replace secret with webhook.secret.
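
    For example, a minimal sketch of a Secret object that uses the renamed keys; the Secret name and values are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <repository_secret_name>
    type: Opaque
    stringData:
      provider.token: <git_provider_token>
      webhook.secret: <webhook_secret_value>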
  • With this update, Pipelines as Code replaces a special template variable with one that supports multiple pipeline runs for private repositories. In your pipeline runs, replace secret: pac-git-basic-auth-{{repo_owner}}-{{repo_name}} with secret: {{ git_auth_secret }}.
  • With this update, Pipelines as Code updates the following commands in the tkn-pac CLI tool:

    • Replace tkn pac repository create with tkn pac create repository.
    • Replace tkn pac repository delete with tkn pac delete repository.
    • Replace tkn pac repository list with tkn pac list.
4.1.5.3. Deprecated and removed features
  • Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are removed. To install and upgrade the Operator, use the appropriate pipelines-<version> channel, or the latest channel for the most recent stable version. For example, to install the Pipelines Operator version 1.8.x, use the pipelines-1.8 channel.
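
    The following sketch shows a Subscription object that uses the pipelines-1.8 channel; the package and catalog source names are assumptions based on the standard Red Hat catalog naming:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-pipelines-operator
      namespace: openshift-operators
    spec:
      channel: pipelines-1.8
      name: openshift-pipelines-operator-rh
      source: redhat-operators
      sourceNamespace: openshift-marketplace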

    Note

    In OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator.

  • Support for the tekton.dev/v1alpha1 API version, which was deprecated in Red Hat OpenShift Pipelines GA 1.6, is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.

    This change affects the pipeline component, which includes the TaskRun, PipelineRun, Task, Pipeline, and similar tekton.dev/v1alpha1 resources. As an alternative, update existing resources to use apiVersion: tekton.dev/v1beta1 as described in Migrating From Tekton v1alpha1 to Tekton v1beta1.

    Bug fixes and support for the tekton.dev/v1alpha1 API version are provided only through the end of the current GA 1.8 lifecycle.

    Important

    For the Tekton Operator, the operator.tekton.dev/v1alpha1 API version is not deprecated. You do not need to make changes to this value.

  • In Red Hat OpenShift Pipelines 1.8, the PipelineResource custom resource (CR) is available but no longer supported. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API, which had been deprecated and planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.
  • In Red Hat OpenShift Pipelines 1.8, the Condition custom resource (CR) is removed. The Condition CR was part of the tekton.dev/v1alpha1 API, which has been deprecated and is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.
  • In Red Hat OpenShift Pipelines 1.8, the gcr.io image for gsutil has been removed. This removal might break clusters with Pipeline resources that depend on this image. Bug fixes and support are provided only through the end of the Red Hat OpenShift Pipelines 1.7 lifecycle.
  • In Red Hat OpenShift Pipelines 1.8, the PipelineRun.Status.TaskRuns and PipelineRun.Status.Runs fields are deprecated and are planned to be removed in a future release. See TEP-0100: Embedded TaskRuns and Runs Status in PipelineRuns.
  • In Red Hat OpenShift Pipelines 1.8, the pipelineRunCancelled state is deprecated and planned to be removed in a future release. Graceful termination of PipelineRun objects is now promoted from an alpha feature to a stable feature. (See TEP-0058: Graceful Pipeline Run Termination.) As an alternative, you can use the Cancelled state, which replaces the pipelineRunCancelled state.

    You do not need to make changes to your Pipeline and Task resources. If you have tools that cancel pipeline runs, you must update tools in the next release. This change also affects tools such as the CLI, IDE extensions, and so on, so that they support the new PipelineRun statuses.

    Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition.

  • In Red Hat OpenShift Pipelines 1.8, the timeout field in PipelineRun has been deprecated. Instead, use the PipelineRun.Timeouts field, which is now promoted from an alpha feature to a stable feature.

    Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition.

  • In Red Hat OpenShift Pipelines 1.8, init containers are omitted from the LimitRange object’s default request calculations.
4.1.5.4. Known issues
  • The s2i-nodejs pipeline cannot use the nodejs:14-ubi8-minimal image stream to perform source-to-image (S2I) builds. Using that image stream generates the following error message: error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127.

    Workaround: Use nodejs:14-ubi8 rather than the nodejs:14-ubi8-minimal image stream.

  • When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters.

    Workaround: Specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11.

    Tip

    Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub, verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name>

  • On ARM, IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported.
  • Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition.
4.1.5.5. Fixed issues
  • Before this update, the metrics for pipeline runs in the Developer view of the web console were incomplete and outdated. With this update, the issue has been fixed so that the metrics are correct.
  • Before this update, if a pipeline had two parallel tasks that failed and one of them had retries=2, the final tasks never ran, and the pipeline timed out and failed to run. For example, the pipelines-operator-subscription task failed intermittently with the following error message: Unable to connect to the server: EOF. With this update, the issue has been fixed so that the final tasks always run.
  • Before this update, if a pipeline run stopped because a task run failed, other task runs might not complete their retries. As a result, no finally tasks were scheduled, which caused the pipeline to hang. This update resolves the issue. TaskRuns and Run objects can retry when a pipeline run has stopped, even by graceful stopping, so that pipeline runs can complete.
  • This update changes how resource requirements are calculated when one or more LimitRange objects are present in the namespace where a TaskRun object exists. The scheduler now considers step containers and excludes all other app containers, such as sidecar containers, when factoring requests from LimitRange objects.
  • Before this update, under specific conditions, the flag package might incorrectly parse a subcommand immediately following a double dash flag terminator, --. In that case, it ran the entrypoint subcommand rather than the actual command. This update fixes this flag-parsing issue so that the entrypoint runs the correct command.
  • Before this update, the controller might generate multiple panics if pulling an image failed, or its pull status was incomplete. This update fixes the issue by checking the step.ImageID value rather than the status.TaskSpec value.
  • Before this update, canceling a pipeline run that contained an unscheduled custom task produced a PipelineRunCouldntCancel error. This update fixes the issue. You can cancel a pipeline run that contains an unscheduled custom task without producing that error.
  • Before this update, if the <NAME> in $params["<NAME>"] or $params['<NAME>'] contained a dot character (.), any part of the name to the right of the dot was not extracted. For example, from $params["org.ipsum.lorem"], only org was extracted.

    This update fixes the issue so that $params fetches the complete value. For example, $params["org.ipsum.lorem"] and $params['org.ipsum.lorem'] are valid and the entire value of <NAME>, org.ipsum.lorem, is extracted.

    It also throws an error if <NAME> is not enclosed in single or double quotes. For example, $params.org.ipsum.lorem is not valid and generates a validation error.

  • With this update, Trigger resources support custom interceptors and ensure that the port of the custom interceptor service is the same as the port in the ClusterInterceptor definition file.
  • Before this update, the tkn version command for Tekton Chains and Operator components did not work correctly. This update fixes the issue so that the command works correctly and returns version information for those components.
  • Before this update, if you ran a tkn pr delete --ignore-running command and a pipeline run did not have a status.condition value, the tkn CLI tool produced a null-pointer error (NPE). This update fixes the issue so that the CLI tool now generates an error and correctly ignores pipeline runs that are still running.
  • Before this update, if you used the tkn pr delete --keep <value> or tkn tr delete --keep <value> commands, and the number of pipeline runs or task runs was less than the value, the command did not return an error as expected. This update fixes the issue so that the command correctly returns an error under those conditions.
  • Before this update, if you used the tkn pr delete or tkn tr delete commands with the -p or -t flags together with the --ignore-running flag, the commands incorrectly deleted running or pending resources. This update fixes the issue so that these commands correctly ignore running or pending resources.
  • With this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades.
  • With this update, ClusterTask resources no longer run as root by default, except for the buildah and s2i cluster tasks.
  • Before this update, tasks on Red Hat OpenShift Pipelines 1.7.1 failed when using init as a first argument followed by two or more arguments. With this update, the flags are parsed correctly, and the task runs are successful.
  • Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to an invalid role binding, with the following error message:

    error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io
    "openshift-operators-prometheus-k8s-read-binding" is invalid:
    roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef

    This update fixes the issue so that the failure no longer occurs.

  • Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly.
  • With this update, Pipelines as Code pods run on infrastructure nodes if infrastructure node settings are configured in the TektonConfig custom resource (CR).
  • Previously, for the resource pruner, the Operator created a separate command for each namespace, and each command ran in its own container. This design consumed too many resources in clusters with a high number of namespaces. For example, to run a single command, a cluster with 1000 namespaces produced 1000 containers in a pod.

    This update fixes the issue. It passes the namespace-based configuration to the job so that all the commands run in one container in a loop.

  • In Tekton Chains, you must define a secret called signing-secrets to hold the key used for signing tasks and images. However, before this update, updating the Red Hat OpenShift Pipelines Operator reset or overwrote this secret, and the key was lost. This update fixes the issue. Now, if the secret is configured after installing Tekton Chains through the Operator, the secret persists, and it is not overwritten by upgrades.
  • Before this update, all S2I build tasks failed with an error similar to the following message:

    Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
    time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
    time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"

    With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks. As a result, the Buildah and S2I build tasks can run successfully.

    To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push:

    securityContext:
      capabilities:
        add: ["SETFCAP"]
  • Before this update, installing the Red Hat OpenShift Pipelines Operator took longer than expected. This update optimizes some settings to speed up the installation process.
  • With this update, Buildah and S2I cluster tasks have fewer steps than in previous versions. Some steps have been combined into a single step so that they work better with ResourceQuota and LimitRange objects and do not require more resources than necessary.
  • This update upgrades the Buildah, tkn CLI tool, and skopeo CLI tool versions in cluster tasks.
  • Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources.
  • Before this update, pods for the prune cronjobs were not scheduled on infrastructure nodes, as expected. Instead, they were scheduled on worker nodes or not scheduled at all. With this update, these types of pods can now be scheduled on infrastructure nodes if configured in the TektonConfig custom resource (CR).
4.1.5.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.1 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.

4.1.5.6.1. Known issues
  • By default, the containers have restricted permissions for enhanced security. The restricted permissions apply to all controller pods in the Red Hat OpenShift Pipelines Operator, and to some cluster tasks. Due to restricted permissions, the git-clone cluster task fails under certain configurations.

    Workaround: None. You can track the issue SRVKP-2634.

  • When installer sets are in a failed state, the status of the TektonConfig custom resource is incorrectly displayed as True instead of False.

    Example: Failed installer sets

    $ oc get tektoninstallerset
    NAME                                     READY   REASON
    addon-clustertasks-nx5xz                 False   Error
    addon-communityclustertasks-cfb2p        True
    addon-consolecli-ftrb8                   True
    addon-openshift-67dj2                    True
    addon-pac-cf7pz                          True
    addon-pipelines-fvllm                    True
    addon-triggers-b2wtt                     True
    addon-versioned-clustertasks-1-8-hqhnw   False   Error
    pipeline-w75ww                           True
    postpipeline-lrs22                       True
    prepipeline-ldlhw                        True
    rhosp-rbac-4dmgb                         True
    trigger-hfg64                            True
    validating-mutating-webhoook-28rf7       True

    Example: Incorrect TektonConfig status

    $ oc get tektonconfig config
    NAME     VERSION   READY   REASON
    config   1.8.1     True

4.1.5.6.2. Fixed issues
  • Before this update, the pruner deleted task runs of running pipelines and displayed the following warning: some tasks were indicated completed without ancestors being done. With this update, the pruner retains the task runs that are part of running pipelines.
  • Before this update, pipelines-1.8 was the default channel for installing the Red Hat OpenShift Pipelines Operator 1.8.x. With this update, latest is the default channel.
  • Before this update, the Pipelines as Code controller pods did not have access to certificates exposed by the user. With this update, Pipelines as Code can now access routes and Git repositories guarded by a self-signed or a custom certificate.
  • Before this update, the task failed with RBAC errors after upgrading from Red Hat OpenShift Pipelines 1.7.2 to 1.8.0. With this update, the tasks run successfully without any RBAC errors.
  • Before this update, using the tkn CLI tool, you could not remove task runs and pipeline runs that contained a result object whose type was array. With this update, you can use the tkn CLI tool to remove task runs and pipeline runs that contain a result object whose type is array.
  • Before this update, if a pipeline specification contained a task with an ENV_VARS parameter of array type, the pipeline run failed with the following error: invalid input params for task func-buildpacks: param types don’t match the user-specified type: [ENV_VARS]. With this update, pipeline runs with such pipeline and task specifications do not fail.
  • Before this update, cluster administrators could not provide a config.json file to the Buildah cluster task for accessing a container registry. With this update, cluster administrators can provide the Buildah cluster task with a config.json file by using the dockerconfig workspace.
4.1.5.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.2 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.

4.1.5.7.1. Fixed issues
  • Before this update, the git-clone task failed when cloning a repository using SSH keys. With this update, the role of the non-root user in the git-init task is removed, and the SSH program looks in the $HOME/.ssh/ directory for the correct keys.

4.1.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.

4.1.6.1. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.7.

4.1.6.1.1. Pipelines
  • With this update, pipelines-<version> is the default channel to install the Red Hat OpenShift Pipelines Operator. For example, the default channel to install the Pipelines Operator version 1.7 is pipelines-1.7. Cluster administrators can also use the latest channel to install the most recent stable version of the Operator.

    Note

    The preview and stable channels will be deprecated and removed in a future release.

  • When you run a command in a user namespace, your container runs as root (user id 0) but has user privileges on the host. With this update, to run pods in the user namespace, you must pass the annotations that CRI-O expects.

    • To add these annotations for all users, run the oc edit clustertask buildah command and edit the buildah cluster task.
    • To add the annotations to a specific namespace, export the cluster task as a task to that namespace.
  • Before this update, if certain conditions were not met, the when expression skipped a Task object and its dependent tasks. With this update, you can scope the when expression to guard the Task object only, not its dependent tasks. To enable this update, set the scope-when-expressions-to-task flag to true in the TektonConfig CRD.
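
    A minimal sketch, assuming the flag is exposed under the pipeline section of the TektonConfig custom resource:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        scope-when-expressions-to-task: true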

    Note

    The scope-when-expressions-to-task flag is deprecated and will be removed in a future release. As a best practice for Pipelines, use when expressions scoped to the guarded Task only.

  • With this update, you can use variable substitution in the subPath field of a workspace within a task.
  • With this update, you can reference parameters and results by using a bracket notation with single or double quotes. Prior to this update, you could only use the dot notation. For example, the following are now equivalent:

    • $(param.myparam), $(param['myparam']), and $(param["myparam"]).

      You can use single or double quotes to enclose parameter names that contain problematic characters, such as ".". For example, $(param['my.param']) and $(param["my.param"]).

  • With this update, you can include the onError parameter of a step in the task definition without enabling the enable-api-fields flag.
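
    A minimal sketch of a step that uses the onError parameter; the task name, image, and script are placeholders:

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: example-task
    spec:
      steps:
        - name: may-fail
          image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
          onError: continue        # the task run continues even if this step fails
          script: |
            exit 1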
4.1.6.1.2. Triggers
  • With this update, the feature-flag-triggers config map has a new field, labels-exclusion-pattern. You can set the value of this field to a regular expression (regex) pattern. The controller prevents labels that match the regex pattern from propagating from the event listener to the resources created for the event listener, as sketched below.
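
    A minimal sketch of the config map described here; the namespace and the regex pattern are placeholder assumptions:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: feature-flag-triggers
      namespace: openshift-pipelines    # assumption: the namespace where Triggers is installed
    data:
      labels-exclusion-pattern: "^tekton.dev/.*"    # placeholder regex; matching labels are not propagated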
  • With this update, the TriggerGroups field is added to the EventListener specification. Using this field, you can specify a set of interceptors to run before selecting and running a group of triggers. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.
  • With this update, Trigger resources support custom runs defined by a TriggerTemplate template.
  • With this update, Triggers support emitting Kubernetes events from an EventListener pod.
  • With this update, count metrics are available for the following objects: ClusterInterceptor, EventListener, TriggerTemplate, ClusterTriggerBinding, and TriggerBinding.
  • This update adds the ServicePort specification to the Kubernetes resource. You can use this specification to modify the port that exposes the event listener service, as shown in the sketch below. The default port is 8080.
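
    A sketch of an EventListener that overrides the default port, assuming the servicePort field sits under the kubernetesResource section of the resources specification; the trigger reference is a placeholder:

    apiVersion: triggers.tekton.dev/v1alpha1
    kind: EventListener
    metadata:
      name: example-listener
    spec:
      triggers:
        - triggerRef: example-trigger    # placeholder trigger
      resources:
        kubernetesResource:
          serviceType: ClusterIP
          servicePort: 8888              # exposes the event listener service on port 8888 instead of 8080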
  • With this update, you can use the targetURI field in the EventListener specification to send cloud events during trigger processing. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.
  • With this update, the tekton-triggers-eventlistener-roles object now has a patch verb, in addition to the create verb that already exists.
  • With this update, the securityContext.runAsUser parameter is removed from event listener deployment.
4.1.6.1.3. CLI
  • With this update, the tkn [pipeline | pipelinerun] export command exports a pipeline or pipeline run as a YAML file. For example:

    • Export a pipeline named test_pipeline in the openshift-pipelines namespace:

      $ tkn pipeline export test_pipeline -n openshift-pipelines
    • Export a pipeline run named test_pipeline_run in the openshift-pipelines namespace:

      $ tkn pipelinerun export test_pipeline_run -n openshift-pipelines
  • With this update, the --grace option is added to the tkn pipelinerun cancel command. Use the --grace option to terminate a pipeline run gracefully instead of forcing the termination. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.
  • This update adds the Operator and Chains versions to the output of the tkn version command.

    Important

    Tekton Chains is a Technology Preview feature.

  • With this update, the tkn pipelinerun describe command displays all canceled task runs, when you cancel a pipeline run. Before this fix, only one task run was displayed.
  • With this update, when you run the tkn [t | p | ct] start command with the --skip-optional-workspace flag, the command skips prompting you for specifications for optional workspaces. You can also skip these prompts when running in interactive mode.
  • With this update, you can use the tkn chains command to manage Tekton Chains. You can also use the --chains-namespace option to specify the namespace where you want to install Tekton Chains.

    Important

    Tekton Chains is a Technology Preview feature.

4.1.6.1.4. Operator
  • With this update, you can use the Red Hat OpenShift Pipelines Operator to install and deploy Tekton Hub and Tekton Chains.

    Important

    Tekton Chains and deployment of Tekton Hub on a cluster are Technology Preview features.

  • With this update, you can find and use Pipelines as Code (PAC) as an add-on option.

    Important

    Pipelines as Code is a Technology Preview feature.

  • With this update, you can now disable the installation of community cluster tasks by setting the communityClusterTasks parameter to false. For example:

    ...
    spec:
      profile: all
      targetNamespace: openshift-pipelines
      addon:
        params:
        - name: clusterTasks
          value: "true"
        - name: pipelineTemplates
          value: "true"
        - name: communityClusterTasks
          value: "false"
    ...
  • With this update, you can disable the integration of Tekton Hub with the Developer perspective by setting the enable-devconsole-integration flag in the TektonConfig custom resource to false. For example:

    ...
    hub:
      params:
        - name: enable-devconsole-integration
          value: "false"
    ...
  • With this update, the operator-config.yaml config map enables the output of the tkn version command to display the Operator version.
  • With this update, the version of the argocd-task-sync-and-wait tasks is modified to v0.2.
  • With this update to the TektonConfig CRD, the oc get tektonconfig command displays the Operator version.
  • With this update, a service monitor is added for the Triggers metrics.
4.1.6.1.5. Hub
Important

Deploying Tekton Hub on a cluster is a Technology Preview feature.

Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev.

Starting with Red Hat OpenShift Pipelines 1.7, cluster administrators can also install and deploy a custom instance of Tekton Hub on enterprise clusters. You can curate a catalog with reusable tasks and pipelines specific to your organization.

4.1.6.1.6. Chains
Important

Tekton Chains is a Technology Preview feature.

Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines.

By default, Tekton Chains monitors the task runs in your OpenShift Container Platform cluster. Chains takes snapshots of completed task runs, converts them to one or more standard payload formats, and signs and stores all artifacts.

Tekton Chains supports the following features:

  • You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as cosign.
  • You can use attestation formats such as in-toto.
  • You can securely store signatures and signed artifacts using OCI repository as a storage backend.
4.1.6.1.7. Pipelines as Code (PAC)
Important

Pipelines as Code is a Technology Preview feature.

With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports status.

Pipelines as Code supports the following features:

  • Pull request status. When iterating over a pull request, the status and control of the pull request are exercised on the platform hosting the Git repository.
  • Use of the GitHub Checks API to set the status of a pipeline run, including rechecks.
  • GitHub pull request and commit events.
  • Pull request actions in comments, such as /retest.
  • Git events filtering, and a separate pipeline for each event.
  • Automatic task resolution in Pipelines for local tasks, Tekton Hub, and remote URLs.
  • Use of GitHub blobs and objects API for retrieving configurations.
  • Access Control List (ACL) over a GitHub organization, or using a Prow-style OWNER file.
  • The tkn pac plugin for the tkn CLI tool, which you can use to manage Pipelines as Code repositories and bootstrapping.
  • Support for GitHub Application, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud.
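
For example, a pipeline run definition stored in the .tekton/ directory of a repository can declare which Git events trigger it by using Pipelines as Code annotations. The following is a minimal sketch; the annotation names follow the upstream Pipelines as Code project, and the resource names and pipeline content are hypothetical:

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: pull-request-checks            # hypothetical name
      annotations:
        pipelinesascode.tekton.dev/on-event: "[pull_request]"
        pipelinesascode.tekton.dev/on-target-branch: "[main]"
    spec:
      pipelineSpec:
        tasks:
          - name: say-hello                # hypothetical placeholder task
            taskSpec:
              steps:
                - name: echo
                  image: registry.access.redhat.com/ubi8/ubi-minimal
                  script: |
                    echo "triggered by a pull request"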
4.1.6.2. Deprecated features
  • Breaking change: This update removes the disable-working-directory-overwrite and disable-home-env-overwrite fields from the TektonConfig custom resource (CR). As a result, the TektonConfig CR no longer automatically sets the $HOME environment variable and workingDir parameter. You can still set the $HOME environment variable and workingDir parameter by using the env and workingDir fields in the Task custom resource definition (CRD).
  • The Conditions custom resource definition (CRD) type is deprecated and planned to be removed in a future release. Instead, use the recommended When expression.
  • Breaking change: The Triggers resource validates the templates and generates an error if you do not specify the EventListener and TriggerBinding values.
4.1.6.3. Known issues
  • When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11.

    Tip

    Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub, verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name>

  • On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported.
  • You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors:

    STEP 7: RUN /usr/libexec/s2i/assemble
    /bin/sh: /usr/libexec/s2i/assemble: No such file or directory
    subprocess exited with status 127
    subprocess exited with status 127
    error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
    time="2021-11-04T13:05:26Z" level=error msg="exit status 127"
  • Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition.
4.1.6.4. Fixed issues
  • With this update, if metadata such as labels and annotations are present in both Pipeline and PipelineRun object definitions, the values in the PipelineRun type take precedence. You can observe similar behavior for Task and TaskRun objects.
  • With this update, if the timeouts.tasks field or the timeouts.finally field is set to 0, then the timeouts.pipeline is also set to 0.
  • With this update, the -x set flag is removed from scripts that do not use a shebang. The fix reduces potential data leak from script execution.
  • With this update, any backslash character present in the usernames in Git credentials is escaped with an additional backslash in the .gitconfig file.
  • With this update, the finalizer property of the EventListener object is not necessary for cleaning up logging and config maps.
  • With this update, the default HTTP client associated with the event listener server is removed, and a custom HTTP client is added. As a result, the timeouts have improved.
  • With this update, the Triggers cluster role now works with owner references.
  • With this update, the race condition in the event listener does not happen when multiple interceptors return extensions.
  • With this update, the tkn pr delete command does not delete running pipeline runs when you use the --ignore-running flag.
  • With this update, the Operator pods do not continue restarting when you modify any add-on parameters.
  • With this update, the tkn serve CLI pod is scheduled on infrastructure nodes, if not configured otherwise in the Subscription and TektonConfig custom resources.
  • With this update, cluster tasks with specified versions are not deleted during upgrade.
4.1.6.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.1 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.

4.1.6.5.1. Fixed issues
  • Before this update, upgrading the Red Hat OpenShift Pipelines Operator deleted the data in the database associated with Tekton Hub and installed a new database. With this update, an Operator upgrade preserves the data.
  • Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles also can access the pipeline metrics.
  • Before this update, pipeline runs failed for pipelines containing tasks that emit large termination messages. The pipeline runs failed because the total size of termination messages of all containers in a pod cannot exceed 12 KB. With this update, the place-tools and step-init initialization containers that use the same image are merged to reduce the number of containers running in each task's pod. The solution reduces the chance of failed pipeline runs by minimizing the number of containers running in a task's pod. However, it does not remove the limitation of the maximum allowed size of a termination message.
  • Before this update, attempts to access resource URLs directly from the Tekton Hub web console resulted in an Nginx 404 error. With this update, the Tekton Hub web console image is fixed to allow accessing resource URLs directly from the Tekton Hub web console.
  • Before this update, for each namespace the resource pruner job created a separate container to prune resources. With this update, the resource pruner job runs commands for all namespaces as a loop in one container.
4.1.6.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.2 is available on OpenShift Container Platform 4.9, 4.10, and the upcoming version.

4.1.6.6.1. Known issues
  • The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator. Currently, there is no workaround for this issue.
4.1.6.6.2. Fixed issues
  • Before this update, tasks on Pipelines 1.7.1 failed when using init as the first argument followed by two or more arguments. With this update, the flags are parsed correctly and the task runs are successful.
  • Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to invalid role binding, with the following error message:

    error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef

    With this update, the Red Hat OpenShift Pipelines Operator installs with distinct role binding namespaces to avoid conflict with installation of other Operators.

  • Before this update, upgrading the Operator triggered a reset of the signing-secrets secret key for Tekton Chains to its default value. With this update, the custom secret key persists after you upgrade the Operator.

    Note

    Upgrading to Red Hat OpenShift Pipelines 1.7.2 resets the key. However, when you upgrade to future releases, the key is expected to persist.

  • Before this update, all S2I build tasks failed with an error similar to the following message:

    Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
    time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
    time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"

    With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks. As a result, the Buildah and S2I build tasks can run successfully.

    To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push:

    securityContext:
      capabilities:
        add: ["SETFCAP"]
4.1.6.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.3

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.3 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.

4.1.6.7.1. Fixed issues
  • Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources.
  • Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly.

4.1.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.6 is available on OpenShift Container Platform 4.9.

4.1.7.1. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.6.

  • With this update, you can configure a pipeline or task start command to return a YAML or JSON-formatted string by using the --output <string> option, where <string> is yaml or json. Otherwise, without the --output option, the start command returns a human-friendly message that is hard for other programs to parse. Returning a YAML or JSON-formatted string is useful for continuous integration (CI) environments. For example, after a resource is created, you can use yq or jq to parse the YAML or JSON-formatted message about the resource and wait until that resource is terminated without using the showlog option.
  • With this update, you can authenticate to a registry using the auth.json authentication file of Podman. For example, you can use tkn bundle push to push to a remote registry using Podman instead of Docker CLI.
  • With this update, if you use the tkn [taskrun | pipelinerun] delete --all command, you can preserve runs that are younger than a specified number of minutes by using the new --keep-since <minutes> option. For example, to keep runs that are less than five minutes old, you enter tkn [taskrun | pipelinerun] delete --all --keep-since 5.
  • With this update, when you delete task runs or pipeline runs, you can use the --parent-resource and --keep-since options together. For example, the tkn pipelinerun delete --pipeline pipelinename --keep-since 5 command preserves pipeline runs whose parent resource is named pipelinename and whose age is five minutes or less. The tkn tr delete -t <taskname> --keep-since 5 and tkn tr delete --clustertask <taskname> --keep-since 5 commands work similarly for task runs.
  • This update adds support for the triggers resources to work with v1beta1 resources.
  • This update adds an ignore-running option to the tkn pipelinerun delete and tkn taskrun delete commands.
  • This update adds a create subcommand to the tkn task and tkn clustertask commands.
  • With this update, when you use the tkn pipelinerun delete --all command, you can use the new --label <string> option to filter the pipeline runs by label. Optionally, you can use the --label option with = and == as equality operators, or != as an inequality operator. For example, the tkn pipelinerun delete --all --label asdf and tkn pipelinerun delete --all --label==asdf commands both delete all the pipeline runs that have the asdf label.
  • With this update, you can fetch the version of installed Tekton components from the config map or, if the config map is not present, from the deployment controller.
  • With this update, triggers support the feature-flags and config-defaults config map to configure feature flags and to set default values respectively.
  • This update adds a new metric, eventlistener_event_count, that you can use to count events received by the EventListener resource.
  • This update adds v1beta1 Go API types. With this update, triggers now support the v1beta1 API version.

    With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. Begin using the v1beta1 features instead.

  • In the current release, auto-pruning of resources is enabled by default. In addition, you can configure auto-pruning of task runs and pipeline runs for each namespace separately, by using the following new annotations:

    • operator.tekton.dev/prune.schedule: If the value of this annotation is different from the value specified at the TektonConfig custom resource definition, a new cron job in that namespace is created.
    • operator.tekton.dev/prune.skip: When set to true, the namespace for which it is configured will not be pruned.
    • operator.tekton.dev/prune.resources: This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to "pipelinerun". To prune multiple resources, such as task run and pipeline run, set this annotation to "taskrun, pipelinerun".
    • operator.tekton.dev/prune.keep: Use this annotation to retain a resource without pruning.
    • operator.tekton.dev/prune.keep-since: Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources that were created not more than five days ago, set keep-since to 7200.

      Note

      The keep and keep-since annotations are mutually exclusive. For any resource, you must configure only one of them.

    • operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since.
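
    For example, annotating a namespace as follows configures pruning of task runs and pipeline runs in that namespace. This is a minimal sketch; the namespace name is a placeholder:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-namespace                     # placeholder namespace name
      annotations:
        operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
        operator.tekton.dev/prune.strategy: "keep-since"
        operator.tekton.dev/prune.keep-since: "7200"   # retain resources newer than five days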
  • Administrators can disable the creation of the pipeline service account for the entire cluster, and prevent privilege escalation by misusing the associated SCC, which is very similar to anyuid.
  • You can now configure feature flags and components by using the TektonConfig custom resource (CR) and the CRs for individual components, such as TektonPipeline and TektonTriggers. This level of granularity helps customize and test alpha features such as the Tekton OCI bundle for individual components.
  • You can now configure the optional Timeouts field for the PipelineRun resource. For example, you can configure timeouts separately for a pipeline run, each task run, and the finally tasks.
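
    For example, the following sketch sets a timeout for the entire pipeline run and separate timeouts for the tasks and the finally tasks; the resource names are placeholders:

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: build-and-test-run               # placeholder name
    spec:
      pipelineRef:
        name: build-and-test                 # placeholder pipeline name
      timeouts:
        pipeline: "1h0m0s"    # overall limit for the pipeline run
        tasks: "0h50m0s"      # limit for the non-finally tasks
        finally: "0h10m0s"    # limit for the finally tasks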
  • Pods generated by the TaskRun resource now set the activeDeadlineSeconds field of the pods. This enables OpenShift Container Platform to consider them as terminating, and allows you to use a specifically scoped ResourceQuota object for the pods.
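
    For example, because the pods count as terminating, a ResourceQuota object scoped to Terminating pods applies to them. The following is a minimal sketch with placeholder values:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: taskrun-pod-quota                # placeholder name
      namespace: my-namespace                # placeholder namespace
    spec:
      hard:
        pods: "10"                           # limit concurrently running task pods
      scopes:
        - Terminating                        # applies only to pods with activeDeadlineSeconds set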
  • You can use config maps to eliminate metrics tags or labels on a task run, pipeline run, task, and pipeline. In addition, you can configure different types of metrics for measuring duration, such as a histogram, gauge, or last value.
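
    For example, duration metrics can be switched to a histogram or last value through the pipeline observability config map. The following is a sketch; the config map and key names follow the upstream Tekton config-observability settings and are an assumption for this release:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-observability
      namespace: openshift-pipelines                 # assumed namespace for the installed components
    data:
      metrics.taskrun.level: "task"                  # aggregate task run metrics at the task level
      metrics.taskrun.duration-type: "histogram"
      metrics.pipelinerun.level: "pipeline"          # aggregate pipeline run metrics at the pipeline level
      metrics.pipelinerun.duration-type: "lastvalue"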
  • You can define requests and limits on a pod coherently, as Tekton now fully supports the LimitRange object by considering the Min, Max, Default, and DefaultRequest fields.
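
    For example, a LimitRange object such as the following sketch, with placeholder values, is now taken into account when computing the requests and limits for step containers:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: step-container-limits            # placeholder name
      namespace: my-namespace                # placeholder namespace
    spec:
      limits:
        - type: Container
          min:
            cpu: 50m
            memory: 64Mi
          max:
            cpu: "1"
            memory: 1Gi
          default:
            cpu: 200m
            memory: 256Mi
          defaultRequest:
            cpu: 100m
            memory: 128Mi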
  • The following alpha features are introduced:

    • A pipeline run can now stop after running the finally tasks, rather than the previous behavior of directly stopping the execution of all task runs. This update adds the following spec.status values:

      • StoppedRunFinally will stop the currently running tasks after they are completed, and then run the finally tasks.
      • CancelledRunFinally will immediately cancel the running tasks, and then run the finally tasks.
      • Cancelled will retain the previous behavior provided by the PipelineRunCancelled status.

        Note

        The Cancelled status replaces the deprecated PipelineRunCancelled status, which will be removed in the v1 version.
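
      For example, to gracefully stop an in-progress pipeline run after its running tasks finish, you can patch its specification with one of these values. This is a sketch; the run name is a placeholder:

      apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: build-and-test-run             # placeholder name of an in-progress run
      spec:
        status: "StoppedRunFinally"          # let running tasks complete, then run the finally tasks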

    • You can now use the oc debug command to put a task run into debug mode, which pauses the execution and allows you to inspect specific steps in a pod.
    • When you set the onError field of a step to continue, the exit code for the step is recorded and passed on to subsequent steps. However, the task run does not fail and the execution of the rest of the steps in the task continues. To retain the existing behavior, you can set the value of the onError field to stopAndFail.
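
      For example, in the following sketch the first step exits with a nonzero code, but the task run continues to the next step; the step names and image are placeholders:

      steps:
        - name: may-fail                     # placeholder step name
          image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
          onError: continue                  # record the exit code and keep going
          script: |
            exit 1
        - name: runs-anyway                  # placeholder step name
          image: registry.access.redhat.com/ubi8/ubi-minimal
          script: |
            echo "previous step failed but the task run continues"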
    • Tasks can now accept more parameters than are actually used. When the alpha feature flag is enabled, the parameters can implicitly propagate to inlined specs. For example, an inlined task can access parameters of its parent pipeline run, without explicitly defining each parameter for the task.
    • If you enable the flag for the alpha features, the conditions under When expressions will only apply to the task with which it is directly associated, and not the dependents of the task. To apply the When expressions to the associated task and its dependents, you must associate the expression with each dependent task separately. Note that, going forward, this will be the default behavior of the When expressions in any new API versions of Tekton. The existing default behavior will be deprecated in favor of this update.
  • The current release enables you to configure node selection by specifying the nodeSelector and tolerations values in the TektonConfig custom resource (CR). The Operator adds these values to all the deployments that it creates.

    • To configure node selection for the Operator’s controller and webhook deployment, you edit the config.nodeSelector and config.tolerations fields in the specification for the Subscription CR, after installing the Operator.
    • To deploy the rest of the control plane pods of OpenShift Pipelines on an infrastructure node, update the TektonConfig CR with the nodeSelector and tolerations fields. The modifications are then applied to all the pods created by the Operator.
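
    For example, the following TektonConfig sketch places the OpenShift Pipelines control plane pods on infrastructure nodes. It assumes the fields are nested under spec.config, and the selector and toleration values are placeholders for your cluster:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      config:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
          - key: node-role.kubernetes.io/infra
            operator: Exists
            effect: NoSchedule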
4.1.7.2. Deprecated features
  • In CLI 0.21.0, support for all v1alpha1 resources for the clustertask, task, taskrun, pipeline, and pipelinerun commands is deprecated. These resources will be removed in a future release.
  • In Tekton Triggers v0.16.0, the redundant status label is removed from the metrics for the EventListener resource.

    Important

    Breaking change: The status label has been removed from the eventlistener_http_duration_seconds_* metric. Remove queries that are based on the status label.

  • With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. With this update, you can begin using the v1beta1 Go API types instead. Triggers now supports the v1beta1 API version.
  • With the current release, the EventListener resource sends a response before the triggers finish processing.

    Important

    Breaking change: With this change, the EventListener resource stops responding with a 201 Created status code when it creates resources. Instead, it responds with a 202 Accepted response code.

  • The current release removes the podTemplate field from the EventListener resource.

    Important

    Breaking change: The podTemplate field, which was deprecated as part of #1100, has been removed.

  • The current release removes the deprecated replicas field from the specification for the EventListener resource.

    Important

    Breaking change: The deprecated replicas field has been removed.

  • In Red Hat OpenShift Pipelines 1.6, the values of HOME="/tekton/home" and workingDir="/workspace" are removed from the specification of the Step objects.

    Instead, Red Hat OpenShift Pipelines sets HOME and workingDir to the values defined by the containers running the Step objects. You can override these values in the specification of your Step objects.

    To use the older behavior, you can change the disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR to false:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        disable-working-directory-overwrite: false
        disable-home-env-overwrite: false
    ...
    Important

    The disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR are now deprecated and will be removed in a future release.

4.1.7.3. Known issues
  • When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11.
  • On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported.
  • Before you install tasks based on the Tekton Catalog on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub, verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name>
  • You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors:

    STEP 7: RUN /usr/libexec/s2i/assemble
    /bin/sh: /usr/libexec/s2i/assemble: No such file or directory
    subprocess exited with status 127
    subprocess exited with status 127
    error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
    time="2021-11-04T13:05:26Z" level=error msg="exit status 127"
4.1.7.4. Fixed issues
  • The tkn hub command is now supported on IBM Power Systems, IBM Z, and LinuxONE.
  • Before this update, the terminal was not available after the user ran a tkn command, even when the pipeline run was complete and retries were specified. Specifying a timeout in the task run or pipeline run had no effect. This update fixes the issue so that the terminal is available after running the command.
  • Before this update, running tkn pipelinerun delete --all would delete all resources. This update prevents the resources in the running state from getting deleted.
  • Before this update, using the tkn version --component=<component> command did not return the component version. This update fixes the issue so that this command returns the component version.
  • Before this update, when you used the tkn pr logs command, it displayed the pipelines output logs in the wrong task order. This update resolves the issue so that logs of completed PipelineRuns are listed in the appropriate TaskRun execution order.
  • Before this update, editing the specification of a running pipeline might prevent the pipeline run from stopping when it was complete. This update fixes the issue by fetching the definition only once and then using the specification stored in the status for verification. This change reduces the probability of a race condition when a PipelineRun or a TaskRun refers to a Pipeline or Task that changes while it is running.
  • When expression values can now have array parameter references, such as: values: [$(params.arrayParam[*])].