Chapter 8. Builds
8.1. How Builds Work
8.1.1. What Is a Build?
A build in OpenShift Container Platform is the process of transforming input parameters into a resulting object. Most often, builds are used to transform source code into a runnable container image.
A build configuration, or BuildConfig, is characterized by a build strategy and one or more sources. The strategy determines the aforementioned process, while the sources provide its input.
The build strategies are:
- Source-to-Image (S2I) (description, options)
- Pipeline (description, options)
- Docker (description, options)
- Custom (description, options)
There are six types of sources that can be given as build input:
- Git
- Dockerfile
- Binary
- Image
- Input secrets
- External artifacts
It is up to each build strategy to consider or ignore a certain type of source, as well as to determine how it is to be used. Binary and Git are mutually exclusive source types. Dockerfile and Image can be used by themselves, with each other, or together with either Git or Binary. The Binary source type is unique among these options in how it is specified to the system.
8.1.2. What Is a BuildConfig?
A build configuration describes a single build definition and a set of triggers for when a new build should be created. Build configurations are defined by a BuildConfig, which is a REST object that can be used in a POST to the API server to create a new instance.

Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually tweak your configuration later.

The following example BuildConfig results in a new build every time a container image tag or the source code changes:
BuildConfig Object Definition
kind: "BuildConfig" apiVersion: "v1" metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test"
1. This specification will create a new BuildConfig named ruby-sample-build.
2. The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial, which means new builds will run sequentially, not simultaneously.
3. You can specify a list of triggers, which cause a new build to be created.
4. The source section defines the source of the build. The source type determines the primary source of input, and can be either Git, to point to a code repository location, Dockerfile, to build from an inline Dockerfile, or Binary, to accept binary payloads. It is possible to have multiple sources at once; refer to the documentation for each source type for details.
5. The strategy section describes the build strategy used to execute the build. You can specify a Source, Docker, or Custom strategy here. The above example uses the ruby-20-centos7 container image that Source-to-Image will use for the application build.
6. After the container image is successfully built, it will be pushed into the repository described in the output section.
7. The postCommit section defines an optional build hook.
8.2. Basic Build Operations
8.2.1. Starting a Build
Manually start a new build from an existing build configuration in your current project using the following command:
$ oc start-build <buildconfig_name>
Re-run a build using the --from-build flag:
$ oc start-build --from-build=<build_name>
Specify the --follow flag to stream the build’s logs to stdout:
$ oc start-build <buildconfig_name> --follow
Specify the --env flag to set any desired environment variable for the build:
$ oc start-build <buildconfig_name> --env=<key>=<value>
Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of prebuilt binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command:

Option | Description |
---|---|
--from-dir=<directory> | Specifies a directory that will be archived and used as a binary input for the build. |
--from-file=<file> | Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. |
--from-repo=<local_source_repo> | Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. |
When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings.
Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration.
For example, the following command sends the contents of a local Git repository as an archive from the tag v2 and starts a build:
$ oc start-build hello-world --from-repo=../hello-world --commit=v2
8.2.2. Canceling a Build
Manually cancel a build using the web console, or with the following CLI command:
$ oc cancel-build <build_name>
Cancel multiple builds at the same time:
$ oc cancel-build <build1_name> <build2_name> <build3_name>
Cancel all builds created from the build configuration:
$ oc cancel-build bc/<buildconfig_name>
Cancel all builds in a given state (for example, new or pending), ignoring the builds in other states:
$ oc cancel-build bc/<buildconfig_name> --state=<state>
8.2.3. Deleting a BuildConfig
Delete a BuildConfig using the following command:
$ oc delete bc <BuildConfigName>
This will also delete all builds that were instantiated from this BuildConfig. Specify the --cascade=false flag if you do not want to delete the builds:
$ oc delete --cascade=false bc <BuildConfigName>
8.2.4. Viewing Build Details
You can view build details with the web console or by using the oc describe CLI command:
$ oc describe build <build_name>
This displays information such as:
- The build source
- The build strategy
- The output destination
- Digest of the image in the destination registry
- How the build was created
If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message.
8.2.5. Accessing Build Logs
You can access build logs using the web console or the CLI.
To stream the logs using the build directly:
$ oc logs -f build/<build_name>
To stream the logs of the latest build for a build configuration:
$ oc logs -f bc/<buildconfig_name>
To return the logs of a specific version of a build for a build configuration:
$ oc logs --version=<number> bc/<buildconfig_name>
Log Verbosity
To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig:
sourceStrategy:
...
env:
- name: "BUILD_LOGLEVEL"
value: "2" 1
1. Adjust this value to the desired log level.
A platform administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL for the BuildDefaults admission controller. This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig. You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build.
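For example, a minimal sketch of raising the log level for a single run from the command line, assuming a build configuration named sample-build:

$ oc start-build sample-build --build-loglevel=5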
Available log levels for Source builds are as follows:
Level 0 | Produces output from containers running the assemble script and all encountered errors. This is the default. |
Level 1 | Produces basic information about the executed process. |
Level 2 | Produces very detailed information about the executed process. |
Level 3 | Produces very detailed information about the executed process, and a listing of the archive contents. |
Level 4 | Currently produces the same information as level 3. |
Level 5 | Produces everything mentioned on previous levels and additionally provides docker push messages. |
8.3. Build Inputs
8.3.1. How Build Inputs Work
A build input provides source content for builds to operate on. There are several ways to provide source in OpenShift Container Platform. In order of precedence:
Different inputs can be combined into a single build. As the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs.
Input secrets are useful when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or when you want to consume a value that is defined in a Secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types.
Whenever a build is run:
- A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path.
- The build process changes directories into the contextDir, if one is defined.
- The inline Dockerfile, if any, is written to the current directory.
- The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir will be ignored by the build.
The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type.
source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git 1
  images:
    - from:
        kind: ImageStreamTag
        name: myinputimage:latest
        namespace: mynamespace
      paths:
        - destinationDir: app/dir/injected/dir 2
          sourcePath: /usr/lib/somefile.jar
  contextDir: "app/dir" 3
  dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4
1. The repository to be cloned into the working directory for the build.
2. /usr/lib/somefile.jar from myinputimage will be stored in <workingdir>/app/dir/injected/dir.
3. The working directory for the build will become <original_workingdir>/app/dir.
4. A Dockerfile with this content will be created in <original_workingdir>/app/dir, overwriting any existing file with that name.
8.3.2. Dockerfile Source
When a dockerfile value is supplied, the content of this field will be written to disk as a file named Dockerfile. This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it will be overwritten with this content.

The typical use for this field is to provide a Dockerfile to a Docker strategy build.

The source definition is part of the spec section in the BuildConfig:
source:
dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1
1. The dockerfile field contains an inline Dockerfile that will be built.
8.3.3. Image Source
Additional files can be provided to the build process via images. Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy from the image and the destination to place them in the build context.

The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image will be loaded and the indicated files and directories will be copied into the context directory of the build process. This is the same directory into which the source repository content (if any) is cloned. If the source path ends in /., then the content of the directory will be copied, but the directory itself will not be created at the destination.
Image inputs are specified in the source definition of the BuildConfig:
source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git
  images: 1
    - from: 2
        kind: ImageStreamTag
        name: myinputimage:latest
        namespace: mynamespace
      paths: 3
        - destinationDir: injected/dir 4
          sourcePath: /usr/lib/somefile.jar 5
    - from:
        kind: ImageStreamTag
        name: myotherinputimage:latest
        namespace: myothernamespace
      pullSecret: mysecret 6
      paths:
        - destinationDir: injected/dir
          sourcePath: /usr/lib/somefile.jar
1. An array of one or more input images and files.
2. A reference to the image containing the files to be copied.
3. An array of source/destination paths.
4. The directory relative to the build root where the build process can access the file.
5. The location of the file to be copied out of the referenced image.
6. An optional secret provided if credentials are needed to access the input image.
This feature is not supported for builds using the Custom Strategy.
8.3.4. Git Source
When specified, source code will be fetched from the location supplied.
If an inline Dockerfile is supplied, it will overwrite the Dockerfile (if any) in the contextDir of the Git repository.

The source definition is part of the spec section in the BuildConfig:
source:
  git: 1
    uri: "https://github.com/openshift/ruby-hello-world"
    ref: "master"
  contextDir: "app/dir" 2
  dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3
1. The git field contains the URI to the remote Git repository of the source code. Optionally, specify the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name.
2. The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field.
3. If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository.
If the ref field denotes a pull request, the system will use a git fetch operation and then checkout FETCH_HEAD.

When no ref value is provided, OpenShift Container Platform performs a shallow clone (--depth=1). In this case, only the files associated with the most recent commit on the default branch (typically master) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example, master).
8.3.4.1. Using a Proxy
If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the BuildConfig. You can configure both an HTTP and an HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified via the noProxy field.
Your source URI must use the HTTP or HTTPS protocol for this to work.
source:
  git:
    uri: "https://github.com/openshift/ruby-hello-world"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
    noProxy: somedomain.com, otherdomain.com
Cluster administrators can also configure a global proxy for Git cloning using Ansible.
For Pipeline strategy builds, given the current restrictions with the Git plug-in for Jenkins, any Git operations through the Git plug-in will not leverage the HTTP or HTTPS proxy defined in the BuildConfig. The Git plug-in will only use the proxy configured in the Jenkins UI in the Plugin Manager panel. This proxy will then be used for all Git interactions within Jenkins, across all jobs. You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy.
8.3.4.2. Source Clone Secrets
Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have, such as to private repositories or repositories with self-signed or untrusted SSL certificates.
The following source clone secret configurations are supported.
You can also use combinations of these configurations to meet your specific needs.
Builds are run with the builder service account, which must have access to any source clone secrets used. Access is granted with the following command:
$ oc secrets link builder mysecret
Limiting secrets to only the service accounts that reference them is disabled by default. This means that if serviceAccountConfig.limitSecretReferences is set to false (the default setting) in the master configuration file, linking secrets to a service account is not required.
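For reference, a minimal sketch of the corresponding stanza in the master configuration file, assuming the default setting is kept:

serviceAccountConfig:
  limitSecretReferences: false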
8.3.4.2.1. Automatically Adding a Source Clone Secret to a Build Configuration
When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting Builds to automatically use the credentials stored in the referenced Secret to authenticate to a remote Git repository, without requiring further configuration.

To use this functionality, a Secret containing the Git repository credentials must exist in the namespace in which the BuildConfig will later be created. This Secret must additionally include one or more annotations prefixed with build.openshift.io/source-secret-match-uri-. The value of each of these annotations is a URI pattern, defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a Secret annotation, OpenShift Container Platform will automatically insert a reference to that Secret in the BuildConfig.
A URI pattern must consist of:
- a valid scheme (*://, git://, http://, https://, or ssh://).
- a host (* or a valid hostname or IP address optionally preceded by *.).
- a path (/* or / followed by any characters, optionally including * characters).
In all of the above, a * character is interpreted as a wildcard.
URI patterns only match Git source URIs which are conformant to RFC3986. For example, https://github.com/openshift/origin.git. They do not match the alternate SSH style that Git also uses. For example, git@github.com:openshift/origin.git.
It is not valid to attempt to express a URI pattern in the alternate style, or to include a username/password component in a URI pattern.
If multiple Secrets match the Git URI of a particular BuildConfig, OpenShift Container Platform will select the secret with the longest match. This allows for basic overriding, as in the following example.

The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com:
kind: Secret
apiVersion: v1
metadata:
  name: matches-all-corporate-servers-https-only
  annotations:
    build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/*
data:
  ...

kind: Secret
apiVersion: v1
metadata:
  name: override-for-my-dev-servers-https-only
  annotations:
    build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/*
    build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/*
data:
  ...
Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using:

$ oc annotate secret mysecret \
    'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'
8.3.4.2.2. Manually Adding Source Clone Secrets
Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created (basicsecret, in this example).

apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-image:latest"
  source:
    git:
      uri: "https://github.com/user/app.git"
    sourceSecret:
      name: "basicsecret"
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "python-33-centos7:latest"
You can also use the oc set build-secret command to set the source clone secret on an existing build configuration:
$ oc set build-secret --source bc/sample-build basicsecret
Defining Secrets in the BuildConfig provides more information on this topic.
8.3.4.2.3. .gitconfig File
If the cloning of your application is dependent on a .gitconfig file, you can create a secret that contains it, then add it to the builder service account, and then to your BuildConfig.
To create a secret from a .gitconfig file:
$ oc secrets new mysecret .gitconfig=path/to/.gitconfig
SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file:

[http]
        sslVerify=false
8.3.4.2.4. .gitconfig File for Secured Git
If your Git server is secured with two-way SSL and a user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file:
- Add the client.crt, cacert.crt, and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code.
- In the .gitconfig file for the server, add the [http] section shown in the following example:

  # cat .gitconfig
  [user]
          name = <name>
          email = <email>
  [http]
          sslVerify = false
          sslCert = /var/run/secrets/openshift.io/source/client.crt
          sslKey = /var/run/secrets/openshift.io/source/client.key
          sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt

- Create the secret:

  $ oc secrets new <secret_name> \
      --from-literal=username=<user_name> \ 1
      --from-literal=password=<password> \ 2
      --from-file=.gitconfig=.gitconfig \
      --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \
      --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \
      --from-file=client.key=/var/run/secrets/openshift.io/source/client.key

1. The user's Git user name.
2. The password for this user.
To avoid having to enter your password again, be sure to specify the S2I image in your builds. However, if you cannot clone the repository, you still need to specify your user name and password to promote the build.
8.3.4.2.5. Basic Authentication
Basic authentication requires either a combination of --username and --password, or a token to authenticate against the SCM server.

Create the secret first before using the user name and password to access the private repository:
$ oc secrets new-basicauth <secret_name> \
    --username=<user_name> \
    --password=<password>
To create a basic authentication secret with a token:
$ oc secrets new-basicauth <secret_name> \
    --password=<token>
8.3.4.2.6. SSH Key Authentication
SSH key based authentication requires a private SSH key.
The repository keys are usually located in the $HOME/.ssh/ directory, and are named id_dsa.pub, id_ecdsa.pub, id_ed25519.pub, or id_rsa.pub by default. Generate SSH key credentials with the following command:
$ ssh-keygen -t rsa -C "your_email@example.com"
Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank.
Two files are created: the public key and a corresponding private key (one of id_dsa, id_ecdsa, id_ed25519, or id_rsa). With both of these in place, consult your source control management (SCM) system’s manual on how to upload the public key. The private key is used to access your private repository.
Before using the SSH key to access the private repository, create the secret first:
$ oc secrets new-sshauth sshsecret \
    --ssh-privatekey=$HOME/.ssh/id_rsa
8.3.4.2.7. Trusted Certificate Authorities
The set of TLS certificate authorities that are trusted during a git clone operation is built into the OpenShift Container Platform infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification.

If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the git clone operation. Using this method is significantly more secure than disabling Git’s SSL verification, which accepts any TLS certificate that is presented.
Complete one of the following processes:
Create a secret with a CA certificate file (recommended).
If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Run the following command:
$ cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt
Create the secret:
$ oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1
1. You must use the key name ca.crt.
Disable Git TLS verification.
Set the GIT_SSL_NO_VERIFY environment variable to true in the appropriate strategy section of your build configuration. You can use the oc set env command to manage BuildConfig environment variables.
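For example, a minimal sketch of setting this variable with oc set env, assuming a build configuration named sample-build:

$ oc set env bc/sample-build GIT_SSL_NO_VERIFY=true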
8.3.4.2.8. Combinations
Below are several examples of how you can combine the above methods for creating source clone secrets for your specific needs.
To create an SSH-based authentication secret with a .gitconfig file:
$ oc secrets new-sshauth sshsecret \
    --ssh-privatekey=$HOME/.ssh/id_rsa \
    --gitconfig=</path/to/file>
To create a secret that combines a .gitconfig file and CA certificate:
$ oc secrets new mysecret \
    ca.crt=path/to/certificate \
    .gitconfig=path/to/.gitconfig
To create a basic authentication secret with a CA certificate file:
$ oc secrets new-basicauth <secret_name> \
    --username=<user_name> \
    --password=<password> \
    --ca-cert=</path/to/file>
To create a basic authentication secret with a .gitconfig file:
$ oc secrets new-basicauth <secret_name> \
    --username=<user_name> \
    --password=<password> \
    --gitconfig=</path/to/file>
To create a basic authentication secret with a .gitconfig file and CA certificate file:
$ oc secrets new-basicauth <secret_name> \
    --username=<user_name> \
    --password=<password> \
    --gitconfig=</path/to/file> \
    --ca-cert=</path/to/file>
8.3.5. Binary (Local) Source
Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for such builds.

This source type is unique in that it is leveraged solely based on your use of oc start-build.
Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build (e.g. via an image change trigger) is not possible, because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console.
To utilize binary builds, invoke oc start-build with one of these options:
- --from-file: The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context.
- --from-dir and --from-repo: The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir, you can also specify a URL to an archive, which will be extracted.
- --from-archive: The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir; an archive is created on your host first, whenever the argument to these options is a directory.
In each of the above cases:
- If your BuildConfig already has a Binary source type defined, it will effectively be ignored and replaced by what the client sends.
- If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence.
Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive. When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or by the last component of the URL path if the header is not present. No form of authentication is supported, and it is not possible to use a custom TLS certificate or disable certificate validation.
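For example, a minimal sketch of starting a binary build from a remote file; the hello-world build configuration name and the URL are assumptions:

$ oc start-build hello-world --from-file=https://example.com/artifacts/app.jar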
When using oc new-build --binary=true, the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig will have a source type of Binary, meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data.
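As an illustration, a possible sketch of creating such a binary build configuration and then feeding it a local directory; the nodejs image stream and the myapp name are assumptions:

$ oc new-build --binary=true --strategy=source --image-stream=nodejs --name=myapp
$ oc start-build myapp --from-dir=. --follow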
The dockerfile and contextDir source options have special meaning with binary builds.

dockerfile can be used with any binary build source. If dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile for any Dockerfile in the archive. If dockerfile is used with the --from-file argument, and the file argument is named dockerfile, the value from dockerfile replaces the value from the binary stream.
In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build.
8.3.6. Input Secrets
In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose.
For example, when building a Node.js application, you can set up your private mirror for Node.js modules. In order to download modules from that private mirror, you have to supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image.
This example describes Node.js, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more.
8.3.6.1. Adding Input Secrets
To add an input secret to an existing BuildConfig:
Create the secret, if it does not exist:
$ oc secrets new secret-npmrc .npmrc=~/.npmrc
This creates a new secret named secret-npmrc, which contains the base64 encoded content of the ~/.npmrc file.
Add the secret to the source section in the existing BuildConfig:

source:
  git:
    uri: https://github.com/sclorg/nodejs-ex.git
  secrets:
    - secret:
        name: secret-npmrc
To include the secret in a new BuildConfig, run the following command:

$ oc new-build \
    openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \
    --build-secret secret-npmrc
During the build, the .npmrc file is copied into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile. If you want to specify another directory, add a destinationDir to the secret definition:

source:
  git:
    uri: https://github.com/sclorg/nodejs-ex.git
  secrets:
    - secret:
        name: secret-npmrc
      destinationDir: /etc
You can also specify the destination directory when creating a new BuildConfig:

$ oc new-build \
    openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \
    --build-secret "secret-npmrc:/etc"
In both cases, the .npmrc file is added to the /etc directory of the build environment. Note that for a Docker strategy the destination directory must be a relative path.
8.3.6.2. Source-to-Image Strategy
When using a Source strategy, all defined input secrets are copied to their respective destinationDir. If you left destinationDir empty, then the secrets are placed in the working directory of the builder image.
The same rule is used when a destinationDir is a relative path; the secrets are placed in the paths that are relative to the image’s working directory. The destinationDir must exist or an error will occur. No directory paths are created during the copy process.
Currently, any files with these secrets are world-writable (have 0666 permissions) and will be truncated to size zero after executing the assemble script. This means that the secret files will exist in the resulting image, but they will be empty for security reasons.
8.3.6.3. Docker Strategy
When using a Docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile.

If you do not specify the destinationDir for a secret, then the files will be copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir, then the secrets will be copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build.
Example 8.1. Example of a Dockerfile referencing secret data
FROM centos/ruby-22-centos7

USER root
ADD ./secret-dir /secrets
COPY ./secret2 /

# Create a shell script that will output secrets when the image is run
RUN echo '#!/bin/sh' > /secret_report.sh
RUN echo '(test -f /secrets/secret1 && echo -n "secret1=" && cat /secrets/secret1)' >> /secret_report.sh
RUN echo '(test -f /secret2 && echo -n "relative-secret2=" && cat /secret2)' >> /secret_report.sh
RUN chmod 755 /secret_report.sh

CMD ["/bin/sh", "-c", "/secret_report.sh"]
Users should normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets will still exist in the image itself in the layer where they were added. This removal should be part of the Dockerfile itself.
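Building on the example above, a minimal sketch of such a cleanup step appended to the Dockerfile; the paths match the secrets added earlier:

# Remove the input secrets from the final image; they still exist in the
# earlier layers where they were added
RUN rm -rf /secrets /secret2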
8.3.6.4. Custom Strategy
When using a Custom strategy, all the defined input secrets are available inside the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image is responsible for using these secrets appropriately. The Custom strategy also allows secrets to be defined as described in Custom Strategy Options.
There is no technical difference between existing strategy secrets and the input secrets. However, your builder image might distinguish between them and use them differently, based on your build use case.
The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the $BUILD environment variable, which includes the full build object.
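For illustration, a sketch of how a custom builder script might read the Git source URI from that variable; the use of jq is an assumption about the contents of the builder image:

#!/bin/sh
# BUILD holds the serialized Build object as JSON; extract the Git source URI
GIT_URI=$(echo "${BUILD}" | jq -r '.spec.source.git.uri')
echo "Cloning ${GIT_URI}"
git clone "${GIT_URI}" /tmp/src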
8.3.7. Using External Artifacts
It is not recommended to store binary files in a source repository. Therefore, you may find it necessary to define a build which pulls additional files (such as Java .jar dependencies) during the build process. How this is done depends on the build strategy you are using.
For a Source build strategy, you must put appropriate shell commands into the assemble script:

.s2i/bin/assemble File

#!/bin/sh
APP_VERSION=1.0
wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar

.s2i/bin/run File

#!/bin/sh
exec java -jar app.jar
For more information on how to control which assemble and run script is used by a Source build, see Overriding Builder Image Scripts.
For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction:

Excerpt of Dockerfile

FROM jboss/base-jdk:8
ENV APP_VERSION 1.0
RUN wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar
EXPOSE 8080
CMD [ "java", "-jar", "app.jar" ]
In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig, rather than updating the Dockerfile or assemble script.
You can choose between different methods of defining environment variables:
- Using the .s2i/environment file (only for a Source build strategy)
- Setting the variables in the BuildConfig
- Providing the variables explicitly using oc start-build --env (only for builds that are triggered manually)
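As a sketch of the second option, the APP_VERSION value used in the examples above could come from the BuildConfig strategy definition, assuming the assemble script or Dockerfile is adjusted to read it from the environment instead of hard-coding it:

sourceStrategy:
...
  env:
    - name: "APP_VERSION"
      value: "1.1"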
8.3.8. Using Docker Credentials for Private Registries
You can supply builds with a .docker/config.json file with valid credentials for private Docker registries. This allows you to push the output image into a private Docker registry or pull a builder image from the private Docker registry that requires authentication.
For the OpenShift Container Platform Docker registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform.
The .docker/config.json file is found in your home directory by default and has the following format:
auths:
  https://index.docker.io/v1/: 1
    auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2
    email: "user@example.com" 3

1. URL of the registry.
2. Encrypted password.
3. Email address for the login.
You can define multiple Docker registry entries in this file. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist.
Kubernetes provides Secret objects, which can be used to store configuration and passwords.
Create the secret from your local .docker/config.json file:
$ oc secrets new dockerhub ~/.docker/config.json
This generates a JSON specification of the secret named dockerhub and creates the object.

Once the secret is created, add it to the builder service account. Each build is run with the builder role, so you must give it access to your secret with the following command:

$ oc secrets link builder dockerhub
Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the above example is dockerhub:

spec:
  output:
    to:
      kind: "DockerImage"
      name: "private.registry.com/org/private-image:latest"
    pushSecret:
      name: "dockerhub"
You can also use the oc set build-secret command to set the push secret on the build configuration:

$ oc set build-secret --push bc/sample-build dockerhub
Pull the builder container image from a private Docker registry by specifying the pullSecret field, which is part of the build strategy definition:

strategy:
  sourceStrategy:
    from:
      kind: "DockerImage"
      name: "docker.io/user/private_repository"
    pullSecret:
      name: "dockerhub"
You can also use the oc set build-secret command to set the pull secret on the build configuration:

$ oc set build-secret --pull bc/sample-build dockerhub
This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds.
8.4. Build Output
8.4.1. Build Output Overview
Builds that use the Docker or Source strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification.

If the output kind is ImageStreamTag, then the image will be pushed to the integrated OpenShift Container Platform registry and tagged in the specified image stream. If the output is of type DockerImage, then the name of the output reference will be used as a Docker push specification. The specification may contain a registry, or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build.
Output to an ImageStreamTag
spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest"
Output to a Docker Push Specification
spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag"
8.4.2. Output Image Environment Variables
Docker and Source strategy builds set the following environment variables on output images:

Variable | Description |
---|---|
OPENSHIFT_BUILD_NAME | Name of the build |
OPENSHIFT_BUILD_NAMESPACE | Namespace of the build |
OPENSHIFT_BUILD_SOURCE | The source URL of the build |
OPENSHIFT_BUILD_REFERENCE | The Git reference used in the build |
OPENSHIFT_BUILD_COMMIT | Source commit used in the build |
Additionally, any user-defined environment variable, for example those configured via Source or Docker strategy options, will also be part of the output image environment variable list.
8.4.3. Output Image Labels
Docker and Source builds set the following labels on output images:

Label | Description |
---|---|
io.openshift.build.commit.author | Author of the source commit used in the build |
io.openshift.build.commit.date | Date of the source commit used in the build |
io.openshift.build.commit.id | Hash of the source commit used in the build |
io.openshift.build.commit.message | Message of the source commit used in the build |
io.openshift.build.commit.ref | Branch or reference specified in the source |
io.openshift.build.source-location | Source URL for the build |
You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the BuildConfig.
Custom Labels to be Applied to Built Images
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "my-image:latest"
    imageLabels:
      - name: "vendor"
        value: "MyCompany"
      - name: "authoritative-source-url"
        value: "registry.mycompany.com"
8.4.4. Output Image Digest
Built images can be uniquely identified by their digest, which can later be used to pull the image by digest regardless of its current tag.
Docker and Source builds store the digest in Build.status.output.to.imageDigest after the image is pushed to a registry. The digest is computed by the registry. Therefore, it may not always be present, for example when the registry did not return a digest, or when the builder image did not understand its format.
Built Image Digest After a Successful Push to the Registry
status:
  output:
    to:
      imageDigest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
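For example, a minimal sketch of reading the stored digest with a JSONPath query, assuming a build named sample-build-1:

$ oc get build sample-build-1 -o jsonpath='{.status.output.to.imageDigest}'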
8.4.5. Using Docker Credentials for Private Registries
To push an image to a private Docker registry, credentials can be supplied using a secret. See Build Inputs for instructions.
8.5. Build Strategy Options
8.5.1. Source-to-Image Strategy Options
The following options are specific to the S2I build strategy.
8.5.1.1. Force Pull
By default, if the builder image specified in the build configuration is available locally on the node, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest" 1
    forcePull: true 2
1. The builder image being used, where the local version on the node may not be up to date with the version in the registry to which the image stream points.
2. This flag causes the local builder image to be ignored and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.
8.5.1.2. Incremental Builds
S2I can perform incremental builds, which means it reuses artifacts from previously-built images. To create an incremental build, create a BuildConfig with the following modification to the strategy definition:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "incremental-image:latest" 1
    incremental: true 2
1. Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior.
2. This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script.
See the S2I Requirements topic for information on how to create a builder image supporting incremental builds.
8.5.1.3. Extended Builds
This feature is in technology preview. This means the API may change without notice or the feature may be removed entirely. For a supported mechanism to produce application images with runtime-only content, consider using the Image Source feature and defining two builds, one which produces an image containing the runtime artifacts and a second build which consumes the runtime artifacts from that image and adds them to a runtime-only image.
For compiled languages (Go, C, C++, Java, etc.) the dependencies necessary for compilation might increase the size of the image or introduce vulnerabilities that can be exploited.
To avoid these problems, S2I (Source-to-Image) introduces a two-image build process that allows an application to be built via the normal flow in a builder image, but then injects the resulting application artifacts into a runtime-only image for execution.
To offer flexibility in this process, S2I executes an assemble-runtime script inside the runtime image that allows further customization of the resulting runtime image.
More information about this can be found in the official S2I extended builds documents.
This feature is available only for the source strategy.
strategy: type: "Source" sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" runtimeImage: 1 kind: "ImageStreamTag" name: "runtime-image:latest" runtimeArtifacts: 2 - sourcePath: "/path/to/source" destinationDir: "path/to/destination"
1. The runtime image that the artifacts should be copied to. This is the final image that the application will run on. This image should contain the minimum application dependencies to run the injected content from the builder image.
2. The runtime artifacts are a mapping of artifacts produced in the builder image that should be injected into the runtime image. sourcePath can be the full path to a file or directory inside the builder image. destinationDir must be a directory inside the runtime image where the artifacts will be copied. This directory is relative to the specified WORKDIR inside that image.
In the current implementation, you cannot have incremental extended builds; thus, the incremental option is not valid with runtimeImage.
If the runtime image needs authentication to be pulled across OpenShift projects or from another private registry, the details can be specified within the image pull secret configuration.
8.5.1.3.1. Testing your Application
Extended builds offer two ways of running tests against your application.
The first option is to install all test dependencies and run the tests inside your builder image since that image, in the context of extended builds, will not be pushed to a registry. This can be done as a part of the assemble script for the builder image.
The second option is to specify a script via the postcommit hook. This is executed in an ephemeral container based on the runtime image, thus it is not committed to the image.
8.5.1.4. Overriding Builder Image Scripts
You can override the assemble, run, and save-artifacts S2I scripts provided by the builder image in one of two ways. Either:
- Provide an assemble, run, and/or save-artifacts script in the .s2i/bin directory of your application source repository, or
- Provide a URL of a directory containing the scripts as part of the strategy definition. For example:
strategy:
sourceStrategy:
from:
kind: "ImageStreamTag"
name: "builder-image:latest"
scripts: "http://somehost.com/scripts_directory" 1
1. This path will have run, assemble, and save-artifacts appended to it. If any or all scripts are found, they will be used in place of the same named script(s) provided in the image.
Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. See the S2I Requirements topic and the S2I documentation for information on how S2I scripts are used.
8.5.1.5. Environment Variables
There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. Variables provided will be present during the build process and in the output image.
8.5.1.5.1. Environment Files
Source build enables you to set environment values (one per line) inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. The complete list of supported environment variables is available in the documentation for each image.
If you provide a .s2i/environment file in your source repository, S2I reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables.
For example, if you want to disable assets compilation for your Rails application, you can add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file to cause assets compilation to be skipped during the build.
In addition to builds, the specified environment variables are also available in the running application itself. For example, you can add RAILS_ENV=development to the .s2i/environment file to cause the Rails application to start in development mode instead of production.
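Putting the two examples together, a .s2i/environment file for such a Rails application might look like the following sketch:

DISABLE_ASSET_COMPILATION=true
RAILS_ENV=development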
8.5.1.5.2. BuildConfig Environment
You can add environment variables to the sourceStrategy definition of the BuildConfig. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code.
For example, to disable assets compilation for your Rails application:

sourceStrategy:
...
  env:
    - name: "DISABLE_ASSET_COMPILATION"
      value: "true"
You can also manage environment variables defined in the BuildConfig with the oc set env command.
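As a sketch, the same variable could be set with oc set env, assuming a build configuration named ruby-sample-build:

$ oc set env bc/ruby-sample-build DISABLE_ASSET_COMPILATION=true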
8.5.1.6. Adding Secrets via Web Console
To add a secret to your build configuration so that it can access a private repository:
- Create a new OpenShift Container Platform project.
- Create a secret that contains credentials for accessing a private source code repository.
- Create a Source-to-Image (S2I) build configuration.
- On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret.
- Click the Save button.
8.5.1.6.1. Enabling Pulling and Pushing
Enable pulling from a private registry by setting the Pull Secret in the build configuration, and enable pushing by setting the Push Secret.
8.5.1.7. Ignoring Source Files
Source-to-Image supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script.

For more details on the format of the .s2iignore file, see the source-to-image documentation.
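For illustration, a sketch of a .s2iignore file; the patterns themselves are assumptions and simply exclude temporary files and local documentation from the build:

*.tmp
*.log
docs/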
8.5.2. Docker Strategy Options
The following options are specific to the Docker build strategy.
8.5.2.1. FROM Image
The FROM instruction of the Dockerfile will be replaced by the from of the BuildConfig:

strategy:
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"
8.5.2.2. Dockerfile Path
By default, Docker builds use a Dockerfile (named Dockerfile) located at the root of the context specified in the BuildConfig.spec.source.contextDir field.

The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be simply a different file name than the default Dockerfile (for example, MyDockerfile), or a path to a Dockerfile in a subdirectory (for example, dockerfiles/app1/Dockerfile):

strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
8.5.2.3. No Cache
Docker builds normally reuse cached layers found on the host performing the build. Setting the noCache option to true forces the build to ignore cached layers and rerun all steps of the Dockerfile:

strategy:
  dockerStrategy:
    noCache: true
8.5.2.4. Force Pull
By default, if the builder image specified in the build configuration is available locally on the node, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:
strategy:
dockerStrategy:
forcePull: true 1
1. This flag causes the local builder image to be ignored, and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.
8.5.2.5. Environment Variables
To make environment variables available to the Docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the BuildConfig.

The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that they can be referenced later on within the Dockerfile.
The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well.
For example, defining a custom HTTP proxy to be used during build and runtime:
dockerStrategy:
...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"
Cluster administrators can also configure global build settings using Ansible.
You can also manage environment variables defined in the BuildConfig with the oc set env command.
8.5.2.6. Adding Secrets via Web Console
To add a secret to your build configuration so that it can access a private repository:
- Create a new OpenShift Container Platform project.
- Create a secret that contains credentials for accessing a private source code repository.
- Create a docker build configuration.
- On the build configuration editor page or in the fromimage page of the web console, set the Source Secret.
- Click the Save button.
8.5.2.6.1. Enabling Pulling and Pushing
Enable pulling from a private registry by setting the Pull Secret in the build configuration, and enable pushing by setting the Push Secret.
8.5.3. Custom Strategy Options
The following options are specific to the Custom build strategy.
8.5.3.1. FROM Image
Use the customStrategy.from section to indicate the image to use for the custom build:

strategy:
  customStrategy:
    from:
      kind: "DockerImage"
      name: "openshift/sti-image-builder"
8.5.3.2. Exposing the Docker Socket
In order to allow the running of Docker commands and the building of container images from inside the container, the build container must be bound to an accessible socket. To do so, set the exposeDockerSocket option to true:

strategy:
  customStrategy:
    exposeDockerSocket: true
8.5.3.3. Secrets
In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod.
Each secret can be mounted at a specific location:
strategy:
  customStrategy:
    secrets:
      - secretSource: 1
          name: "secret1"
        mountPath: "/tmp/secret1" 2
      - secretSource:
          name: "secret2"
        mountPath: "/tmp/secret2"

1. secretSource is a reference to a secret in the same namespace as the build.
2. mountPath is the path inside the custom builder where the secret will be available.
8.5.3.3.1. Adding Secrets via Web Console
To add a secret to your build configuration so that it can access a private repository:
- Create a new OpenShift Container Platform project.
- Create a secret that contains credentials for accessing a private source code repository.
- Create a custom build configuration.
- On the build configuration editor page or in the fromimage page of the web console, set the Source Secret.
- Click the Save button.
8.5.3.3.2. Enabling Pulling and Pushing
Enable pulling from a private registry by setting the Pull Secret in the build configuration, and enable pushing by setting the Push Secret.
8.5.3.4. Force Pull
By default, when setting up the build pod, the build controller checks if the image specified in the build configuration is available locally on the node. If so, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:
strategy:
customStrategy:
forcePull: true 1
1. This flag causes the local builder image to be ignored, and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.
8.5.3.5. Environment Variables
To make environment variables available to the Custom build process, you can add environment variables to the customStrategy definition of the BuildConfig.
The environment variables defined there are passed to the pod that runs the custom build.
For example, defining a custom HTTP proxy to be used during build:
customStrategy:
...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"
Cluster administrators can also configure global build settings using Ansible.
You can also manage environment variables defined in the BuildConfig with the oc set env command.
8.5.4. Pipeline Strategy Options
The following options are specific to the Pipeline build strategy.
8.5.4.1. Providing the Jenkinsfile
You can provide the Jenkinsfile in one of two ways:
- Embed the Jenkinsfile in the build configuration.
- Include in the build configuration a reference to the Git repository that contains the Jenkinsfile.
Embedded Definition
kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: "node('agent') {\nstage 'build'\nopenshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')\nstage 'deploy'\nopenshiftDeploy(deploymentConfig: 'frontend')\n}"
Reference to Git Repository
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "sample-pipeline"
spec:
source:
git:
uri: "https://github.com/openshift/ruby-hello-world"
strategy:
jenkinsPipelineStrategy:
jenkinsfilePath: some/repo/dir/filename 1
- 1
- The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to Jenkinsfile.
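As a sketch of how contextDir and jenkinsfilePath combine, the following configuration is equivalent to the example above, assuming the Jenkinsfile is stored as some/repo/dir/filename in the repository:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "sample-pipeline"
spec:
  source:
    git:
      uri: "https://github.com/openshift/ruby-hello-world"
    contextDir: "some/repo/dir"
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: filename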
8.6. Triggering Builds
8.6.1. Build Triggers Overview
When defining a BuildConfig
, you can define triggers to control the circumstances in which the BuildConfig
should be run. The following build triggers are available:
- Webhook
- Image change
- Configuration change
8.6.2. Webhook Triggers
Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub webhooks or Generic webhooks.
8.6.2.1. GitHub Webhooks
GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret
, which will be part of the URL you supply to GitHub when configuring the webhook. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following example is a trigger definition YAML within the BuildConfig
:
type: "GitHub" github: secret: "secret101"
The secret field in the webhook trigger configuration is not the same as the secret field you encounter when configuring the webhook in the GitHub UI. The former makes the webhook URL unique and hard to predict; the latter is an optional string field used to create an HMAC hex digest of the body, which is sent as an X-Hub-Signature header.
The payload URL is returned as the GitHub Webhook URL by the describe
command (see Displaying Webhook URLs), and is structured as follows:
http://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github
To configure a GitHub Webhook:
Describe the build configuration to get the webhook URL:
$ oc describe bc <name>
- Copy the webhook URL.
- Follow the GitHub setup instructions to paste the webhook URL into your GitHub repository settings.
Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig
and trigger it via your Gogs server also.
Given a file containing a valid JSON payload, you can manually trigger the webhook via curl
:
$ curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @github_payload_file.json https://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github
The -k
argument is only necessary if your API server does not have a properly signed certificate.
8.6.2.2. Generic Webhooks
Generic webhooks are invoked from any system capable of making a web request. As with a GitHub webhook, you must specify a secret, which will be part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig
:
type: "Generic"
generic:
secret: "secret101"
allowEnv: true 1
- 1
- Set to
true
to allow a generic webhook to pass in environment variables.
To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build:
http://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
The caller must invoke the webhook as a POST
operation.
To invoke the webhook manually you can use curl
:
$ curl -X POST -k https://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
The HTTP verb must be set to POST. The insecure -k flag is specified to ignore certificate validation; it is not necessary if your cluster has properly signed certificates.
The endpoint can accept an optional payload with the following format:
git:
uri: "<url to git repository>"
ref: "<optional git reference>"
commit: "<commit hash identifying a specific git commit>"
author:
name: "<author name>"
email: "<author e-mail>"
committer:
name: "<committer name>"
email: "<committer e-mail>"
message: "<commit message>"
env: 1
- name: "<variable name>"
value: "<variable value>"
- 1
- Similar to the
BuildConfig
environment variables, the environment variables defined here are made available to your build. If these variables collide with theBuildConfig
environment variables, these variables take precedence. By default, environment variables passed via webhook are ignored. Set theallowEnv
field totrue
on the webhook definition to enable this behavior.
To pass this payload using curl
, define it in a file named payload_file.yaml and run:
$ curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
The arguments are the same as the previous example with the addition of a header and a payload. The -H
argument sets the Content-Type
header to application/yaml
or application/json
depending on your payload format. The --data-binary
argument is used to send a binary payload with newlines intact with the POST
request.
OpenShift Container Platform permits builds to be triggered via the generic webhook even if an invalid request payload is presented (for example, invalid content type, unparsable or invalid content, and so on). This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK
response.
8.6.2.3. Displaying Webhook URLs
Use the following command to display any webhook URLs associated with a build configuration:
$ oc describe bc <name>
If the above command does not display any webhook URLs, then no webhook trigger is defined for that build configuration.
8.6.3. Image Change Triggers
Image change triggers allow your build to be automatically invoked when a new version of an upstream image is available. For example, if a build is based on top of a RHEL image, then you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image.
Configuring an image change trigger requires the following actions:
Define an ImageStream that points to the upstream image you want to trigger on:

kind: "ImageStream"
apiVersion: "v1"
metadata:
  name: "ruby-20-centos7"
This defines the image stream that is tied to a container image repository located at <system-registry>/<namespace>/ruby-20-centos7. The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform.

If an image stream is the base image for the build, set the from field in the build strategy to point to the image stream:
strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "ruby-20-centos7:latest"
In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace.

Define a build with one or more triggers that point to image streams:
type: "imageChange" 1 imageChange: {} type: "imageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest"
- 1
- An image change trigger that monitors the
ImageStream
andTag
as defined by the build strategy’sfrom
field. TheimageChange
object here must be empty. - 2
- An image change trigger that monitors an arbitrary image stream. The
imageChange
part in this case must include afrom
field that references theImageStreamTag
to monitor.
When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable Docker tag that points to the latest image corresponding to that tag. This new image reference will be used by the strategy when it executes for the build.
For other image change triggers that do not reference the strategy image stream, a new build will be started, but the build strategy will not be updated with a unique image reference.
In the example above that has an image change trigger for the strategy, the resulting build will be:
strategy:
  sourceStrategy:
    from:
      kind: "DockerImage"
      name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>"
This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs.
In addition to setting the image field for all Strategy
types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE
environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist then it is updated with the immutable image reference.
If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid>
resolved from the ImageStream
referenced by the Strategy
. This ensures that builds are performed using consistent image tags for ease of reproduction.
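For example, a build can be started manually and the resolved image reference inspected on the resulting build object; the build name ruby-sample-build-2 is illustrative:
$ oc start-build ruby-sample-build
$ oc describe build ruby-sample-build-2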
Image streams that point to container images in v1 Docker registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 Docker registries.
8.6.4. Configuration Change Triggers
A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig
is created. The following is an example trigger definition YAML within the BuildConfig
:
type: "ConfigChange"
Configuration change triggers currently only work when creating a new BuildConfig
. In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig
is updated.
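For context, a ConfigChange trigger is typically listed alongside other triggers in the BuildConfig; a minimal sketch:
triggers:
  - type: "ConfigChange"
  - type: "ImageChange"
    imageChange: {}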
8.7. Build Hooks
8.7.1. Build Hooks Overview
Build hooks allow behavior to be injected into the build process.
The postCommit
field of a BuildConfig
object executes commands inside a temporary container that is running the build output image. The hook is executed immediately after the last layer of the image has been committed and before the image is pushed to a registry.
The current working directory is set to the image’s WORKDIR
, which is the default working directory of the container image. For most images, this is where the source code is located.
The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs.
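For example, the logs of a failed build can be retrieved with the following command, where sample-build-1 is an illustrative build name:
$ oc logs build/sample-build-1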
Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0, the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log will contain the output of the test runner, which can be used to identify failed tests.
The postCommit hook is not limited to running tests; it can be used for other commands as well. Because it runs in a temporary container, changes made by the hook do not persist, meaning that the hook execution cannot affect the final image. This behavior allows for, among other uses, the installation and use of test dependencies that are automatically discarded and will not be present in the final image.
8.7.2. Configuring Post Commit Build Hooks
There are different ways to configure the post build hook. All forms in the following examples are equivalent and execute bundle exec rake test --verbose
:
Shell script:
postCommit:
  script: "bundle exec rake test --verbose"
The
script
value is a shell script to be run with/bin/sh -ic
. Use this when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point, or if the image does not have/bin/sh
, usecommand
and/orargs
.NoteThe additional
-i
flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release.Command as the image entry point:
postCommit:
  command: ["/bin/bash", "-c", "bundle exec rake test --verbose"]
In this form,
command
is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference. This is needed if the image does not have/bin/sh
, or if you do not want to use a shell. In all other cases, usingscript
might be more convenient.Pass arguments to the default entry point:
postCommit:
  args: ["bundle", "exec", "rake", "test", "--verbose"]
In this form,
args
is a list of arguments that are provided to the default entry point of the image. The image entry point must be able to handle arguments.Shell script with arguments:
postCommit:
  script: "bundle exec rake test $1"
  args: ["--verbose"]
Use this form if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the
script
,$0
will be "/bin/sh" and$1
,$2
, etc, are the positional arguments fromargs
.Command with arguments:
postCommit:
  command: ["bundle", "exec", "rake", "test"]
  args: ["--verbose"]
This form is equivalent to appending the arguments to
command
.
Providing both script
and command
simultaneously creates an invalid build hook.
8.7.2.1. Using the CLI
The oc set build-hook
command can be used to set the build hook for a build configuration.
To set a command as the post-commit build hook:
$ oc set build-hook bc/mybc \
    --post-commit \
    --command \
    -- bundle exec rake test --verbose
To set a script as the post-commit build hook:
$ oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose"
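The same command can also remove a previously configured hook; this sketch assumes your oc client supports the --remove flag:
$ oc set build-hook bc/mybc --post-commit --remove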
8.8. Build Run Policy
8.8.1. Build Run Policy Overview
The build run policy describes the order in which the builds created from the build configuration should run. This can be done by changing the value of the runPolicy
field in the spec
section of the Build
specification.
It is also possible to change the runPolicy
value for existing build configurations.
- Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete, because the serial build can only run alone.
- Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in the queue, except the currently running build and the most recently created build. The newest build executes next.
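As a sketch, the runPolicy of an existing build configuration can be changed with oc patch; sample-build is an illustrative name:
$ oc patch bc/sample-build -p '{"spec":{"runPolicy":"SerialLatestOnly"}}'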
8.8.2. Serial Run Policy
Setting the runPolicy
field to Serial
will cause all new builds created from the Build
configuration to be run sequentially. That means there will be only one build running at a time and every new build will wait until the previous build completes. Using this policy will result in consistent and predictable build output. This is the default runPolicy
.
Triggering three builds from the sample-build configuration, using the Serial
policy will result in:
NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git           New
sample-build-3   Source    Git           New
When the sample-build-1 build completes, the sample-build-2 build will run:
NAME             TYPE      FROM          STATUS      STARTED          DURATION
sample-build-1   Source    Git@e79d887   Completed   43 seconds ago   34s
sample-build-2   Source    Git@1aa381b   Running     2 seconds ago    2s
sample-build-3   Source    Git           New
8.8.3. SerialLatestOnly Run Policy
Setting the runPolicy
field to SerialLatestOnly
will cause all new builds created from the Build
configuration to be run sequentially, same as using the Serial
run policy. The difference is that when a currently running build completes, the next build that will run is the latest build created. In other words, you do not wait for the queued builds to run, as they are skipped. Skipped builds are marked as Cancelled. This policy can be used for fast, iterative development.
Triggering three builds from the sample-build configuration, using the SerialLatestOnly
policy will result in:
NAME             TYPE      FROM          STATUS      STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running     13 seconds ago   13s
sample-build-2   Source    Git           Cancelled
sample-build-3   Source    Git           New
The sample-build-2 build will be canceled (skipped) and the next build run after sample-build-1 completes will be the sample-build-3 build:
NAME             TYPE      FROM          STATUS      STARTED          DURATION
sample-build-1   Source    Git@e79d887   Completed   43 seconds ago   34s
sample-build-2   Source    Git           Cancelled
sample-build-3   Source    Git@1aa381b   Running     2 seconds ago    2s
8.8.4. Parallel Run Policy
Setting the runPolicy field to Parallel causes all new builds created from the Build configuration to be run in parallel. This can produce unpredictable results, because the build that was created first can complete last, overwriting the pushed container image produced by the build that was created last but completed earlier.
Use the parallel run policy in cases where you do not care about the order in which the builds will complete.
Triggering three builds from the sample-build configuration, using the Parallel
policy will result in three simultaneous builds:
NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git@a76d881   Running   15 seconds ago   3s
sample-build-3   Source    Git@689d111   Running   17 seconds ago   3s
The completion order is not guaranteed:
NAME             TYPE      FROM          STATUS      STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running     13 seconds ago   13s
sample-build-2   Source    Git@a76d881   Running     15 seconds ago   3s
sample-build-3   Source    Git@689d111   Completed   17 seconds ago   5s
8.9. Advanced Build Operations
8.9.1. Setting Build Resources
By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited by specifying resource limits in a project’s default container limits.
You can also limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources, cpu, and memory parameters is optional:
apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: resources: limits: cpu: "100m" 1 memory: "256Mi" 2
However, if a quota has been defined for your project, one of the following two items is required:
A resources section set with an explicit requests:

resources:
  requests: 1
    cpu: "100m"
    memory: "256Mi"
- 1
- The
requests
object contains the list of resources that correspond to the list of resources in the quota.
-
A limit range defined in your project, where the defaults from the
LimitRange
object apply to pods created during the build process.
Otherwise, build pod creation will fail, citing a failure to satisfy quota.
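For example, a build configuration that satisfies a project quota by declaring explicit requests alongside limits might look like the following sketch; the values are illustrative:
apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  resources:
    requests:
      cpu: "100m"
      memory: "256Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"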
8.9.2. Setting Maximum Duration
When defining a BuildConfig
, you can define its maximum duration by setting the completionDeadlineSeconds
field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced.
The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform.
The following example shows the part of a BuildConfig that specifies a completionDeadlineSeconds field of 30 minutes:
spec:
  completionDeadlineSeconds: 1800
8.9.3. Assigning Builds to Specific Nodes
Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector
field of a build configuration. The nodeSelector
value is a set of key/value pairs that are matched to node
labels when scheduling the build pod.
apiVersion: "v1"
kind: "BuildConfig"
metadata:
name: "sample-build"
spec:
nodeSelector: 1
key1: value1
key2: value2
- 1
- Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels.
The nodeSelector
value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key/value pairs for the nodeSelector
and also does not define an explicitly empty map value of nodeSelector:{}
. Override values will replace values in the build configuration on a key by key basis.
See Configuring Global Build Defaults and Overrides for more information.
If the specified NodeSelector cannot be matched to a node with those labels, the build will stay in the Pending state indefinitely.
8.9.4. Chaining Builds
For compiled languages (Go, C, C++, Java, etc.), including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited.
To avoid these problems, two builds can be chained together: one that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact. In the following example, a Source-to-Image build is combined with a Docker build to compile an artifact that is then placed in a separate runtime image.
Although this example chains a Source-to-Image build and a Docker build, the first build can use any strategy that will produce an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image.
The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image
image stream. The path of the output artifact will depend on the assemble script of the Source-to-Image builder used. In this case, it will be output to /wildfly/standalone/deployments/ROOT.war.
apiVersion: v1
kind: BuildConfig
metadata:
  name: artifact-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: artifact-image:latest
  source:
    git:
      uri: https://github.com/openshift/openshift-jee-sample.git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:10.1
        namespace: openshift
The second build uses Image Source with a path to the WAR file inside the output image from the first build. An inline Dockerfile copies that WAR file into a runtime image.
apiVersion: v1
kind: BuildConfig
metadata:
  name: image-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: image-build:latest
  source:
    dockerfile: |-
      FROM jee-runtime:latest
      COPY ROOT.war /deployments/ROOT.war
    images:
    - from: 1
        kind: ImageStreamTag
        name: artifact-image:latest
      paths: 2
      - sourcePath: /wildfly/standalone/deployments/ROOT.war
        destinationDir: "."
  strategy:
    dockerStrategy:
      from: 3
        kind: ImageStreamTag
        name: jee-runtime:latest
  triggers:
  - imageChange: {}
    type: ImageChange
- 1
from
specifies that the Docker build should include the output of the image from theartifact-image
image stream, which was the target of the previous build.- 2
paths
specifies which paths from the target image to include in the current Docker build.- 3
- The runtime image is used as the source image for the Docker build.
The result of this setup is that the output image of the second build does not need to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages.
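Assuming the two BuildConfig definitions above are saved as artifact-build.yaml and image-build.yaml (illustrative file names), the chain can be created and started with:
$ oc create -f artifact-build.yaml
$ oc create -f image-build.yaml
$ oc start-build artifact-build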
8.10. Build Troubleshooting
8.10.1. Requested Access to Resources Denied
- Issue
A build fails with:
requested access to the resource is denied
- Resolution
You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use:
$ oc describe quota