Chapter 5. Using build strategies
The following sections define the primary supported build strategies, and how to use them.
5.1. Docker build
OpenShift Dedicated uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation.
If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation.
5.1.1. Replacing the Dockerfile FROM image
You can replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced.
Procedure
To replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object, add the following settings to the BuildConfig object:

strategy:
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"
5.1.2. Using Dockerfile path
By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field.
The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile, or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile.
Procedure
Set the dockerfilePath field for the build to use a different path to locate your Dockerfile:

strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
5.1.3. Using docker environment variables
To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration.
The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that they can be referenced later within the Dockerfile.
The variables are defined during the build and persist in the output image, so they are also present in any container that runs that image.
For example, defining a custom HTTP proxy to be used during build and runtime:
dockerStrategy:
  ...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"
You can also manage environment variables defined in the build configuration with the oc set env command.
5.1.4. Adding Docker build arguments
You can set Docker build arguments using the buildArgs array. The build arguments are passed to Docker when a build is started.
See Understand how ARG and FROM interact in the Dockerfile reference documentation.
Procedure
To set Docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example:

dockerStrategy:
  ...
  buildArgs:
    - name: "version"
      value: "latest"

Note: Only the name and value fields are supported. Any settings on the valueFrom field are ignored.
5.1.5. Squashing layers with docker builds
Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image.
Procedure
Set the imageOptimizationPolicy to SkipLayers:

strategy:
  dockerStrategy:
    imageOptimizationPolicy: SkipLayers
5.1.6. Using build volumes
You can mount build volumes to give running builds access to information that you do not want to persist in the output container image.
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.
The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.
Prerequisites
- You have added an input secret, config map, or both to a BuildConfig object.
Procedure
In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. Each entry uses the following fields (an illustrative sketch follows this list):

- name - Specifies a unique name.
- destinationPath - Specifies the absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images.
- type - Specifies the type of source: ConfigMap, Secret, or CSI.
- secretName - Specifies the name of the source.
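For illustration, a minimal sketch of one volumes entry follows. The volume name, secret name, and mount path are placeholders, and the nesting of mounts and source under the volume reflects this sketch's assumptions about how the fields above fit together:

dockerStrategy:
  volumes:
    - name: repo-credentials                          # unique name for this build volume
      mounts:
        - destinationPath: "/opt/app-root/src/.ssh"   # absolute mount point read during the build
      source:
        type: Secret                                  # ConfigMap, Secret, or CSI
        secret:
          secretName: my-repo-credentials             # name of the source secret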
5.2. Source-to-image build
Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on.
5.2.1. Performing source-to-image incremental builds
Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images.
Procedure
To create an incremental build, create a build configuration with the following modification to the strategy definition (a hedged sketch follows these notes):

- Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior.
- Set the incremental flag, which controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build still succeeds, but you get a log message stating that the incremental build was not successful because of a missing save-artifacts script.
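A minimal sketch of such a strategy definition; the image stream tag incremental-image:latest is a placeholder for a builder image that supports incremental builds:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "incremental-image:latest"   # placeholder builder image that supports incremental builds
    incremental: true                    # attempt to reuse artifacts saved by the previous build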
5.2.2. Overriding source-to-image builder image scripts
You can override the assemble, run, and save-artifacts source-to-image (S2I) scripts provided by the builder image.
Procedure
To override the assemble, run, and save-artifacts S2I scripts provided by the builder image, complete one of the following actions:

- Provide an assemble, run, or save-artifacts script in the .s2i/bin directory of your application source repository.
- Provide a URL of a directory containing the scripts as part of the strategy definition in the BuildConfig object, as shown in the sketch after this list. The build process appends run, assemble, and save-artifacts to this path. If any or all scripts with these names exist, the build process uses these scripts in place of scripts with the same name that are provided in the image.

Note: Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository.
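A minimal sketch of a strategy definition that points at such a scripts directory; the builder image name and URL are placeholders:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"                      # placeholder builder image
    scripts: "http://somehost.com/scripts_directory"    # run, assemble, and save-artifacts are appended to this URL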
5.2.3. Source-to-image environment variables
There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables that you provide using either method will be present during the build process and in the output image.
5.2.3.1. Using source-to-image environment files
Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image.
If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables.
Procedure
For example, to disable assets compilation for your Rails application during the build:
- Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file.
In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production:
- Add RAILS_ENV=development to the .s2i/environment file.
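Taken together, a .s2i/environment file that applies both of the settings above is a plain-text file with one variable per line:

DISABLE_ASSET_COMPILATION=true
RAILS_ENV=development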
The complete list of supported environment variables is available in the using images section for each image.
5.2.3.2. Using source-to-image build configuration environment
You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code.
Procedure
For example, to disable assets compilation for your Rails application:
sourceStrategy:
  ...
  env:
    - name: "DISABLE_ASSET_COMPILATION"
      value: "true"
5.2.4. Ignoring source-to-image source files
Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script.
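For illustration, a .s2iignore file lists one glob pattern per line; the patterns below are arbitrary examples that would exclude markdown files and a local tmp directory from the build working directory:

*.md
tmp/*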
5.2.5. Creating images from source code with source-to-image
Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.
The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts for your images to provide the best S2I performance: the build process and S2I scripts.
5.2.5.1. Understanding the source-to-image build process
The build process consists of the following three fundamental elements, which are combined into a final container image:
- Sources
- Source-to-image (S2I) scripts
- Builder image
S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah.
5.2.5.2. How to write source-to-image scripts
You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options for providing the assemble/run/save-artifacts scripts. All of these locations are checked on each build in the following order:
- A script specified in the build configuration.
- A script found in the application source .s2i/bin directory.
- A script found at the default image URL with the io.openshift.s2i.scripts-url label.
Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms:
- image:///path_to_scripts_dir: absolute path inside the image to a directory where the S2I scripts are located.
- file:///path_to_scripts_dir: relative or absolute path to a directory on the host where the S2I scripts are located.
- http(s)://path_to_scripts_dir: URL to a directory where the S2I scripts are located.
| Script | Description |
|---|---|
| assemble | The assemble script builds the application artifacts from the source and places them into the appropriate directories inside the output image. This script is required. |
| run | The run script executes your application. This script is required. |
| save-artifacts | The save-artifacts script gathers all of the dependencies that can speed up subsequent builds, such as installed gems for Ruby or the node_modules directory for Node.js. These dependencies are gathered into a tar file and streamed to the standard output. This script is optional. |
| usage | The usage script informs the user how to properly use your image. This script is optional. |
| test/run | The test/run script creates a process to check whether the image is working correctly. This script is optional. Note: The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. |
Example S2I scripts
The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory.
assemble script:
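A minimal sketch of an assemble script, under the stated assumption that the source and any saved artifacts are unpacked into /tmp/s2i; the make targets are placeholders for your real build commands:

#!/bin/bash
# Restore build artifacts saved by a previous incremental build, if any were provided.
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
    mv /tmp/s2i/artifacts/* "$HOME/."
fi

# Move the application source into place.
mv /tmp/s2i/src "$HOME/src"

# Build and install the application; replace the make targets with your own build steps.
pushd "$HOME/src" > /dev/null
make all
make install
popd > /dev/null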
run script:
#!/bin/bash
# run the application
/opt/application/run.sh
save-artifacts script:
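A minimal sketch of a save-artifacts script; it streams a tar archive of reusable dependencies to standard output, and the deps directory is a placeholder:

#!/bin/bash
# Stream reusable dependencies to standard output as a tar archive so that a
# later incremental build can restore them in its assemble script.
pushd "$HOME" > /dev/null
if [ -d deps ]; then
    tar cf - deps
fi
popd > /dev/null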
usage script:
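A minimal sketch of a usage script, which only prints a short help text describing how the builder image is meant to be used:

#!/bin/bash
# Inform the user how to use this builder image.
cat <<EOF
This is an S2I sample builder image. To use it, install source-to-image:
https://github.com/openshift/source-to-image
EOF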
5.2.6. Using build volumes
You can mount build volumes to give running builds access to information that you do not want to persist in the output container image.
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.
The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.
Prerequisites
- You have added an input secret, config map, or both to a BuildConfig object.
Procedure
In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. Each entry uses the following fields (an illustrative sketch follows this list):

- name - Specifies a unique name.
- destinationPath - Specifies the absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images.
- type - Specifies the type of source: ConfigMap, Secret, or CSI.
- secretName - Specifies the name of the source.
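As with the docker strategy, a minimal sketch of one volumes entry, using placeholder names and the same assumed nesting of mounts and source under the volume:

sourceStrategy:
  volumes:
    - name: settings-mvn                               # unique name for this build volume
      mounts:
        - destinationPath: "/opt/app-root/src/.m2"     # absolute mount point read during the build
      source:
        type: Secret                                   # ConfigMap, Secret, or CSI
        secret:
          secretName: my-maven-settings-secret         # name of the source secret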
5.3. Pipeline build
The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton.
Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Dedicated in the same way as any other build type.
Pipeline workflows are defined in a jenkinsfile, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
5.3.1. Understanding OpenShift Dedicated pipelines
The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton.
Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
Pipelines give you control over building, deploying, and promoting your applications on OpenShift Dedicated. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles, and the OpenShift Dedicated Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario.
OpenShift Dedicated Jenkins Sync Plugin
The OpenShift Dedicated Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:
- Dynamic job and run creation in Jenkins.
- Dynamic creation of agent pod templates from image streams, image stream tags, or config maps.
- Injection of environment variables.
- Pipeline visualization in the OpenShift Dedicated web console.
- Integration with the Jenkins Git plugin, which passes commit information from OpenShift Dedicated builds to the Jenkins Git plugin.
- Synchronization of secrets into Jenkins credential entries.
OpenShift Dedicated Jenkins Client Plugin
The OpenShift Dedicated Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with the OpenShift Dedicated API Server. The plugin uses the OpenShift Dedicated command-line tool, oc, which must be available on the nodes executing the script.
The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Dedicated DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Dedicated Jenkins image.
For OpenShift Dedicated Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options:
- An inline jenkinsfile field within your build configuration.
- A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir.
The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile.
5.3.2. Providing the Jenkins file for pipeline builds
The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton.
Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
The jenkinsfile uses the standard Groovy language syntax to allow fine-grained control over the configuration, build, and deployment of your application.
You can supply the jenkinsfile in one of the following ways:
- A file located within your source code repository.
- Embedded as part of your build configuration using the jenkinsfile field.
When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations:
- A file named jenkinsfile at the root of your repository.
- A file named jenkinsfile at the root of the source contextDir of your repository.
- A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied; otherwise it defaults to the root of the repository.
The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Dedicated client binaries available if you intend to use the OpenShift Dedicated DSL.
Procedure
To provide the Jenkins file, you can either:
- Embed the Jenkins file in the build configuration.
- Include in the build configuration a reference to the Git repository that contains the Jenkins file.
Embedded Definition
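A minimal sketch of an embedded definition, assuming a BuildConfig named sample-pipeline (a placeholder) and a trivial scripted pipeline body:

kind: "BuildConfig"
apiVersion: "build.openshift.io/v1"
metadata:
  name: "sample-pipeline"
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('nodejs') {
          stage('build') {
            sh 'echo "building the application"'
          }
        }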
Reference to Git Repository
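A minimal sketch of a build configuration that references a Git repository; the repository URI and jenkinsfilePath value are placeholders:

kind: "BuildConfig"
apiVersion: "build.openshift.io/v1"
metadata:
  name: "sample-pipeline"
spec:
  source:
    git:
      uri: "https://github.com/openshift/ruby-hello-world"   # placeholder repository
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: some/repo/dir/filename                # optional; see the note that follows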
The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile.
5.3.3. Using environment variables for pipeline builds
The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton.
Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration.
Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration.
Procedure
To define environment variables to be used during build, edit the YAML file:
jenkinsPipelineStrategy:
  ...
  env:
    - name: "FOO"
      value: "BAR"
You can also manage environment variables defined in the build configuration with the oc set env command.
5.3.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters
When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables.
After the Jenkins job’s initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs.
How you start builds for the Jenkins job dictates how the parameters are set.
- If you start with oc start-build, the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence.
- If you start with oc start-build -e, the values for the environment variables specified in the -e option take precedence.
  - If you specify an environment variable not listed in the build configuration, it is added as a Jenkins job parameter definition.
  - Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e take precedence.
- If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job.
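For example, to override the FOO variable shown earlier for a single run of a pipeline build configuration named sample-pipeline (a placeholder name):

$ oc start-build sample-pipeline -e FOO=bar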
It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing.
5.3.4. Pipeline build tutorial
The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton.
Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
This example demonstrates how to create an OpenShift Dedicated Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template.
Procedure
Create the Jenkins master:
$ oc project <project_name>

Select the project that you want to use or create a new project with oc new-project <project_name>.

$ oc new-app jenkins-ephemeral

If you want to use persistent storage, use jenkins-persistent instead.

Create a file named nodejs-sample-pipeline.yaml with the following content:

Note: This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application.
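The following is a minimal sketch of that file; the apiVersion shown and the elided inline jenkinsfile (covered in the next step) are assumptions:

kind: "BuildConfig"
apiVersion: "build.openshift.io/v1"
metadata:
  name: "nodejs-sample-pipeline"
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        <inline jenkinsfile content goes here; see the condensed Groovy sketch later in this procedure>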
After you create a BuildConfig object with a jenkinsPipelineStrategy, tell the pipeline what to do by using an inline jenkinsfile:

Note: This example does not set up a Git repository for the application.

The following jenkinsfile content is written in Groovy using the OpenShift Dedicated DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method. The numbered notes below call out the key steps of that jenkinsfile, and a condensed sketch of it follows the notes.
1. Path of the template to use.
2. Name of the template that will be created.
3. Spin up a node.js agent pod on which to run this build.
4. Set a timeout of 20 minutes for this pipeline.
5. Delete everything with this template label.
6. Delete any secrets with this template label.
7. Create a new application from the templatePath.
8. Wait up to five minutes for the build to complete.
9. Wait up to five minutes for the deployment to complete.
10. If everything else succeeded, tag the ${templateName}:latest image as ${templateName}-staging:latest. A pipeline build configuration for the staging environment can watch for the ${templateName}-staging:latest image to change and then deploy it to the staging environment.
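A condensed sketch of such a jenkinsfile, written against the OpenShift Jenkins Client Plugin DSL and trimmed for brevity; the template URL is an assumption based on the nodejs-ex example repository, and some stages and error handling are omitted:

def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json'
def templateName = 'nodejs-mongodb-example'
pipeline {
  agent { node { label 'nodejs' } }                 // spin up a node.js agent pod for this build
  options { timeout(time: 20, unit: 'MINUTES') }    // 20-minute timeout for the whole pipeline
  stages {
    stage('cleanup') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              // delete everything, and any secrets, carrying this template label
              openshift.selector("all", [ template : templateName ]).delete()
              if (openshift.selector("secrets", templateName).exists()) {
                openshift.selector("secrets", templateName).delete()
              }
            }
          }
        }
      }
    }
    stage('create') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.newApp(templatePath)         // create a new application from the template
            }
          }
        }
      }
    }
    stage('build') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              def builds = openshift.selector("bc", templateName).related('builds')
              timeout(5) {                           // wait up to five minutes for the build
                builds.untilEach(1) {
                  return (it.object().status.phase == "Complete")
                }
              }
            }
          }
        }
      }
    }
    stage('deploy') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              timeout(5) {                           // wait up to five minutes for the deployment
                openshift.selector("dc", templateName).related('pods').untilEach(1) {
                  return (it.object().status.phase == "Running")
                }
              }
            }
          }
        }
      }
    }
    stage('tag') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              // tag the built image so a staging pipeline can pick it up
              openshift.tag("${templateName}:latest", "${templateName}-staging:latest")
            }
          }
        }
      }
    }
  }
}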
Note: The previous example was written using the declarative pipeline style, but the older scripted pipeline style is also supported.
Create the Pipeline BuildConfig in your OpenShift Dedicated cluster:

$ oc create -f nodejs-sample-pipeline.yaml

If you do not want to create your own file, you can use the sample from the Origin repository by running:

$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml
Start the Pipeline:
$ oc start-build nodejs-sample-pipeline

Note: Alternatively, you can start your pipeline with the OpenShift Dedicated web console by navigating to the Builds → Pipeline section and clicking Start Pipeline, or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now.

Once the pipeline is started, you should see the following actions performed within your project:
- A job instance is created on the Jenkins server.
- An agent pod is launched, if your pipeline requires one.
- The pipeline runs on the agent pod, or the master if no agent is required.
  - Any previously created resources with the template=nodejs-mongodb-example label will be deleted.
  - A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template.
  - A build will be started using the nodejs-mongodb-example BuildConfig.
    - The pipeline will wait until the build has completed to trigger the next stage.
  - A deployment will be started using the nodejs-mongodb-example deployment configuration.
    - The pipeline will wait until the deployment has completed to trigger the next stage.
  - If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example:stage.
- The agent pod is deleted, if one was required for the pipeline.
Note: The best way to visualize the pipeline execution is by viewing it in the OpenShift Dedicated web console. You can view your pipelines by logging in to the web console and navigating to Builds → Pipelines.
5.4. Adding secrets with web console
You can add a secret to your build configuration so that it can access a private repository.
Procedure
To add a secret to your build configuration so that it can access a private repository from the OpenShift Dedicated web console:
- Create a new OpenShift Dedicated project.
- Create a secret that contains credentials for accessing a private source code repository.
- Create a build configuration.
- On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret.
- Click Save.
5.5. Enabling pulling and pushing
You can enable pulling from a private registry by setting the pull secret in the build configuration, and enable pushing to a private registry by setting the push secret.
Procedure
To enable pulling from a private registry:
- Set the pull secret in the build configuration.
To enable pushing:
- Set the push secret in the build configuration.
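A minimal sketch of where these secrets appear in a BuildConfig; the registry, image, and secret names are placeholders:

spec:
  output:
    to:
      kind: "DockerImage"
      name: "private.registry.example.com/myorg/myimage:latest"
    pushSecret:
      name: "push-secret"        # credentials for pushing to the output registry
  strategy:
    dockerStrategy:
      pullSecret:
        name: "pull-secret"      # credentials for pulling the base image during the build
      from:
        kind: "DockerImage"
        name: "private.registry.example.com/myorg/base:latest"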