Chapter 3. Understanding OpenShift Pipelines
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
3.1. Key features
- Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
- Red Hat OpenShift Pipelines is designed for decentralized teams that work on a microservice-based architecture.
- Red Hat OpenShift Pipelines uses standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on demand.
- You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
- You can use the OpenShift Container Platform Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces.
3.2. OpenShift Pipelines Concepts
This guide provides a detailed view of the various pipeline concepts.
3.2.1. Tasks
Task resources are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function of inputs and outputs. A task can run individually or as a part of a pipeline. Tasks are reusable and can be used in multiple pipelines.
Steps are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets.
The following example shows the apply-manifests task.
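A minimal sketch of what this task definition might look like follows; the container image, the manifest_dir parameter, and the oc apply commands are illustrative assumptions rather than values defined in this chapter.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: apply-manifests
spec:
  workspaces:
  - name: source
  params:
  - name: manifest_dir
    type: string
    description: Directory in the source workspace that contains the YAML manifests (assumed)
    default: k8s
  steps:
  - name: apply
    # Assumed image and commands, shown for illustration only
    image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
    workingDir: /workspace/source
    command: ["/bin/bash", "-c"]
    args:
    - |-
      echo Applying manifests in $(params.manifest_dir) directory
      oc apply -f $(params.manifest_dir)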
This task starts the pod and runs a container inside that pod using the specified image to run the specified commands.
Starting with OpenShift Pipelines 1.6, the following defaults from the step YAML file are removed:
- The HOME environment variable does not default to the /tekton/home directory.
- The workingDir field does not default to the /workspace directory.
Instead, the container for the step defines the HOME environment variable and the workingDir field. However, you can override the default values by specifying the custom values in the YAML file for the step.
As a temporary measure, to maintain backward compatibility with the older OpenShift Pipelines versions, you can set the following fields in the TektonConfig custom resource definition to false:
spec:
  pipeline:
    disable-working-directory-overwrite: false
    disable-home-env-overwrite: false
3.2.2. When expression
When expressions guard task execution by setting criteria for the execution of tasks within a pipeline. They contain a list of components that allows a task to run only when certain criteria are met. When expressions are also supported in the final set of tasks that are specified using the finally field in the pipeline YAML file.
The key components of a when expression are as follows:
- input: Specifies static inputs or variables such as a parameter, task result, and execution status. You must enter a valid input. If you do not enter a valid input, its value defaults to an empty string.
- operator: Specifies the relationship of an input to a set of values. Enter in or notin as your operator values.
- values: Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace.
The declared when expressions are evaluated before the task is run. If the value of a when expression is True, the task is run. If the value of a when expression is False, the task is skipped.
You can use when expressions in various use cases, for example, to check whether:
- The result of a previous task is as expected.
- A file in a Git repository has changed in the previous commits.
- An image exists in the registry.
- An optional workspace is available.
The following example shows the when expressions for a pipeline run. The pipeline run executes the create-file task only if the path parameter is README.md, and the echo-file-exists task only if the exists result from the check-file task is yes.
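The referenced example is sketched below; the step images, scripts, parameter wiring, and storage size are assumptions, and the numbered comments correspond to the callouts that follow.

# Illustrative sketch; the step images, scripts, parameter wiring, and storage size are assumptions.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun    # 1
metadata:
  generateName: guarded-pr-
spec:
  params:
  - name: path
    value: README.md
  workspaces:
  - name: source
    volumeClaimTemplate:
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 16Mi
  pipelineSpec:
    params:
    - name: path
    workspaces:
    - name: source
    tasks:
    - name: create-file    # 2
      when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
      workspaces:
      - name: source
        workspace: source
      taskSpec:
        workspaces:
        - name: source
        steps:
        - name: write-new-stuff
          image: alpine
          script: 'touch $(workspaces.source.path)/README.md'
    - name: check-file
      runAfter: ["create-file"]
      params:
      - name: path
        value: "$(params.path)"
      workspaces:
      - name: source
        workspace: source
      taskSpec:
        params:
        - name: path
        workspaces:
        - name: source
        results:
        - name: exists
        steps:
        - name: check-file
          image: alpine
          script: |
            test -f $(workspaces.source.path)/$(params.path) \
              && printf yes > $(results.exists.path) \
              || printf no > $(results.exists.path)
    - name: echo-file-exists
      when:    # 3
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
      taskSpec:
        steps:
        - name: echo
          image: alpine
          script: 'echo file exists'
    - name: task-should-be-skipped-1
      when:    # 4
      - input: "$(params.path)"
        operator: notin
        values: ["README.md"]
      taskSpec:
        steps:
        - name: echo
          image: alpine
          script: 'exit 1'
    finally:
    - name: finally-task-should-be-executed
      when:    # 5
      - input: "$(tasks.echo-file-exists.status)"
        operator: in
        values: ["Succeeded"]
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
      taskSpec:
        steps:
        - name: echo
          image: alpine
          script: 'echo finally done'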
1. Specifies the type of Kubernetes object. In this example, PipelineRun.
2. Task create-file used in the pipeline.
3. when expression that specifies to execute the echo-file-exists task only if the exists result from the check-file task is yes.
4. when expression that specifies to skip the task-should-be-skipped-1 task only if the path parameter is README.md.
5. when expression that specifies to execute the finally-task-should-be-executed task only if the execution status of the echo-file-exists task is Succeeded, the exists result from the check-file task is yes, and the path parameter is README.md.
The Pipeline Run details page of the OpenShift Container Platform web console shows the status of the tasks and when expressions as follows:
- All the criteria are met: Tasks and the when expression symbol, which is represented by a diamond shape, are green.
- Any one of the criteria is not met: The task is skipped. Skipped tasks and the when expression symbol are grey.
- None of the criteria are met: Task is skipped. Skipped tasks and the when expression symbol are grey.
- Task run fails: Failed tasks and the when expression symbol are red.
3.2.3. Finally tasks
The finally tasks are the final set of tasks specified using the finally field in the pipeline YAML file. A finally task is always executed, irrespective of whether the pipeline run completes successfully. The finally tasks are executed in parallel after all the pipeline tasks are run, before the corresponding pipeline exits.
You can configure a finally task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task is run. It is executed in parallel with other final tasks after all the non-final tasks are executed.
The following example shows a code snippet of the clone-cleanup-workspace pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After executing the pipeline tasks, the cleanup task specified in the finally section of the pipeline YAML file cleans up the workspace.
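A condensed sketch of such a pipeline follows; the git-source workspace name, the catalog task names, the repository URL, and the step image are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; names and values not mentioned in this chapter are assumptions.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-cleanup-workspace    # 1
spec:
  workspaces:
  - name: git-source    # 2
  tasks:
  - name: clone-app-repo    # 3
    taskRef:
      name: git-clone-from-catalog    # assumed task name
    params:
    - name: url
      value: https://github.com/example/application.git    # assumed repository URL
    - name: subdirectory
      value: application
    workspaces:
    - name: output
      workspace: git-source
  finally:
  - name: cleanup    # 4
    taskRef:    # 5
      name: cleanup-workspace    # assumed task name
    workspaces:    # 6
    - name: source
      workspace: git-source
  - name: check-git-commit
    params:    # 7
    - name: commit
      value: $(tasks.clone-app-repo.results.commit)
    taskSpec:    # 8
      params:
      - name: commit
      steps:
      - name: check-commit-initialized
        image: alpine    # assumed image
        script: |
          if [ -z "$(params.commit)" ]; then
            exit 1
          fi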
1. Unique name of the pipeline.
2. The shared workspace where the Git repository is cloned.
3. The task to clone the application repository to the shared workspace.
4. The task to clean up the shared workspace.
5. A reference to the task that is to be executed in the task run.
6. A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output.
7. A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value.
8. Embedded task definition.
3.2.4. TaskRun
A task run instantiates a task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a pipeline run for each task in a pipeline.
A task consists of one or more steps that execute container images, and each container image performs a specific piece of build work. A task run executes the steps in a task in the specified order, until all steps execute successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each task in a pipeline.
The following example shows a task run that runs the apply-manifests task with the relevant input parameters:
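A minimal sketch of such a task run follows; the task run name, service account, and persistent volume claim name are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; the metadata name, service account, and claim name are assumptions.
apiVersion: tekton.dev/v1beta1    # 1
kind: TaskRun    # 2
metadata:
  name: apply-manifests-taskrun    # 3
spec:    # 4
  serviceAccountName: pipeline
  taskRef:    # 5
    name: apply-manifests
  workspaces:    # 6
  - name: source
    persistentVolumeClaim:
      claimName: source-pvc    # assumed claim name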
1. The task run API version v1beta1.
2. Specifies the type of Kubernetes object. In this example, TaskRun.
3. Unique name to identify this task run.
4. Definition of the task run. For this task run, the task and the required workspace are specified.
5. Name of the task reference used for this task run. This task run executes the apply-manifests task.
6. Workspace used by the task run.
3.2.5. Pipelines
A pipeline is a collection of Task resources arranged in a specific order of execution. Pipelines are executed to construct complex workflows that automate the build, deployment, and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks.
A Pipeline resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each Pipeline resource definition must contain at least one Task resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include Conditions, Workspaces, Parameters, or Resources depending on the application requirements.
The following example shows the build-and-deploy pipeline, which builds an application image from a Git repository using the buildah ClusterTask resource:
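A condensed sketch of such a pipeline follows; the parameter names, the fetch-repository task that clones the source with the git-clone cluster task, and the default values are assumptions rather than content from this chapter. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; parameter names, cluster task wiring, and defaults are assumptions.
apiVersion: tekton.dev/v1beta1    # 1
kind: Pipeline    # 2
metadata:
  name: build-and-deploy    # 3
spec:    # 4
  workspaces:    # 5
  - name: shared-workspace
  params:    # 6
  - name: deployment-name
    type: string
    description: Name of the deployment to be patched (assumed)
  - name: git-url
    type: string
    description: URL of the Git repository containing the application source (assumed)
  - name: git-revision
    type: string
    description: Revision to be checked out (assumed)
    default: main
  - name: IMAGE
    type: string
    description: Image to be built from the source (assumed)
  tasks:    # 7
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
    - name: revision
      value: $(params.git-revision)
  - name: build-image    # 8
    taskRef:
      name: buildah
      kind: ClusterTask
    params:
    - name: IMAGE
      value: $(params.IMAGE)
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:
    - fetch-repository
  - name: apply-manifests    # 9
    taskRef:
      name: apply-manifests
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:    # 10
    - build-image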
1. Pipeline API version v1beta1.
2. Specifies the type of Kubernetes object. In this example, Pipeline.
3. Unique name of this pipeline.
4. Specifies the definition and structure of the pipeline.
5. Workspaces used across all the tasks in the pipeline.
6. Parameters used across all the tasks in the pipeline.
7. Specifies the list of tasks used in the pipeline.
8. Task build-image, which uses the buildah ClusterTask to build application images from a given Git repository.
9. Task apply-manifests, which uses a user-defined task with the same name.
10. Specifies the sequence in which tasks are run in a pipeline. In this example, the apply-manifests task is run only after the build-image task is completed.
The Red Hat OpenShift Pipelines Operator installs the Buildah cluster task and creates the pipeline service account with sufficient permission to build and push an image. The Buildah cluster task can fail when associated with a different service account with insufficient permissions.
3.2.6. PipelineRun
A PipelineRun is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow.
A pipeline run is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run.
The pipeline runs the tasks sequentially until they are complete or a task fails. The status field tracks the progress of each task run and stores it for monitoring and auditing purposes.
The following example runs the build-and-deploy pipeline with relevant resources and parameters:
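A minimal sketch of such a pipeline run follows; the parameter values, repository URL, image reference, and storage size are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; parameter values, URLs, and the storage size are assumptions.
apiVersion: tekton.dev/v1beta1    # 1
kind: PipelineRun    # 2
metadata:
  name: build-deploy-api-pipelinerun    # 3
spec:
  pipelineRef:
    name: build-and-deploy    # 4
  params:    # 5
  - name: deployment-name
    value: vote-api    # assumed value
  - name: git-url
    value: https://github.com/example/vote-api.git    # assumed repository URL
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api    # assumed image
  workspaces:    # 6
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi    # assumed size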
1. Pipeline run API version v1beta1.
2. The type of Kubernetes object. In this example, PipelineRun.
3. Unique name to identify this pipeline run.
4. Name of the pipeline to be run. In this example, build-and-deploy.
5. The list of parameters required to run the pipeline.
6. Workspace used by the pipeline run.
3.2.7. Workspaces
It is recommended that you use workspaces instead of the PipelineResource CRs in Red Hat OpenShift Pipelines, as PipelineResource CRs are difficult to debug, limited in scope, and make tasks less reusable.
Workspaces declare shared storage volumes that a task in a pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A task or pipeline declares the workspace and you must provide the specific location details of the volume. It is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment.
With workspaces, you can:
- Store task inputs and outputs
- Share data among tasks
- Use it as a mount point for credentials held in secrets
- Use it as a mount point for configurations held in config maps
- Use it as a mount point for common tools shared by an organization
- Create a cache of build artifacts that speed up jobs
You can specify workspaces in the TaskRun or PipelineRun using:
- A read-only config map or secret
- An existing persistent volume claim shared with other tasks
- A persistent volume claim from a provided volume claim template
- An emptyDir that is discarded when the task run completes
The following example shows a code snippet of the build-and-deploy pipeline, which declares a shared-workspace workspace for the build-image and apply-manifests tasks as defined in the pipeline.
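A sketch of the relevant portion of the pipeline follows, with parameters and other fields that are not related to workspaces omitted. The numbered comments correspond to the callouts that follow.

# Illustrative snippet; fields not related to workspaces are omitted.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:    # 1
  - name: shared-workspace
  tasks:    # 2
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    workspaces:    # 3
    - name: source    # 4
      workspace: shared-workspace    # 5
  - name: apply-manifests
    taskRef:
      name: apply-manifests
    workspaces:    # 6
    - name: source
      workspace: shared-workspace
    runAfter:
    - build-image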
1. List of workspaces shared between the tasks defined in the pipeline. A pipeline can define as many workspaces as required. In this example, only one workspace named shared-workspace is declared.
2. Definition of tasks used in the pipeline. This snippet defines two tasks, build-image and apply-manifests, which share a common workspace.
3. List of workspaces used in the build-image task. A task definition can include as many workspaces as it requires. However, it is recommended that a task uses at most one writable workspace.
4. Name that uniquely identifies the workspace used in the task. This task uses one workspace named source.
5. Name of the pipeline workspace used by the task. Note that the workspace source in turn uses the pipeline workspace named shared-workspace.
6. List of workspaces used in the apply-manifests task. Note that this task shares the source workspace with the build-image task.
Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you.
The following code snippet of the build-deploy-api-pipelinerun pipeline run uses a volume claim template to create a persistent volume claim for defining the storage volume for the shared-workspace workspace used in the build-and-deploy pipeline.
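A sketch of the relevant portion of the pipeline run follows; parameters and other fields are omitted, and the storage size and access mode are assumptions. The numbered comments correspond to the callouts that follow.

# Illustrative snippet; the storage size and access mode are assumptions.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-deploy-api-pipelinerun
spec:
  pipelineRef:
    name: build-and-deploy
  workspaces:    # 1
  - name: shared-workspace    # 2
    volumeClaimTemplate:    # 3
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi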
1. Specifies the list of pipeline workspaces for which volume binding will be provided in the pipeline run.
2. The name of the workspace in the pipeline for which the volume is being provided.
3. Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace.
3.2.8. Triggers
Use Triggers in conjunction with pipelines to create a full-fledged CI/CD system where Kubernetes resources define the entire CI/CD execution. Triggers capture the external events, such as a Git pull request, and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources and instantiate the pipeline.
For example, you define a CI/CD workflow using Red Hat OpenShift Pipelines for your application. The pipeline must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes.
Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system:
The TriggerBinding resource extracts the fields from an event payload and stores them as parameters.

The following example shows a code snippet of the TriggerBinding resource, which extracts the Git repository information from the received event payload:
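A minimal sketch of such a TriggerBinding follows; the resource name vote-app and the exact payload field paths are assumptions based on a typical Git push payload. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; the resource name and payload field paths are assumptions.
apiVersion: triggers.tekton.dev/v1beta1    # 1
kind: TriggerBinding    # 2
metadata:
  name: vote-app    # 3
spec:
  params:    # 4
  - name: git-repo-url
    value: $(body.repository.url)
  - name: git-repo-name
    value: $(body.repository.name)
  - name: git-revision
    value: $(body.head_commit.id)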
1. The API version of the TriggerBinding resource. In this example, v1beta1.
2. Specifies the type of Kubernetes object. In this example, TriggerBinding.
3. Unique name to identify the TriggerBinding resource.
4. List of parameters which will be extracted from the received event payload and passed to the TriggerTemplate resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload.
The TriggerTemplate resource acts as a standard for the way resources must be created. It specifies the way parameterized data from the TriggerBinding resource should be used. A trigger template receives input from the trigger binding, and then performs a series of actions that result in the creation of new pipeline resources and the initiation of a new pipeline run.

The following example shows a code snippet of a TriggerTemplate resource, which creates a pipeline run using the Git repository information received from the TriggerBinding resource you just created:
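A condensed sketch of such a TriggerTemplate follows; the resource name, service account, image reference, default revision, and storage size are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; names and values not mentioned in this chapter are assumptions.
apiVersion: triggers.tekton.dev/v1beta1    # 1
kind: TriggerTemplate    # 2
metadata:
  name: vote-app    # 3
spec:
  params:    # 4
  - name: git-repo-url
    description: The Git repository URL
  - name: git-revision
    description: The Git revision
    default: main    # assumed default
  - name: git-repo-name
    description: The name of the deployment to be created or patched
  resourcetemplates:    # 5
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: build-deploy-$(tt.params.git-repo-name)-
    spec:
      serviceAccountName: pipeline
      pipelineRef:
        name: build-and-deploy
      params:
      - name: deployment-name
        value: $(tt.params.git-repo-name)
      - name: git-url
        value: $(tt.params.git-repo-url)
      - name: git-revision
        value: $(tt.params.git-revision)
      - name: IMAGE
        value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(tt.params.git-repo-name)    # assumed image
      workspaces:
      - name: shared-workspace
        volumeClaimTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 500Mi    # assumed size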
1. The API version of the TriggerTemplate resource. In this example, v1beta1.
2. Specifies the type of Kubernetes object. In this example, TriggerTemplate.
3. Unique name to identify the TriggerTemplate resource.
4. Parameters supplied by the TriggerBinding resource.
5. List of templates that specify the way resources must be created using the parameters received through the TriggerBinding or EventListener resources.
The Trigger resource combines the TriggerBinding and TriggerTemplate resources, and optionally, the interceptors event processor.

Interceptors process all the events for a specific platform that run before the TriggerBinding resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use a secret for event verification. After the event data passes through an interceptor, it goes to the trigger before you pass the payload data to the trigger binding. You can also use an interceptor to modify the behavior of the associated trigger referenced in the EventListener specification.

The following example shows a code snippet of a Trigger resource, named vote-trigger, that connects the TriggerBinding and TriggerTemplate resources, and the interceptors event processor.
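A sketch of such a Trigger follows; the secret name and key, the event types, and the vote-app binding and template names are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; the secret details, event types, and referenced resource names are assumptions.
apiVersion: triggers.tekton.dev/v1beta1    # 1
kind: Trigger    # 2
metadata:
  name: vote-trigger    # 3
spec:
  serviceAccountName: pipeline    # 4
  interceptors:
  - ref:
      name: "github"    # 5
    params:    # 6
    - name: "secretRef"
      value:
        secretName: github-secret
        secretKey: secretToken
    - name: "eventTypes"
      value: ["push"]
  bindings:
  - ref: vote-app    # 7
  template:    # 8
    ref: vote-app
---
apiVersion: v1
kind: Secret    # 9
metadata:
  name: github-secret
type: Opaque
stringData:
  secretToken: "1234567"    # assumed token value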
1. The API version of the Trigger resource. In this example, v1beta1.
2. Specifies the type of Kubernetes object. In this example, Trigger.
3. Unique name to identify the Trigger resource.
4. Service account name to be used.
5. Interceptor name to be referenced. In this example, github.
6. Desired parameters to be specified.
7. Name of the TriggerBinding resource to be connected to the TriggerTemplate resource.
8. Name of the TriggerTemplate resource to be connected to the TriggerBinding resource.
9. Secret to be used to verify events.
The EventListener resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from each TriggerBinding resource, and then processes this data to create Kubernetes resources as specified by the corresponding TriggerTemplate resource. The EventListener resource also performs lightweight event processing or basic filtering on the payload using event interceptors, which identify the type of payload and optionally modify it. Currently, pipeline triggers support five types of interceptors: Webhook Interceptors, GitHub Interceptors, GitLab Interceptors, Bitbucket Interceptors, and Common Expression Language (CEL) Interceptors.

The following example shows an EventListener resource, which references the Trigger resource named vote-trigger.
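A minimal sketch of such an EventListener follows; the listener name and service account are assumptions for illustration. The numbered comments correspond to the callouts that follow.

# Illustrative sketch; the listener name and service account are assumptions.
apiVersion: triggers.tekton.dev/v1beta1    # 1
kind: EventListener    # 2
metadata:
  name: vote-app    # 3
spec:
  serviceAccountName: pipeline    # 4
  triggers:
  - triggerRef: vote-trigger    # 5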
1. The API version of the EventListener resource. In this example, v1beta1.
2. Specifies the type of Kubernetes object. In this example, EventListener.
3. Unique name to identify the EventListener resource.
4. Service account name to be used.
5. Name of the Trigger resource referenced by the EventListener resource.