About OpenShift Pipelines
Introduction to OpenShift Pipelines
Chapter 1. About Red Hat OpenShift Pipelines
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
Because Red Hat OpenShift Pipelines releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift Pipelines documentation is now available as separate documentation sets for each minor version of the product.
The Red Hat OpenShift Pipelines documentation is available at OpenShift Pipelines documentation.
You can access documentation for specific versions using the version selector drop-down list, or directly by adding the version to the URL, for example, OpenShift Pipelines version 1.21 documentation.
In addition, the Red Hat OpenShift Pipelines documentation is also available on the Red Hat Customer Portal at https://access.redhat.com/documentation/en-us/red_hat_openshift_pipelines/.
For additional information about the Red Hat OpenShift Pipelines life cycle and supported platforms, refer to the Platform Life Cycle Policy.
Chapter 2. Understanding OpenShift Pipelines
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
2.1. Key features
- Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
- Red Hat OpenShift Pipelines is designed for decentralized teams that work on a microservice-based architecture.
- Red Hat OpenShift Pipelines uses standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on-demand.
- You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
- You can use the OpenShift Container Platform Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces.
2.2. OpenShift Pipelines concepts
This guide provides a detailed view of the various pipeline concepts.
2.2.1. Tasks
You can use Task resources as the building blocks of a pipeline to define a set of sequentially executed steps. Each task functions as a reusable unit of work with specific inputs and outputs, capable of running individually or as part of a larger pipeline.
Task resources are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function of inputs and outputs. A task can run individually or as a part of a pipeline. You can reuse tasks in many pipelines.
Steps are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets.
The following example shows the apply-manifests task.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: apply-manifests
spec:
  workspaces:
  - name: source
  params:
  - name: manifest_dir
    description: The directory in source that contains yaml manifests
    type: string
    default: "k8s"
  steps:
  - name: apply
    image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
    workingDir: /workspace/source
    command: ["/bin/bash", "-c"]
    args:
    - |-
      echo Applying manifests in $(params.manifest_dir) directory
      oc apply -f $(params.manifest_dir)
      echo -----------------------------------
- apiVersion: The task API version, v1.
- kind: The type of Kubernetes object, Task.
- metadata.name: The unique name of this task.
- spec: The list of parameters and steps in the task and the workspace used by the task.
This task starts the pod and runs a container inside that pod by using the specified image to run the specified commands.
Starting with OpenShift Pipelines 1.6, the step YAML file no longer has the following defaults:
- The HOME environment variable does not default to the /tekton/home directory.
- The workingDir field does not default to the /workspace directory.
Instead, the container for the step defines the HOME environment variable and the workingDir field. However, you can override the default values by specifying the custom values in the YAML file for the step.
As a temporary measure, to keep backward compatibility with the older OpenShift Pipelines versions, you can set the following fields in the TektonConfig custom resource (CR) to false:
spec:
  pipeline:
    disable-working-directory-overwrite: false
    disable-home-env-overwrite: false
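For context, these fields sit under spec.pipeline in the cluster-wide TektonConfig resource managed by the Red Hat OpenShift Pipelines Operator. A minimal sketch of the full resource, assuming the typical singleton resource name config, might look like this:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  # The Operator typically manages a single TektonConfig resource named "config"
  name: config
spec:
  pipeline:
    # Temporary backward-compatibility switches for pre-1.6 step defaults
    disable-working-directory-overwrite: false
    disable-home-env-overwrite: false
```

You would usually edit the existing resource in place, for example with oc edit tektonconfig config, rather than creating a new one.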
2.2.2. When expression
You can use 'when' expressions to guard task execution by defining specific criteria that must be met before a task runs. These expressions allow you to control the flow of your pipeline, including the execution of tasks in the finally section, based on static inputs, variables, or results from earlier tasks.
When expressions guard task execution by setting criteria for running tasks within a pipeline. They contain a list of components that allow a task to run only when certain criteria are met. You can also include when expressions in the final set of tasks that you specify by using the finally field in the pipeline YAML file.
The key components of a when expression are as follows:
- input: Specifies static inputs or variables such as a parameter, task result, or execution status. You must enter a valid input. If you do not enter a valid input, its value defaults to an empty string.
- operator: Specifies the relationship of an input to a set of values. Enter in or notin as your operator value.
- values: Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace.
The declared when expressions are evaluated before the task runs. If a when expression evaluates to true, the task runs. If it evaluates to false, the task is skipped.
You can use when expressions in various use cases, for example, to check whether:
- The result of a preceding task is as expected.
- A file in a Git repository has changed in the earlier commits.
- An image exists in the registry.
- An optional workspace is available.
The following example shows the when expressions for a pipeline run. The pipeline run runs the create-file task only if the path parameter is README.md, and runs the echo-file-exists task only if the exists result from the check-file task is yes.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: guarded-pr-
spec:
  taskRunTemplate:
    serviceAccountName: pipeline
  pipelineSpec:
    params:
    - name: path
      type: string
      description: The path of the file to be created
    workspaces:
    - name: source
      description: |
        This workspace is shared among all the pipeline tasks to read/write common resources
    tasks:
    - name: create-file
      when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
      workspaces:
      - name: source
        workspace: source
      taskSpec:
        workspaces:
        - name: source
          description: The workspace to create the readme file in
        steps:
        - name: write-new-stuff
          image: ubuntu
          script: 'touch $(workspaces.source.path)/README.md'
    - name: check-file
      params:
      - name: path
        value: "$(params.path)"
      workspaces:
      - name: source
        workspace: source
      runAfter:
      - create-file
      taskSpec:
        params:
        - name: path
        workspaces:
        - name: source
          description: The workspace to check for the file
        results:
        - name: exists
          description: indicates whether the file exists or is missing
        steps:
        - name: check-file
          image: alpine
          script: |
            if test -f $(workspaces.source.path)/$(params.path); then
              printf yes | tee /tekton/results/exists
            else
              printf no | tee /tekton/results/exists
            fi
    - name: echo-file-exists
      when:
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
      taskSpec:
        steps:
        - name: echo
          image: ubuntu
          script: 'echo file exists'
...
    - name: task-should-be-skipped-1
      when:
      - input: "$(params.path)"
        operator: notin
        values: ["README.md"]
      taskSpec:
        steps:
        - name: echo
          image: ubuntu
          script: exit 1
...
    finally:
    - name: finally-task-should-be-executed
      when:
      - input: "$(tasks.echo-file-exists.status)"
        operator: in
        values: ["Succeeded"]
      - input: "$(tasks.status)"
        operator: in
        values: ["Succeeded"]
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
      taskSpec:
        steps:
        - name: echo
          image: ubuntu
          script: 'echo finally done'
  params:
  - name: path
    value: README.md
  workspaces:
  - name: source
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 16Mi
- kind: Specifies the type of Kubernetes object. In this example, PipelineRun.
- tasks[0].name: Task create-file used in the pipeline.
- tasks[2].name.when: The when expression that specifies to run the echo-file-exists task only if the exists result from the check-file task is yes.
- tasks[3].name.when: The when expression that specifies to skip the task-should-be-skipped-1 task if the path parameter is README.md.
- finally[0].name.when: The when expression that specifies to run the finally-task-should-be-executed task only if the execution status of the echo-file-exists task is Succeeded, the overall tasks status is Succeeded, the exists result from the check-file task is yes, and the path parameter is README.md.
The Pipeline Run details page of the OpenShift Container Platform web console shows the status of the tasks and when expressions as follows:
- All the criteria are met: Tasks and the when expression symbol, which is represented as a diamond shape, appear in a success state.
- Any one of the criteria is not met: The task is skipped. Skipped tasks and the when expression symbol appear in a skipped state.
- None of the criteria are met: The task is skipped. Skipped tasks and the when expression symbol appear in a skipped state.
- Task run fails: Failed tasks and the when expression symbol appear in a failed state.
2.2.3. Finally tasks
You can use finally tasks to run a final set of tasks in your pipeline regardless of whether the earlier tasks succeed or fail. These tasks run in parallel after all other pipeline tasks finish, allowing you to perform cleanup or notification actions before the pipeline exits.
The finally tasks are the final set of tasks specified by using the finally field in the pipeline YAML file. The pipeline always runs the finally tasks, irrespective of whether the pipeline run succeeds or fails.
You can configure a finally task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task runs.
The following example shows a code snippet of the clone-cleanup-workspace pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After the pipeline tasks finish, the cleanup task specified in the finally section of the pipeline YAML file cleans up the workspace.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: clone-cleanup-workspace
spec:
  workspaces:
  - name: git-source
  tasks:
  - name: clone-app-repo
    taskRef:
      name: git-clone-from-catalog
    params:
    - name: url
      value: https://github.com/tektoncd/community.git
    - name: subdirectory
      value: application
    workspaces:
    - name: output
      workspace: git-source
  finally:
  - name: cleanup
    taskRef:
      name: cleanup-workspace
    workspaces:
    - name: source
      workspace: git-source
  - name: check-git-commit
    params:
    - name: commit
      value: $(tasks.clone-app-repo.results.commit)
    taskSpec:
      params:
      - name: commit
      steps:
      - name: check-commit-initialized
        image: alpine
        script: |
          if [[ ! $(params.commit) ]]; then
            exit 1
          fi
- metadata.name: The unique name of the pipeline.
- spec.workspaces[0].name: The shared workspace where the pipeline copies the Git repository.
- spec.tasks[0].name: The task that clones the application repository to the shared workspace.
- spec.finally[0].name: The task that cleans up the shared workspace.
- spec.finally[0].taskRef: A reference to the task that runs in the task run.
- spec.finally[0].workspaces: A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output.
- spec.finally[1].params: A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value.
- spec.finally[1].taskSpec: The embedded task definition.
2.2.4. Task run
You can use a TaskRun resource to instantiate and run a task with specific inputs, outputs, and execution parameters on a cluster. You can start a task run independently or as part of a pipeline run to run the steps defined in a task.
A TaskRun instantiates a task for execution with specific inputs, outputs, and execution parameters on a cluster. You can start it on its own or as part of a pipeline run for each task in a pipeline.
A task consists of one or more steps that run container images, and each container image performs a specific piece of build work. A task run starts the steps in a task in the specified order, until all steps run successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each task in a pipeline.
The following example shows a task run that runs the apply-manifests task with the relevant input parameters:
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: apply-manifests-taskrun
spec:
  serviceAccountName: pipeline
  taskRef:
    kind: Task
    name: apply-manifests
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: source-pvc
- apiVersion: The task run API version, v1.
- kind: Specifies the type of Kubernetes object. In this example, TaskRun.
- metadata.name: The unique name that identifies this task run.
- spec: The definition of the task run, consisting of the task and the required workspace.
- spec.taskRef: The name of the task reference used for this task run. This task run runs the apply-manifests task.
- spec.workspaces: The workspace used by the task run.
2.2.5. Pipelines
You can use a Pipeline resource to arrange a collection of tasks in a specific order of execution. By defining a pipeline, you construct complex workflows that automate the build, deployment, and delivery of your applications.
A Pipeline is a collection of Task resources arranged in a specific order of execution. You can run a pipeline to construct complex workflows that automate the build, deployment, and delivery of applications, by using one or more tasks to define a CI/CD workflow for your application.
A Pipeline resource definition consists of several fields or attributes, which together enable the pipeline to accomplish a specific goal. Each Pipeline resource definition must contain at least one Task resource, which obtains specific inputs and produces specific outputs. The pipeline definition can also optionally include several Conditions, Workspaces, Parameters, or Resources, depending on the application requirements.
The following example shows the build-and-deploy pipeline, which builds an application image from a Git repository by using the buildah task provided in the openshift-pipelines namespace:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
  - name: shared-workspace
  params:
  - name: deployment-name
    type: string
    description: name of the deployment to be patched
  - name: git-url
    type: string
    description: url of the git repo for the code of deployment
  - name: git-revision
    type: string
    description: revision to be used from repo of the code for deployment
    default: "pipelines-1.21"
  - name: IMAGE
    type: string
    description: image to be built from the code
  tasks:
  - name: fetch-repository
    taskRef:
      resolver: cluster
      params:
      - name: kind
        value: task
      - name: name
        value: git-clone
      - name: namespace
        value: openshift-pipelines
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: URL
      value: $(params.git-url)
    - name: SUBDIRECTORY
      value: ""
    - name: DELETE_EXISTING
      value: "true"
    - name: REVISION
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      resolver: cluster
      params:
      - name: kind
        value: task
      - name: name
        value: buildah
      - name: namespace
        value: openshift-pipelines
    workspaces:
    - name: source
      workspace: shared-workspace
    params:
    - name: TLSVERIFY
      value: "false"
    - name: IMAGE
      value: $(params.IMAGE)
    runAfter:
    - fetch-repository
  - name: apply-manifests
    taskRef:
      name: apply-manifests
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:
    - build-image
  - name: update-deployment
    taskRef:
      name: update-deployment
    workspaces:
    - name: source
      workspace: shared-workspace
    params:
    - name: deployment
      value: $(params.deployment-name)
    - name: IMAGE
      value: $(params.IMAGE)
    runAfter:
    - apply-manifests
- apiVersion: The pipeline API version, v1.
- kind: Specifies the type of Kubernetes object. In this example, Pipeline.
- metadata.name: The unique name of this pipeline.
- spec: Specifies the definition and structure of the pipeline.
- spec.workspaces: Workspaces used across all the tasks in the pipeline.
- spec.params: Parameters used across all the tasks in the pipeline.
- tasks[0].name: The first task in the list of tasks used in the pipeline.
- tasks[1].name: Task build-image, which uses the buildah task provided in the openshift-pipelines namespace to build application images from a given Git repository.
- tasks[2].name: Task apply-manifests, which uses a user-defined task with the same name.
- tasks[2].runAfter: Specifies the sequence in which tasks run in a pipeline. In this example, the apply-manifests task runs only after the build-image task finishes.
The Red Hat OpenShift Pipelines Operator installs the Buildah task in the openshift-pipelines namespace and creates the pipeline service account with enough permissions to build and push an image. The Buildah task can fail when associated with a different service account with insufficient permissions.
2.2.6. Pipeline run
You can use a PipelineRun resource to instantiate and run a pipeline with specific inputs, outputs, and credentials. This resource binds a pipeline to a workspace and parameter values, enabling you to run your CI/CD workflow for a specific scenario.
A PipelineRun is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow.
A pipeline run is the running instance of a pipeline. It also creates a task run for each task in the pipeline run.
The pipeline executes the tasks sequentially until they are complete or a task fails. The status field tracks the progress of each task run and stores it for monitoring and auditing purposes.
The following example runs the build-and-deploy pipeline with relevant resources and parameters:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-deploy-api-pipelinerun
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  - name: deployment-name
    value: vote-api
  - name: git-url
    value: https://github.com/openshift-pipelines/vote-api.git
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
- apiVersion: The pipeline run API version, v1.
- kind: The type of Kubernetes object. In this example, PipelineRun.
- metadata.name: The unique name that identifies this pipeline run.
- spec.pipelineRef.name: The name of the pipeline to run. In this example, build-and-deploy.
- spec.params: The list of parameters required to run the pipeline.
- spec.workspaces: The workspace used by the pipeline run.
2.2.7. Pod templates
You can use a pod template in a PipelineRun or TaskRun custom resource (CR) to configure the pods that run your tasks. This allows you to set specific parameters, such as security contexts or user IDs, for every pod created during the pipeline or task run.
Optionally, you can define a pod template in a PipelineRun or TaskRun custom resource (CR). You can use any parameters available for a Pod CR in the pod template. When creating pods for running the pipeline or task, OpenShift Pipelines sets these parameters for every pod.
For example, you can use a pod template to make the pod run as a user and not as root.
For a pipeline run, you can define a pod template in the taskRunTemplate.podTemplate spec, as in the following example:
Example PipelineRun CR with a pod template
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  taskRunTemplate:
    podTemplate:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
In the earlier API version v1beta1, the pod template for a PipelineRun CR defined podTemplate directly in the spec: section. This format is not supported in the v1 API.
For a task run, you can define a pod template in the podTemplate spec, as in the following example:
Example TaskRun CR with a pod template
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: mytaskrun
  namespace: default
spec:
  taskRef:
    name: mytask
  podTemplate:
    schedulerName: volcano
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
2.2.8. Workspaces
You can use workspaces to declare shared storage volumes that tasks in a pipeline need for input or output at runtime. By separating volume declaration from runtime storage, workspaces allow you to specify the filesystem location independently, making your tasks reusable and flexible across different environments.
We recommend that you use workspaces instead of the PipelineResource CRs in Red Hat OpenShift Pipelines, as PipelineResource CRs are difficult to debug, limited in scope, and make tasks less reusable.
Instead of specifying the actual location of the volumes, workspaces allow you to declare the filesystem or parts of the filesystem that you need at runtime. A task or pipeline declares the workspace, and you provide the specific location details of the volume, which is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment.
With workspaces, you can:
- Store task inputs and outputs
- Share data among tasks
- Use it as a mount point for credentials held in secrets
- Use it as a mount point for configurations held in config maps
- Use it as a mount point for common tools shared by an organization
- Create a cache of build artifacts that accelerate jobs
You can specify workspaces in the TaskRun or PipelineRun using:
- A read-only config map or secret
- An existing persistent volume claim shared with other tasks
- A persistent volume claim from a volume claim template
- An emptyDir volume that the system discards when the task run completes
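As an illustration of the last option, a minimal sketch of a task run that binds a workspace to an emptyDir volume might look like the following. The task name scratch-task and the workspace name scratch are hypothetical placeholders, not names defined elsewhere in this document:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: scratch-taskrun
spec:
  taskRef:
    # Hypothetical task that declares a workspace named "scratch"
    name: scratch-task
  workspaces:
  - name: scratch
    # Temporary storage; the system discards it when the task run completes
    emptyDir: {}
```

Because the data does not outlive the task run, an emptyDir binding suits scratch space within a single task rather than sharing data between tasks.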
The following example shows a code snippet of the build-and-deploy pipeline, which declares a shared-workspace workspace for the build-image and apply-manifests tasks you define in the pipeline.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
  - name: shared-workspace
  params:
  ...
  tasks:
  - name: build-image
    taskRef:
      resolver: cluster
      params:
      - name: kind
        value: task
      - name: name
        value: buildah
      - name: namespace
        value: openshift-pipelines
    workspaces:
    - name: source
      workspace: shared-workspace
    params:
    - name: TLSVERIFY
      value: "false"
    - name: IMAGE
      value: $(params.IMAGE)
    runAfter:
    - fetch-repository
  - name: apply-manifests
    taskRef:
      name: apply-manifests
    workspaces:
    - name: source
      workspace: shared-workspace
    runAfter:
    - build-image
...
- spec.workspaces: The list of workspaces shared between the tasks defined in the pipeline. A pipeline can define as many workspaces as you need. In this example, the pipeline declares only one workspace named shared-workspace.
- tasks: The definition of the tasks used in the pipeline. This snippet defines two tasks, build-image and apply-manifests, which share a common workspace.
- tasks.workspaces: The list of workspaces used in the build-image task. A task definition can include as many workspaces as it requires. However, we recommend that a task uses at most one writable workspace.
- tasks.workspaces[0].name: The name that uniquely identifies the workspace used in the task. This task uses one workspace named source.
- tasks.workspaces[0].workspace: The name of the pipeline workspace used by the task. The workspace source in turn uses the pipeline workspace named shared-workspace.
- tasks[1].workspaces: The list of workspaces used in the apply-manifests task. This task shares the source workspace with the build-image task.
Workspaces help tasks share data, and let you specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or give a volume claim template that creates a persistent volume claim for you.
The following code snippet of the build-deploy-api-pipelinerun pipeline run uses a volume claim template to create a persistent volume claim for defining the storage volume for the shared-workspace workspace used in the build-and-deploy pipeline.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-deploy-api-pipelinerun
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  ...
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
- workspaces: Specifies the list of pipeline workspaces for which the pipeline run provides volume bindings.
- workspaces[0].name: The name of the pipeline workspace that receives the provided volume.
- workspaces[0].volumeClaimTemplate: Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace.
2.2.9. Step actions
You can use a StepAction custom resource (CR) to define a reusable action that a step performs. By referencing a StepAction object from a step, you can share and reuse action definitions across many tasks or reference actions from external sources.
A step is a part of a task. If you define a step in a task, you cannot reference this step from another task.
The StepAction CR contains the action that a step performs. You can reference a StepAction object from a step to create a step that performs the action. You can also use resolvers to reference a StepAction definition that is available from an external source.
The following example shows a StepAction CR named apply-manifests-action. This step action applies manifests from a source tree to your OpenShift Container Platform environment:
apiVersion: tekton.dev/v1
kind: StepAction
metadata:
  name: apply-manifests-action
spec:
  params:
  - name: working_dir
    description: The working directory where the source is located
    type: string
    default: "/workspace/source"
  - name: manifest_dir
    description: The directory in source that contains yaml manifests
    default: "k8s"
  results:
  - name: output
    description: The output of the oc apply command
  image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
  env:
  - name: MANIFEST_DIR
    value: $(params.manifest_dir)
  workingDir: $(params.working_dir)
  script: |
    #!/usr/bin/env bash
    oc apply -f "$MANIFEST_DIR" | tee $(results.output)
- spec.params[n].type: The type specification for a parameter is optional.
The StepAction CR does not include definitions of workspaces. Instead, the step action expects that the task that includes the action also provides the mounted source tree, typically using a workspace.
A StepAction object can define parameters and results. When you reference this object, you must specify the values for the parameters of the StepAction object in the definition of the step. The results of the StepAction object automatically become the results of the step.
To avoid malicious attacks that use the shell, the StepAction CR does not support using parameter values in a script value. Instead, you must use the env: section to define environment variables that contain the parameter values.
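To make the env: pattern concrete, the following is a minimal hedged sketch, not part of the apply-manifests example. The step action name echo-greeting and the parameter greeting are hypothetical, and the base image is only one reasonable choice:

```yaml
apiVersion: tekton.dev/v1
kind: StepAction
metadata:
  # Hypothetical example name
  name: echo-greeting
spec:
  params:
  - name: greeting
    type: string
  image: registry.access.redhat.com/ubi9/ubi-minimal
  env:
  # The parameter value is passed through an environment variable
  - name: GREETING
    value: $(params.greeting)
  script: |
    #!/usr/bin/env bash
    # Referencing "$GREETING" instead of $(params.greeting) inside the script
    # keeps the parameter value out of shell interpolation
    echo "$GREETING"
```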
The following example task includes a step that references the apply-manifests-action step action, provides the necessary parameters, and uses the result:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: apply-manifests-with-action
spec:
  workspaces:
  - name: source
  params:
  - name: manifest_dir
    description: The directory in source that contains yaml manifests
    type: string
    default: "k8s"
  steps:
  - name: apply
    ref:
      name: apply-manifests-action
    params:
    - name: working_dir
      value: "/workspace/source"
    - name: manifest_dir
      value: $(params.manifest_dir)
  - name: display_result
    script: 'echo $(step.apply.results.output)'
2.2.10. Triggers
You can use Triggers in conjunction with pipelines to create a comprehensive CI/CD system driven by Kubernetes resources. Triggers capture external events, such as Git pull requests, and process them to extract information, enabling you to automatically instantiate pipelines and deploy resources based on event data.
For example, you define a CI/CD workflow by using Red Hat OpenShift Pipelines for your application. The pipeline must start whenever there is a new change in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes.
Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system:
The TriggerBinding resource extracts the fields from an event payload and stores them as parameters.

The following example shows a code snippet of the TriggerBinding resource, which extracts the Git repository information from the received event payload:

apiVersion: triggers.tekton.dev/v1
kind: TriggerBinding
metadata:
  name: vote-app
spec:
  params:
  - name: git-repo-url
    value: $(body.repository.url)
  - name: git-repo-name
    value: $(body.repository.name)
  - name: git-revision
    value: $(body.head_commit.id)

- apiVersion: The API version of the TriggerBinding resource. In this example, v1.
- kind: Specifies the type of Kubernetes object. In this example, TriggerBinding.
- metadata.name: A unique name to identify the TriggerBinding resource.
- spec.params: The list of parameters extracted from the received event payload and passed to the TriggerTemplate resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload.
The `TriggerTemplate` resource acts as a standard for creating resources. It specifies how the parameterized data from the `TriggerBinding` resource defines the new resources. A trigger template receives input from the trigger binding, and then performs a series of actions that result in the creation of new pipeline resources and the initiation of a new pipeline run.

The following example shows a code snippet of a `TriggerTemplate` resource, which creates a pipeline run by using the Git repository information received from the `TriggerBinding` resource you just created:

```yaml
apiVersion: triggers.tekton.dev/v1
kind: TriggerTemplate
metadata:
  name: vote-app
spec:
  params:
    - name: git-repo-url
      description: The git repository url
    - name: git-revision
      description: The git revision
      default: pipelines-1.21
    - name: git-repo-name
      description: The name of the deployment to be created / patched
  resourcetemplates:
    - apiVersion: tekton.dev/v1
      kind: PipelineRun
      metadata:
        name: build-deploy-$(tt.params.git-repo-name)-$(uid)
      spec:
        taskRunTemplate:
          serviceAccountName: pipeline
        pipelineRef:
          name: build-and-deploy
        params:
          - name: deployment-name
            value: $(tt.params.git-repo-name)
          - name: git-url
            value: $(tt.params.git-repo-url)
          - name: git-revision
            value: $(tt.params.git-revision)
          - name: IMAGE
            value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(tt.params.git-repo-name)
        workspaces:
          - name: shared-workspace
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 500Mi
```

- `apiVersion`: The API version of the `TriggerTemplate` resource. In this example, `v1`.
- `kind`: Specifies the type of Kubernetes object. In this example, `TriggerTemplate`.
- `metadata.name`: A unique name to identify the `TriggerTemplate` resource.
- `spec.params`: The parameters supplied by the `TriggerBinding` resource.
- `spec.resourcetemplates`: The list of templates that specify how to create resources by using the parameters received through the `TriggerBinding` or `EventListener` resources.
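To make the substitution concrete, suppose a push event yields `git-repo-name=vote-app`, `git-repo-url=https://github.com/example/vote-app`, and a commit ID for `git-revision`; all of these values are illustrative. The resource template would then render a `PipelineRun` similar to the following:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-deploy-vote-app-xq4zr   # the $(uid) suffix is generated for each run
spec:
  taskRunTemplate:
    serviceAccountName: pipeline
  pipelineRef:
    name: build-and-deploy
  params:
    - name: deployment-name
      value: vote-app
    - name: git-url
      value: https://github.com/example/vote-app
    - name: git-revision
      value: 6113728f27ae07ced9c81fc0b5b4d2d57968e5b9
    - name: IMAGE
      value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-app
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 500Mi
```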
The `Trigger` resource combines the `TriggerBinding` and `TriggerTemplate` resources, and optionally, the `interceptors` event processor.

Interceptors process all the events for a specific platform and run before the `TriggerBinding` resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use a secret for event verification. After the event data passes through an interceptor, it goes to the trigger before the payload data is passed to the trigger binding. You can also use an interceptor to change the behavior of the associated trigger referenced in the `EventListener` specification.

The following example shows a code snippet of a `Trigger` resource, named `vote-trigger`, that connects the `TriggerBinding` and `TriggerTemplate` resources, and the `interceptors` event processor:

```yaml
apiVersion: triggers.tekton.dev/v1
kind: Trigger
metadata:
  name: vote-trigger
spec:
  taskRunTemplate:
    serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "github"
      params:
        - name: "secretRef"
          value:
            secretName: github-secret
            secretKey: secretToken
        - name: "eventTypes"
          value: ["push"]
  bindings:
    - ref: vote-app
  template:
    ref: vote-app
---
apiVersion: v1
kind: Secret
metadata:
  name: github-secret
type: Opaque
stringData:
  secretToken: "1234567"
```

- `apiVersion`: The API version of the `Trigger` resource. In this example, `v1`.
- `kind`: Specifies the type of Kubernetes object. In this example, `Trigger`.
- `metadata.name`: A unique name to identify the `Trigger` resource.
- `spec.taskRunTemplate.serviceAccountName`: The name of the service account to use.
- `interceptors.ref.name`: The name of the interceptor to reference. In this example, `github`.
- `interceptors.params`: The required parameters to specify.
- `bindings.ref`: The name of the `TriggerBinding` resource to connect to the `TriggerTemplate` resource.
- `template.ref`: The name of the `TriggerTemplate` resource to connect to the `TriggerBinding` resource.
- `kind: Secret`: The secret to use to verify events.
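The `github` interceptor in this example verifies each delivery against the `secretToken` value stored in `github-secret`. GitHub signs the raw request body with HMAC-SHA256 using the shared secret and sends the result in the `X-Hub-Signature-256` header; the interceptor recomputes the digest and rejects deliveries that do not match. The following sketch shows that computation; the helper name and sample payload are illustrative, not part of Tekton:

```python
import hashlib
import hmac
import json

# Hypothetical helper mirroring how GitHub signs webhook deliveries:
# an HMAC-SHA256 over the raw request body, keyed with the webhook
# secret ("1234567" in the example Secret above), formatted as the
# X-Hub-Signature-256 header value.
def github_signature(secret: str, body: bytes) -> str:
    digest = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return "sha256=" + digest

body = json.dumps({"repository": {"name": "vote-app"}}).encode("utf-8")
signature = github_signature("1234567", body)
print(signature)  # prints "sha256=" followed by 64 hex digits

# Verification compares digests in constant time to avoid timing leaks.
assert hmac.compare_digest(signature, github_signature("1234567", body))
```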
The `EventListener` resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from each `TriggerBinding` resource, and then processes this data to create Kubernetes resources as specified by the corresponding `TriggerTemplate` resource. The `EventListener` resource also performs lightweight event processing or basic filtering on the payload by using event `interceptors`, which identify the type of payload and optionally change it. Currently, pipeline triggers support five types of interceptors:

- Webhook interceptors
- GitHub interceptors
- GitLab interceptors
- Bitbucket interceptors
- Common Expression Language (CEL) interceptors

The following example shows an `EventListener` resource, which references the `Trigger` resource named `vote-trigger`:

```yaml
apiVersion: triggers.tekton.dev/v1
kind: EventListener
metadata:
  name: vote-app
spec:
  taskRunTemplate:
    serviceAccountName: pipeline
  triggers:
    - triggerRef: vote-trigger
```

- `apiVersion`: The API version of the `EventListener` resource. In this example, `v1`.
- `kind`: Specifies the type of Kubernetes object. In this example, `EventListener`.
- `metadata.name`: A unique name to identify the `EventListener` resource.
- `spec.taskRunTemplate.serviceAccountName`: The name of the service account to use.
- `spec.triggers.triggerRef`: The name of the `Trigger` resource that the `EventListener` resource references.
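As an illustration of the CEL interceptor type, an entry such as the following could be added to a trigger's `interceptors` list so that only pushes to a particular branch fire the trigger. This is a minimal sketch; the filter expression and branch name are illustrative:

```yaml
interceptors:
  - ref:
      name: "cel"
    params:
      - name: "filter"
        value: "body.ref == 'refs/heads/main'"
```

Events whose payload does not satisfy the `filter` expression stop at the interceptor and never reach the trigger binding.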