Chapter 4. Pipelines
4.1. Red Hat OpenShift Pipelines release notes
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
4.1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
- TP: Technology Preview
- GA: General Availability
| Red Hat OpenShift Pipelines (Operator) | Pipelines | Triggers | CLI | Catalog | Chains | Hub | Pipelines as Code | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|
| 1.10 | 0.44.x | 0.23.x | 0.30.x | NA | 0.15.x (TP) | 1.12.x (TP) | 0.17.x (GA) | 4.10, 4.11, 4.12, 4.13 | GA |
| 1.9 | 0.41.x | 0.22.x | 0.28.x | NA | 0.13.x (TP) | 1.11.x (TP) | 0.15.x (GA) | 4.10, 4.11, 4.12, 4.13 | GA |
| 1.8 | 0.37.x | 0.20.x | 0.24.x | NA | 0.9.0 (TP) | 1.8.x (TP) | 0.10.x (TP) | 4.10, 4.11, 4.12 | GA |
| 1.7 | 0.33.x | 0.19.x | 0.23.x | 0.33 | 0.8.0 (TP) | 1.7.0 (TP) | 0.5.x (TP) | 4.9, 4.10, 4.11 | GA |
| 1.6 | 0.28.x | 0.16.x | 0.21.x | 0.28 | N/A | N/A | N/A | 4.9 | GA |
| 1.5 | 0.24.x | 0.14.x (TP) | 0.19.x | 0.24 | N/A | N/A | N/A | 4.8 | GA |
| 1.4 | 0.22.x | 0.12.x (TP) | 0.17.x | 0.22 | N/A | N/A | N/A | 4.7 | GA |
Additionally, support for running Red Hat OpenShift Pipelines on ARM hardware is in Technology Preview.
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
4.1.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
4.1.3. Release notes for Red Hat OpenShift Pipelines General Availability 1.10
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.1. New features
In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.10.
4.1.3.1.1. Pipelines
- With this update, you can specify environment variables in a `PipelineRun` or `TaskRun` pod template to override or append the variables that are configured in a task or step. Also, you can specify environment variables in a default pod template to use those variables globally for all `PipelineRuns` and `TaskRuns`. This update also adds a new default configuration named `forbidden-envs` to filter environment variables while propagating from pod templates.
- With this update, custom tasks in pipelines are enabled by default.
  Note: To disable this update, set the `enable-custom-tasks` flag to `false` in the `feature-flags` config custom resource (see the sketch after this list).
- This update supports the `v1beta1.CustomRun` API version for custom tasks.
- This update adds support for the `PipelineRun` reconciler to create a custom run. For example, custom `TaskRuns` created from `PipelineRuns` can now use the `v1beta1.CustomRun` API version instead of `v1alpha1.Run`, if the `custom-task-version` feature flag is set to `v1beta1`, instead of the default value `v1alpha1`.
  Note: You need to update the custom task controller to listen for the `*v1beta1.CustomRun` API version instead of `*v1alpha1.Run` in order to respond to `v1beta1.CustomRun` requests.
- This update adds a new `retries` field to the `v1beta1.TaskRun` and `v1.TaskRun` specifications.
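The notes above refer to the `enable-custom-tasks` and `custom-task-version` flags. A minimal sketch of what those settings could look like, assuming a default installation where the `feature-flags` config map lives in the `openshift-pipelines` namespace; on Operator-managed clusters the equivalent flags may need to be set through the `TektonConfig` resource instead:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: openshift-pipelines   # assumption: default installation namespace
data:
  enable-custom-tasks: "false"     # disables custom tasks in pipelines
  custom-task-version: "v1beta1"   # assumption: opts custom runs into the v1beta1.CustomRun API
```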
4.1.3.1.2. Triggers
- With this update, triggers support the creation of `Pipelines`, `Tasks`, `PipelineRuns`, and `TaskRuns` objects of the `v1` API version along with `CustomRun` objects of the `v1beta1` API version.
- With this update, GitHub Interceptor blocks a pull request trigger from being executed unless invoked by an owner or with a configurable comment by an owner (see the sketch after this list).
  Note: To enable or disable this update, set the value of the `githubOwners` parameter to `true` or `false` in the GitHub Interceptor configuration file.
- With this update, GitHub Interceptor has the ability to add a comma-delimited list of all files that have changed for the push and pull request events. The list of changed files is added to the `changed_files` property of the event payload in the top-level extensions field.
- This update changes the `MinVersion` of TLS to `tls.VersionTLS12` so that triggers run on OpenShift Container Platform when the Federal Information Processing Standards (FIPS) mode is enabled.
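A sketch of where the `githubOwners` setting might be wired into a trigger that uses the GitHub interceptor. The `EventListener`, binding, and template names are placeholders, and the exact value format of the `githubOwners` parameter is an assumption based on the description above:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener            # placeholder name
spec:
  triggers:
    - name: github-pull-request
      interceptors:
        - ref:
            name: github           # the core GitHub interceptor
          params:
            - name: githubOwners   # block PR triggers unless invoked or approved by an owner
              value: "true"
      bindings:
        - ref: github-pr-binding   # placeholder TriggerBinding
      template:
        ref: github-pr-template    # placeholder TriggerTemplate
```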
4.1.3.1.3. CLI
-
This update adds support to pass a Container Storage Interface (CSI) file as a workspace at the time of starting a
Task
,ClusterTask
orPipeline
. -
This update adds
v1
API support to all CLI commands associated with task, pipeline, pipeline run, and task run resources. Tekton CLI works with bothv1beta1
andv1
APIs for these resources. -
This update adds support for an object type parameter in the
start
anddescribe
commands.
4.1.3.1.4. Operator
- This update adds a `default-forbidden-env` parameter in optional pipeline properties. The parameter includes forbidden environment variables that should not be propagated if provided through pod templates.
- This update adds support for custom logos in the Tekton Hub UI. To add a custom logo, set the value of the `customLogo` parameter to the base64-encoded URI of the logo in the Tekton Hub CR (see the sketch after this list).
- This update increments the version number of the git-clone task to 0.9.
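A sketch of the custom logo setting described above. The `customLogo` parameter name comes from the text; its placement directly under `spec` in the Tekton Hub CR is an assumption, and the base64 data is a truncated placeholder:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonHub
metadata:
  name: hub
spec:
  customLogo: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA..."  # base64-encoded logo URI (placeholder)
```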
4.1.3.1.5. Tekton Chains
Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- This update adds annotations and labels to the `PipelineRun` and `TaskRun` attestations.
- This update adds a new format named `slsa/v1`, which generates the same provenance as the one generated when requesting in the `in-toto` format (see the sketch after this list).
- With this update, Sigstore features are moved out from the experimental features.
- With this update, the `predicate.materials` function includes image URI and digest information from all steps and sidecars for a `TaskRun` object.
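If you want to try the new `slsa/v1` format, one plausible way to select it is through the `TektonChain` custom resource, reusing the `artifacts.taskrun.format` key that these notes mention later for the `in-toto` format; treat the exact spec layout as an assumption:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonChain
metadata:
  name: chain
spec:
  artifacts.taskrun.format: slsa/v1   # emit SLSA v1 provenance equivalent to the in-toto output
```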
4.1.3.1.6. Tekton Hub
Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- This update supports installing, upgrading, or downgrading Tekton resources of the `v1` API version on the cluster.
- This update supports adding a custom logo in place of the Tekton Hub logo in the UI.
- This update extends the `tkn hub install` command functionality by adding a `--type artifact` flag, which fetches resources from the Artifact Hub and installs them on your cluster.
- This update adds support tier, catalog, and org information as labels to the resources being installed from Artifact Hub to your cluster.
4.1.3.1.7. Pipelines as Code
- This update enhances incoming webhook support. For a GitHub application installed on the OpenShift Container Platform cluster, you do not need to provide the `git_provider` specification for an incoming webhook. Instead, Pipelines as Code detects the secret and uses it for the incoming webhook.
- With this update, you can use the same token to fetch remote tasks from the same host on GitHub with a non-default branch.
- With this update, Pipelines as Code supports Tekton `v1` templates. You can have `v1` and `v1beta1` templates, which Pipelines as Code reads for PR generation. The PR is created as `v1` on the cluster.
- Before this update, the OpenShift console UI would use a hardcoded pipeline run template as a fallback template when a runtime template was not found in the OpenShift namespace. This update in the `pipelines-as-code` config map provides a new default pipeline run template named `pipelines-as-code-template-default` for the console to use.
- With this update, Pipelines as Code supports Tekton Pipelines 0.44.0 minimal status.
- With this update, Pipelines as Code supports the Tekton `v1` API, which means Pipelines as Code is now compatible with Tekton v0.44 and later.
- With this update, you can configure custom console dashboards in addition to configuring a console for OpenShift and Tekton dashboards for k8s.
- With this update, Pipelines as Code detects the installation of a GitHub application initiated using the `tkn pac create repo` command and does not require a GitHub webhook if it was installed globally.
- Before this update, if there was an error on a `PipelineRun` execution and not on the tasks attached to the `PipelineRun`, Pipelines as Code would not report the failure properly. With this update, Pipelines as Code reports the error properly on the GitHub checks when a `PipelineRun` could not be created.
- With this update, Pipelines as Code includes a `target_namespace` variable, which expands to the currently running namespace where the `PipelineRun` is executed (see the sketch after this list).
- With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application.
- With this update, Pipelines as Code does not report errors when the repository CR is not found.
- With this update, Pipelines as Code reports an error if multiple pipeline runs with the same name are found.
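A minimal sketch of a Pipelines as Code pipeline run that uses the `target_namespace` variable mentioned above. The annotations, task, and image are illustrative placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"         # assumption: standard PaC event annotation
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: show-namespace
        taskSpec:
          steps:
            - name: echo-namespace
              image: registry.access.redhat.com/ubi9/ubi-minimal  # illustrative image
              script: |
                echo "Running in {{ target_namespace }}"          # expanded by Pipelines as Code at run time
```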
4.1.3.2. Breaking changes
- With this update, the prior version of the `tkn` command is not compatible with Red Hat OpenShift Pipelines 1.10.
- This update removes support for `Cluster` and `CloudEvent` pipeline resources from the Tekton CLI. You cannot create pipeline resources by using the `tkn pipelineresource create` command. Also, pipeline resources are no longer supported in the `start` command of a task, cluster task, or pipeline.
- This update removes `tekton` as a provenance format from Tekton Chains.
4.1.3.3. Deprecated and removed features
- In Red Hat OpenShift Pipelines 1.10, the `ClusterTask` commands are now deprecated and are planned to be removed in a future release. The `tkn task create` command is also deprecated with this update.
- In Red Hat OpenShift Pipelines 1.10, the flags `-i` and `-o` that were used with the `tkn task start` command are now deprecated because the `v1` API does not support pipeline resources.
- In Red Hat OpenShift Pipelines 1.10, the flag `-r` that was used with the `tkn pipeline start` command is deprecated because the `v1` API does not support pipeline resources.
- The Red Hat OpenShift Pipelines 1.10 update sets the `openshiftDefaultEmbeddedStatus` parameter to `both` with `full` and `minimal` embedded status. The flag to change the default embedded status is also deprecated and will be removed. In addition, the pipeline default embedded status will be changed to `minimal` in a future release.
4.1.3.4. Known issues
This update includes the following backward incompatible changes:
- Removal of the `PipelineResources` cluster
- Removal of the `PipelineResources` cloud event
- If the pipelines metrics feature does not work after a cluster upgrade, run the following command as a workaround:

  ```
  $ oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print $1}' | xargs oc delete tektoninstallersets
  ```
- With this update, usage of external databases, such as Crunchy PostgreSQL, is not supported on IBM Power, IBM Z, and IBM LinuxONE. Instead, use the default Tekton Hub database.
4.1.3.5. Fixed issues
- Before this update, the `opc pac` command generated a runtime error instead of showing any help. This update fixes the `opc pac` command to show the help message.
- Before this update, running the `tkn pac create repo` command needed the webhook details for creating a repository. With this update, the `tkn-pac create repo` command does not configure a webhook when your GitHub application is installed.
- Before this update, Pipelines as Code would not report a pipeline run creation error when Tekton Pipelines had issues creating the `PipelineRun` resource. For example, a non-existing task in a pipeline run would show no status. With this update, Pipelines as Code shows the proper error message coming from Tekton Pipelines along with the task that is missing.
- This update fixes UI page redirection after a successful authentication. Now, you are redirected to the same page where you had attempted to log in to Tekton Hub.
- This update fixes the `list` command with the `--all-namespaces` and `--output=yaml` flags for a cluster task, an individual task, and a pipeline.
- This update removes the forward slash at the end of the `repo.spec.url` URL so that it matches the URL coming from GitHub.
- Before this update, the `marshalJSON` function would not marshal a list of objects. With this update, the `marshalJSON` function marshals the list of objects.
- With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application.
- This update fixes the GitHub collaborator check when your repository has more than 100 users.
- With this update, the `sign` and `verify` commands for a task or pipeline now work without a Kubernetes configuration file.
- With this update, Tekton Operator cleans leftover pruner cron jobs if pruner has been skipped on a namespace.
- Before this update, the API `ConfigMap` object would not be updated with a user-configured value for a catalog refresh interval. This update fixes the `CATALOG_REFRESH_INTERVAL` API in the Tekton Hub CR.
- This update fixes reconciling of `PipelineRunStatus` when changing the `EmbeddedStatus` feature flag. This update resets the following parameters:
  - The `status.runs` and `status.taskruns` parameters to `nil` with `minimal EmbeddedStatus`
  - The `status.childReferences` parameter to `nil` with `full EmbeddedStatus`
- This update adds a conversion configuration to the `ResolutionRequest` CRD. This update properly configures conversion from the `v1alpha1.ResolutionRequest` request to the `v1beta1.ResolutionRequest` request.
- This update checks for duplicate workspaces associated with a pipeline task.
- This update fixes the default value for enabling resolvers in the code.
- This update fixes `TaskRef` and `PipelineRef` names conversion by using a resolver.
4.1.3.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.6.1. Fixed issues for Pipelines as Code
- Before this update, if the source branch information coming from the payload included `refs/heads/` but the user-configured target branch only included the branch name, `main`, in a CEL expression, the push request would fail. With this update, Pipelines as Code passes the push request and triggers a pipeline if either the base branch or target branch has `refs/heads/` in the payload.
- Before this update, when a `PipelineRun` object could not be created, the error received from the Tekton controller was not reported to the user. With this update, Pipelines as Code reports the error messages to the GitHub interface so that users can troubleshoot the errors. Pipelines as Code also reports the errors that occurred during pipeline execution.
- With this update, Pipelines as Code does not echo a secret to the GitHub checks interface when it failed to create the secret on the OpenShift Container Platform cluster because of an infrastructure issue.
- This update removes the deprecated APIs that are no longer in use from Red Hat OpenShift Pipelines.
4.1.3.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.7.1. Fixed issues
Before this update, an issue in the Tekton Operator prevented the user from setting the value of the `enable-api-fields` flag to `beta`. This update fixes the issue. Now, you can set the value of the `enable-api-fields` flag to `beta` in the `TektonConfig` CR.
4.1.3.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.3 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.8.1. Fixed issues
Before this update, the Tekton Operator did not expose the performance configuration fields for any customizations. With this update, as a cluster administrator, you can customize the following performance configuration fields in the `TektonConfig` CR based on your needs (see the sketch after this list):

- `disable-ha`
- `buckets`
- `kube-api-qps`
- `kube-api-burst`
- `threads-per-controller`
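A sketch of how these fields could be set, assuming they are grouped under a `performance` block in the `pipeline` section of the `TektonConfig` CR; verify the exact nesting and defaults against your installed Operator version:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:                 # assumption: performance fields grouped here
      disable-ha: false          # keep high availability enabled
      buckets: 1                 # number of reconciler buckets
      kube-api-qps: 5            # client QPS against the Kubernetes API
      kube-api-burst: 10         # client burst against the Kubernetes API
      threads-per-controller: 2  # worker threads per controller
```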
4.1.3.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.4
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.4 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.9.1. Fixed issues
- This update fixes the bundle resolver conversion issue for the `PipelineRef` field in a pipeline run. Now, the conversion feature sets the value of the `kind` field to `Pipeline` after conversion.
- Before this update, the `pipelinerun.timeouts` field was reset to the `timeouts.pipeline` value, ignoring the `timeouts.tasks` and `timeouts.finally` values. This update fixes the issue and sets the correct default timeout value for a `PipelineRun` resource.
- Before this update, the controller logs contained unnecessary data. This update fixes the issue.
4.1.3.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.5
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.5 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.
Red Hat OpenShift Pipelines 1.10.5 is only available in the `pipelines-1.10` channel on OpenShift Container Platform 4.10, 4.11, 4.12, and 4.13. It is not available in the `latest` channel for any OpenShift Container Platform version.
4.1.3.10.1. Fixed issues
- Before this update, huge pipeline runs were not getting listed or deleted using the `oc` and `tkn` commands. This update mitigates this issue by compressing the huge annotations that were causing this problem. Remember that if the pipeline runs are still too huge after compression, then the same error still recurs.
- Before this update, only the pod template specified in the `pipelineRun.spec.taskRunSpecs[].podTemplate` object would be considered for a pipeline run. With this update, the pod template specified in the `pipelineRun.spec.podTemplate` object is also considered and merged with the template specified in the `pipelineRun.spec.taskRunSpecs[].podTemplate` object (see the sketch after this list).
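A sketch showing the two pod template locations that are now merged. The pipeline and task names are placeholders; task-specific settings in `taskRunSpecs[].podTemplate` are layered on top of the run-wide template:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: merged-pod-template-run
spec:
  pipelineRef:
    name: build-and-deploy        # placeholder pipeline name
  podTemplate:                    # run-wide pod settings, merged into every task run
    nodeSelector:
      kubernetes.io/os: linux
  taskRunSpecs:
    - pipelineTaskName: build     # placeholder task name from the referenced pipeline
      podTemplate:                # task-specific settings take precedence where they overlap
        nodeSelector:
          disktype: ssd
```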
4.1.4. Release notes for Red Hat OpenShift Pipelines General Availability 1.9
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.1. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.9.
4.1.4.1.1. Pipelines
- With this update, you can specify pipeline parameters and results in arrays and object dictionary forms.
- This update provides support for Container Storage Interface (CSI) and projected volumes for your workspace.
- With this update, you can specify the `stdoutConfig` and `stderrConfig` parameters when defining pipeline steps. Defining these parameters helps to capture standard output and standard error, associated with steps, to local files (see the sketch after this list).
- With this update, you can add variables in the `steps[].onError` event handler, for example, `$(params.CONTINUE)`.
- With this update, you can use the output from the `finally` task in the `PipelineResults` definition. For example, `$(finally.<pipelinetask-name>.result.<result-name>)`, where `<pipelinetask-name>` denotes the pipeline task name and `<result-name>` denotes the result name.
- This update supports task-level resource requirements for a task run.
- With this update, you do not need to recreate parameters that are shared, based on their names, between a pipeline and the defined tasks. This update is part of a developer preview feature.
- This update adds support for remote resolution, such as built-in git, cluster, bundle, and hub resolvers.
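A minimal sketch of the `stdoutConfig` and `stderrConfig` parameters mentioned above; the task name, image, and file paths are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: capture-step-output
spec:
  steps:
    - name: build
      image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative image
      script: |
        echo "build output"
      stdoutConfig:
        path: /workspace/build-stdout.log   # file that receives the step's standard output
      stderrConfig:
        path: /workspace/build-stderr.log   # file that receives the step's standard error
```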
4.1.4.1.2. Triggers
- This update adds the `Interceptor` CRD to define `NamespacedInterceptor`. You can use `NamespacedInterceptor` in the `kind` section of the interceptors reference in triggers or in the `EventListener` specification.
- This update enables `CloudEvents`.
- With this update, you can configure the webhook port number when defining a trigger.
- This update supports using trigger `eventID` as input to `TriggerBinding`.
- This update supports validation and rotation of certificates for the `ClusterInterceptor` server.
  - Triggers perform certificate validation for core interceptors and rotate a new certificate to `ClusterInterceptor` when its certificate expires.
4.1.4.1.3. CLI
- This update supports showing annotations in the `describe` command.
- This update supports showing pipeline, tasks, and timeout in the `pr describe` command.
- This update adds flags to provide pipeline, tasks, and timeout in the `pipeline start` command.
- This update supports showing the presence of a workspace, optional or mandatory, in the `describe` command of a task and pipeline.
- This update adds the `timestamps` flag to show logs with a timestamp.
- This update adds a new flag `--ignore-running-pipelinerun`, which ignores the deletion of a `TaskRun` associated with a `PipelineRun`.
- This update adds support for experimental commands. This update also adds the experimental subcommands `sign` and `verify` to the `tkn` CLI tool.
- This update makes the Z shell (Zsh) completion feature usable without generating any files.
- This update introduces a new CLI tool called `opc`. It is anticipated that an upcoming release will replace the `tkn` CLI tool with `opc`.
  Important:
  - The new CLI tool `opc` is a Technology Preview feature.
  - `opc` will be a replacement for `tkn` with additional Red Hat OpenShift Pipelines specific features, which do not necessarily fit in `tkn`.
4.1.4.1.4. Operator
- With this update, Pipelines as Code is installed by default. You can disable Pipelines as Code by using the `-p` flag:

  ```
  $ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}'
  ```

- With this update, you can also modify Pipelines as Code configurations in the `TektonConfig` CRD.
- With this update, if you disable the developer perspective, the Operator does not install developer console related custom resources.
- This update includes `ClusterTriggerBinding` support for Bitbucket Server and Bitbucket Cloud and helps you to reuse a `TriggerBinding` across your entire cluster.
4.1.4.1.5. Resolvers
Resolvers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
With this update, you can configure pipeline resolvers in the `TektonConfig` CRD. You can enable or disable these pipeline resolvers: `enable-bundles-resolver`, `enable-cluster-resolver`, `enable-git-resolver`, and `enable-hub-resolver`.

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true
...
```

You can also provide resolver specific configurations in `TektonConfig`. For example, you can define the following fields in the `map[string]string` format to set configurations for individual resolvers:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    bundles-resolver-config:
      default-service-account: pipelines
    cluster-resolver-config:
      default-namespace: test
    git-resolver-config:
      server-url: localhost.com
    hub-resolver-config:
      default-tekton-hub-catalog: tekton
...
```
4.1.4.1.6. Tekton Chains
Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- Before this update, only Open Container Initiative (OCI) images were supported as outputs of `TaskRun` in the in-toto provenance agent. This update adds in-toto provenance metadata as outputs with these suffixes, `ARTIFACT_URI` and `ARTIFACT_DIGEST`.
- Before this update, only `TaskRun` attestations were supported. This update adds support for `PipelineRun` attestations as well.
- This update adds support for Tekton Chains to get the `imgPullSecret` parameter from the pod template. This update helps you to configure repository authentication based on each pipeline run or task run without modifying the service account (see the sketch after this list).
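A sketch of supplying registry credentials through the pod template so that authentication can be configured per run, as described in the last item above. The task and secret names are placeholders, and `imagePullSecrets` is the standard pod template field assumed to carry the pull secret:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-and-sign
spec:
  taskRef:
    name: buildah                      # placeholder task name
  podTemplate:
    imagePullSecrets:
      - name: registry-credentials     # placeholder secret with registry credentials
```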
4.1.4.1.7. Tekton Hub
Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
With this update, as an administrator, you can use an external database, such as Crunchy PostgreSQL with Tekton Hub, instead of using the default Tekton Hub database. This update helps you to perform the following actions:
- Specify the coordinates of an external database to be used with Tekton Hub
- Disable the default Tekton Hub database deployed by the Operator
This update removes the dependency of `config.yaml` from external Git repositories and moves the complete configuration data into the API `ConfigMap`. This update helps an administrator to perform the following actions:

- Add the configuration data, such as categories, catalogs, scopes, and defaultScopes, in the Tekton Hub custom resource (see the sketch after this list).
- Modify Tekton Hub configuration data on the cluster. All modifications are preserved upon Operator upgrades.
- Update the list of catalogs for Tekton Hub.
- Change the categories for Tekton Hub.

Note: If you do not add any configuration data, you can use the default data in the API `ConfigMap` for Tekton Hub configurations.
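A sketch of configuration data carried in the Tekton Hub custom resource, using the categories, catalogs, scopes, and default scopes terms from the list above; the exact field names and nesting are assumptions to be checked against the installed CRD schema:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonHub
metadata:
  name: hub
spec:
  categories:                      # assumption: category names listed directly
    - Build Tools
    - CLI
  catalogs:                        # assumption: catalog coordinates defined per entry
    - name: tekton
      org: tektoncd
      type: community
      url: https://github.com/tektoncd/catalog
      revision: main
  scopes:                          # assumption: scope-to-user mapping
    - name: agent:create
      users: [hub-admin]
  default:
    scopes:                        # assumption: default scopes applied to new users
      - rating:read
      - rating:write
```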
4.1.4.1.8. Pipelines as Code
- This update adds support for a concurrency limit in the `Repository` CRD to define the maximum number of `PipelineRuns` running for a repository at a time. The `PipelineRuns` from a pull request or a push event are queued in alphabetical order (see the sketch after this list).
- This update adds a new command `tkn pac logs` for showing the logs of the latest pipeline run for a repository.
- This update supports advanced event matching on file path for push and pull requests to GitHub and GitLab. For example, you can use the Common Expression Language (CEL) to run a pipeline only if a path has changed for any markdown file in the `docs` directory.

  ```yaml
  ...
  annotations:
    pipelinesascode.tekton.dev/on-cel-expression: |
      event == "pull_request" && "docs/*.md".pathChanged()
  ```

- With this update, you can reference a remote pipeline in the `pipelineRef:` object using annotations.
- With this update, you can auto-configure new GitHub repositories with Pipelines as Code, which sets up a namespace and creates a `Repository` CRD for your GitHub repository.
- With this update, Pipelines as Code generates metrics for `PipelineRuns` with provider information.
- This update provides the following enhancements for the `tkn-pac` plugin:
  - Detects running pipelines correctly
  - Fixes showing duration when there is no failure completion time
  - Shows an error snippet and highlights the error regular expression pattern in the `tkn-pac describe` command
  - Adds the `use-real-time` switch to the `tkn-pac ls` and `tkn-pac describe` commands
  - Imports the `tkn-pac` logs documentation
  - Shows `pipelineruntimeout` as a failure in the `tkn-pac ls` and `tkn-pac describe` commands
  - Shows a specific pipeline run failure with the `--target-pipelinerun` option
- With this update, you can view the errors for your pipeline run in the form of a version control system (VCS) comment or a small snippet in the GitHub checks.
- With this update, Pipelines as Code optionally can detect errors inside the tasks if they are of a simple format and add those tasks as annotations in GitHub. This update is part of a developer preview feature.
- This update adds the following new commands:
  - `tkn-pac webhook add`: Adds a webhook to project repository settings and updates the `webhook.secret` key in the existing `k8s Secret` object without updating the repository.
  - `tkn-pac webhook update-token`: Updates the provider token for an existing `k8s Secret` object without updating the repository.
- This update enhances the functionality of the `tkn-pac create repo` command, which creates and configures webhooks for GitHub, GitLab, and Bitbucket Cloud along with creating repositories.
- With this update, the `tkn-pac describe` command shows the latest fifty events in a sorted order.
- This update adds the `--last` option to the `tkn-pac logs` command.
- With this update, the `tkn-pac resolve` command prompts for a token on detecting a `git_auth_secret` in the file template.
- With this update, Pipelines as Code hides secrets from log snippets to avoid exposing secrets in the GitHub interface.
- With this update, the secrets automatically generated for `git_auth_secret` are an owner reference with `PipelineRun`. The secrets get cleaned with the `PipelineRun`, not after the pipeline run execution.
- This update adds support to cancel a pipeline run with the `/cancel` comment.
- Before this update, the GitHub apps token scoping was not defined and tokens would be used on every repository installation. With this update, you can scope the GitHub apps token to the target repository using the following parameters:
  - `secret-github-app-token-scoped`: Scopes the app token to the target repository, not to every repository the app installation has access to.
  - `secret-github-app-scope-extra-repos`: Customizes the scoping of the app token with an additional owner or repository.
- With this update, you can use Pipelines as Code with your own Git repositories that are hosted on GitLab.
- With this update, you can access pipeline execution details in the form of Kubernetes events in your namespace. These details help you to troubleshoot pipeline errors without needing access to admin namespaces.
- This update supports authentication of URLs in the Pipelines as Code resolver with the Git provider.
- With this update, you can set the name of the hub catalog by using a setting in the `pipelines-as-code` config map.
- With this update, you can set the maximum and default limits for the `max-keep-run` parameter.
- This update adds documentation on how to inject custom Secure Sockets Layer (SSL) certificates in Pipelines as Code to let you connect to a provider instance with custom certificates.
- With this update, the `PipelineRun` resource definition has the log URL included as an annotation. For example, the `tkn-pac describe` command shows the log link when describing a `PipelineRun`.
- With this update, `tkn-pac` logs show the repository name, instead of the `PipelineRun` name.
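A sketch of the concurrency limit mentioned at the start of this list. The repository URL and namespace are placeholders, and the `concurrency_limit` field name is an assumption used for illustration:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repo
  namespace: example-pipelines
spec:
  url: https://github.com/example-org/example-repo
  concurrency_limit: 2   # run at most two PipelineRuns for this repository at a time
```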
4.1.4.2. Breaking changes
- With this update, the `Conditions` custom resource definition (CRD) type has been removed. As an alternative, use `WhenExpressions` instead.
- With this update, support for `tekton.dev/v1alpha1` API pipeline resources, such as Pipeline, PipelineRun, Task, Clustertask, and TaskRun, has been removed.
- With this update, the `tkn-pac setup` command has been removed. Instead, use the `tkn-pac webhook add` command to re-add a webhook to an existing Git repository. And use the `tkn-pac webhook update-token` command to update the personal provider access token for an existing Secret object in the Git repository.
- With this update, a namespace that runs a pipeline with default settings does not apply the `pod-security.kubernetes.io/enforce:privileged` label to a workload.
4.1.4.3. Deprecated and removed features
- In the Red Hat OpenShift Pipelines 1.9.0 release, `ClusterTasks` are deprecated and planned to be removed in a future release. As an alternative, you can use `Cluster Resolver`.
- In the Red Hat OpenShift Pipelines 1.9.0 release, the use of the `triggers` and the `namespaceSelector` fields in a single `EventListener` specification is deprecated and planned to be removed in a future release. You can use these fields in different `EventListener` specifications successfully.
- In the Red Hat OpenShift Pipelines 1.9.0 release, the `tkn pipelinerun describe` command does not display timeouts for the `PipelineRun` resource.
- In the Red Hat OpenShift Pipelines 1.9.0 release, the `PipelineResource` custom resource (CR) is deprecated. The `PipelineResource` CR was a Tech Preview feature and part of the `tekton.dev/v1alpha1` API.
- In the Red Hat OpenShift Pipelines 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it.
4.1.4.4. Known issues
- The `chains-secret` and `chains-config` config maps are removed after you uninstall the Red Hat OpenShift Pipelines Operator. As they contain user data, they should be preserved and not deleted.
- When running the `tkn pac` set of commands on Windows, you may receive the following error message: `Command finished with error: not supported by Windows.`

  Workaround: Set the `NO_COLOR` environment variable to `true`.
- Running the `tkn pac resolve -f <filename> | oc create -f` command may not provide expected results, if the `tkn pac resolve` command uses a templated parameter value to function.

  Workaround: To mitigate this issue, save the output of `tkn pac resolve` in a temporary file by running the `tkn pac resolve -f <filename> -o tempfile.yaml` command and then run the `oc create -f tempfile.yaml` command. For example, `tkn pac resolve -f <filename> -o /tmp/pull-request-resolved.yaml && oc create -f /tmp/pull-request-resolved.yaml`.
4.1.4.5. Fixed issues
- Before this update, after replacing an empty array, the original array returned an empty string rendering the parameters inside it invalid. With this update, this issue is resolved and the original array returns as empty.
- Before this update, if duplicate secrets were present in a service account for a pipeline run, it resulted in failure in task pod creation. With this update, this issue is resolved and the task pod is created successfully even if duplicate secrets are present in a service account.
- Before this update, by looking at the TaskRun's `spec.StatusMessage` field, users could not distinguish whether the `TaskRun` had been cancelled by the user or by a `PipelineRun` that was part of it. With this update, this issue is resolved and users can distinguish the status of the `TaskRun` by looking at the TaskRun's `spec.StatusMessage` field.
- Before this update, webhook validation was removed on deletion of old versions of invalid objects. With this update, this issue is resolved.
- Before this update, if you set the `timeouts.pipeline` parameter to `0`, you could not set the `timeouts.tasks` parameter or the `timeouts.finally` parameters. This update resolves the issue. Now, when you set the `timeouts.pipeline` parameter value, you can set the value of either the `timeouts.tasks` parameter or the `timeouts.finally` parameter. For example:

  ```yaml
  kind: PipelineRun
  spec:
    timeouts:
      pipeline: "0"  # No timeout
      tasks: "0h3m0s"
  ```

- Before this update, a race condition could occur if another tool updated labels or annotations on a PipelineRun or TaskRun. With this update, this issue is resolved and you can merge labels or annotations.
- Before this update, log keys did not have the same keys as in pipelines controllers. With this update, this issue has been resolved and the log keys have been updated to match the log stream of pipeline controllers. The keys in logs have been changed from "ts" to "timestamp", from "level" to "severity", and from "message" to "msg".
- Before this update, if a PipelineRun was deleted with an unknown status, an error message was not generated. With this update, this issue is resolved and an error message is generated.
- Before this update, to access bundle commands like `list` and `push`, it was required to use the `kubeconfig` file. With this update, this issue has been resolved and the `kubeconfig` file is not required to access bundle commands.
- Before this update, if the parent PipelineRun was running while deleting TaskRuns, then TaskRuns would be deleted. With this update, this issue is resolved and TaskRuns are not getting deleted if the parent PipelineRun is running.
- Before this update, if the user attempted to build a bundle with more objects than the pipeline controller permitted, the Tekton CLI did not display an error message. With this update, this issue is resolved and the Tekton CLI displays an error message if the user attempts to build a bundle with more objects than the limit permitted in the pipeline controller.
- Before this update, if namespaces were removed from the cluster, then the Operator did not remove namespaces from the `ClusterInterceptor ClusterRoleBinding` subjects. With this update, this issue has been resolved, and the Operator removes the namespaces from the `ClusterInterceptor ClusterRoleBinding` subjects.
- Before this update, the default installation of the Red Hat OpenShift Pipelines Operator resulted in the `pipelines-scc-rolebinding` security context constraint (SCC) role binding resource remaining in the cluster. With this update, the default installation of the Red Hat OpenShift Pipelines Operator results in the `pipelines-scc-rolebinding` security context constraint (SCC) role binding resource being removed from the cluster.
- Before this update, Pipelines as Code did not get updated values from the Pipelines as Code `ConfigMap` object. With this update, this issue is fixed and the Pipelines as Code `ConfigMap` object looks for any new changes.
- Before this update, the Pipelines as Code controller did not wait for the `tekton.dev/pipeline` label to be updated and added the `checkrun id` label, which would cause race conditions. With this update, the Pipelines as Code controller waits for the `tekton.dev/pipeline` label to be updated and then adds the `checkrun id` label, which helps to avoid race conditions.
- Before this update, the `tkn-pac create repo` command did not override a `PipelineRun` if it already existed in the Git repository. With this update, the `tkn-pac create` command is fixed to override a `PipelineRun` if it exists in the Git repository and this resolves the issue successfully.
- Before this update, the `tkn pac describe` command did not display reasons for every message. With this update, this issue is fixed and the `tkn pac describe` command displays reasons for every message.
- Before this update, a pull request failed if the user in the annotation provided values by using a regex form, for example, `refs/head/rel-*`. The pull request failed because it was missing `refs/heads` in its base branch. With this update, the prefix is added and checked that it matches. This resolves the issue and the pull request is successful.
4.1.4.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.7. Fixed issues
- Before this update, the `tkn pac repo list` command did not run on Microsoft Windows. This update fixes the issue, and now you can run the `tkn pac repo list` command on Microsoft Windows.
- Before this update, the Pipelines as Code watcher did not receive all the configuration change events. With this update, the Pipelines as Code watcher is updated, and now the Pipelines as Code watcher does not miss the configuration change events.
- Before this update, the pods created by Pipelines as Code, such as `TaskRuns` or `PipelineRuns`, could not access custom certificates exposed by the user in the cluster. This update fixes the issue, and you can now access custom certificates from the `TaskRuns` or `PipelineRuns` pods in the cluster.
- Before this update, on a cluster enabled with FIPS, the `tekton-triggers-core-interceptors` core interceptor used in the `Trigger` resource did not function after the Pipelines Operator was upgraded to version 1.9. This update resolves the issue. Now, OpenShift uses MinTLS 1.2 for all its components. As a result, the `tekton-triggers-core-interceptors` core interceptor updates to TLS version 1.2 and its functionality runs accurately.
- Before this update, when using a pipeline run with an internal OpenShift image registry, the URL to the image had to be hardcoded in the pipeline run definition. For example:

  ```yaml
  ...
  - name: IMAGE_NAME
    value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'
  ...
  ```

  When using a pipeline run in the context of Pipelines as Code, such hardcoded values prevented the pipeline run definitions from being used in different clusters and namespaces.

  With this update, you can use the dynamic template variables instead of hardcoding the values for namespaces and pipeline run names to generalize pipeline run definitions. For example:

  ```yaml
  ...
  - name: IMAGE_NAME
    value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/$(context.pipelineRun.name)'
  ...
  ```

- Before this update, Pipelines as Code used the same GitHub token to fetch a remote task available in the same host only on the default GitHub branch. This update resolves the issue. Now Pipelines as Code uses the same GitHub token to fetch a remote task from any GitHub branch.
4.1.4.8. Known issues
The value for `CATALOG_REFRESH_INTERVAL`, a field in the Hub API `ConfigMap` object used in the Tekton Hub CR, is not getting updated with a custom value provided by the user.

Workaround: None. You can track the issue SRVKP-2854.
4.1.4.9. Breaking changes
- With this update, an OLM misconfiguration issue has been introduced, which prevents the upgrade of OpenShift Container Platform. This issue will be fixed in a future release.
4.1.4.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.11. Fixed issues
- Before this update, an OLM misconfiguration issue had been introduced in the previous version of the release, which prevented the upgrade of OpenShift Container Platform. With this update, this misconfiguration issue has been fixed.
4.1.4.12. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.3 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.
4.1.4.13. Fixed issues
- This update fixes the performance issues for huge pipelines. Now, the CPU usage is reduced by 61% and the memory usage is reduced by 44%.
- Before this update, a pipeline run would fail if a task did not run because of its `when` expression. This update fixes the issue by preventing the validation of a skipped task result in pipeline results. Now, the pipeline result is not emitted and the pipeline run does not fail because of a missing result.
- This update fixes the `pipelineref.bundle` conversion to the bundle resolver for the `v1beta1` API. Now, the conversion feature sets the value of the `kind` field to `Pipeline` after conversion.
- Before this update, an issue in the Pipelines Operator prevented the user from setting the value of the `spec.pipeline.enable-api-fields` field to `beta`. This update fixes the issue. Now, you can set the value to `beta` along with `alpha` and `stable` in the `TektonConfig` custom resource.
- Before this update, when Pipelines as Code could not create a secret due to a cluster error, it would show the temporary token on the GitHub check run, which is public. This update fixes the issue. Now, the token is no longer displayed on the GitHub checks interface when the creation of the secret fails.
4.1.4.14. Known issues
- There is currently a known issue with the stop option for pipeline runs in the OpenShift Container Platform web console. The stop option in the Actions drop-down list is not working as expected and does not cancel the pipeline run.
There is currently a known issue with upgrading to Pipelines version 1.9.x due to a failing custom resource definition conversion.
Workaround: Before upgrading to Pipelines version 1.9.x, perform the step mentioned in the solution on the Red Hat Customer Portal.
4.1.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.8
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.
4.1.5.1. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.8.
4.1.5.1.1. Pipelines
- With this update, you can run Red Hat OpenShift Pipelines GA 1.8 and later on an OpenShift Container Platform cluster that is running on ARM hardware. This includes support for `ClusterTask` resources and the `tkn` CLI tool.

Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

- This update implements `Step` and `Sidecar` overrides for `TaskRun` resources.
- This update adds minimal `TaskRun` and `Run` statuses within `PipelineRun` statuses.

  To enable this feature, in the `TektonConfig` custom resource definition, in the `pipeline` section, you must set the `enable-api-fields` field to `alpha` (see the sketch at the end of this section).
- With this update, the graceful termination of pipeline runs feature is promoted from an alpha feature to a stable feature. As a result, the previously deprecated `PipelineRunCancelled` status remains deprecated and is planned to be removed in a future release.

  Because this feature is available by default, you no longer need to set the `pipeline.enable-api-fields` field to `alpha` in the `TektonConfig` custom resource definition.
- With this update, you can specify the workspace for a pipeline task by using the name of the workspace. This change makes it easier to specify a shared workspace for a pair of `Pipeline` and `PipelineTask` resources. You can also continue to map workspaces explicitly.

  To enable this feature, in the `TektonConfig` custom resource definition, in the `pipeline` section, you must set the `enable-api-fields` field to `alpha`.
- With this update, parameters in embedded specifications are propagated without mutations.
- With this update, you can specify the required metadata of a `Task` resource referenced by a `PipelineRun` resource by using annotations and labels. This way, `Task` metadata that depends on the execution context is available during the pipeline run.
- This update adds support for object or dictionary types in `params` and `results` values. This change affects backward compatibility and sometimes breaks forward compatibility, such as using an earlier client with a later Red Hat OpenShift Pipelines version. This update changes the `ArrayOrStruct` structure, which affects projects that use the Go language API as a library.
- This update adds a `SkippingReason` value to the `SkippedTasks` field of the `PipelineRun` status fields so that users know why a given PipelineTask was skipped.
- This update supports an alpha feature in which you can use an `array` type for emitting results from a `Task` object. The result type is changed from `string` to `ArrayOrString`. For example, a task can specify a type to produce an array result:

  ```yaml
  kind: Task
  apiVersion: tekton.dev/v1beta1
  metadata:
    name: write-array
    annotations:
      description: |
        A simple task that writes array
  spec:
    results:
      - name: array-results
        type: array
        description: The array results
  ...
  ```

  Additionally, you can run a task script to populate the results with an array:

  ```
  $ echo -n "[\"hello\",\"world\"]" | tee $(results.array-results.path)
  ```

  To enable this feature, in the `TektonConfig` custom resource definition, in the `pipeline` section, you must set the `enable-api-fields` field to `alpha`.

  This feature is in progress and is part of TEP-0076.
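Several items above require alpha API fields. A minimal sketch of that setting in the `pipeline` section of the `TektonConfig` custom resource, following the wording of those items:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-api-fields: alpha   # opt in to the alpha features described in this section
```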
4.1.5.1.2. Triggers
- This update transitions the `TriggerGroups` field in the `EventListener` specification from an alpha feature to a stable feature. Using this field, you can specify a set of interceptors before selecting and running a group of triggers.

  Because this feature is available by default, you no longer need to set the `pipeline.enable-api-fields` field to `alpha` in the `TektonConfig` custom resource definition.
- With this update, the `Trigger` resource supports end-to-end secure connections by running the `ClusterInterceptor` server using HTTPS.
4.1.5.1.3. CLI
- With this update, you can use the `tkn taskrun export` command to export a live task run from a cluster to a YAML file, which you can use to import the task run to another cluster.
- With this update, you can add the `-o name` flag to the `tkn pipeline start` command to print the name of the pipeline run right after it starts.
- This update adds a list of available plug-ins to the output of the `tkn --help` command.
- With this update, while deleting a pipeline run or task run, you can use both the `--keep` and `--keep-since` flags together.
- With this update, you can use `Cancelled` as the value of the `spec.status` field rather than the deprecated `PipelineRunCancelled` value.
4.1.5.1.4. Operator
- With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom database rather than the default database.
With this update, as a cluster administrator, if you enable your local Tekton Hub instance, it periodically refreshes the database so that changes in the catalog appear in the Tekton Hub web console. You can adjust the period between refreshes.
Previously, to add the tasks and pipelines in the catalog to the database, you performed that task manually or set up a cron job to do it for you.
- With this update, you can install and run a Tekton Hub instance with minimal configuration. This way, you can start working with your teams to decide which additional customizations they might want.
- This update adds `GIT_SSL_CAINFO` to the `git-clone` task so you can clone secured repositories.
4.1.5.1.5. Tekton Chains
Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- With this update, you can log in to a vault by using OIDC rather than a static token. This change means that Spire can generate the OIDC credential so that only trusted workloads are allowed to log in to the vault. Additionally, you can pass the vault address as a configuration value rather than inject it as an environment variable.
- The `chains-config` config map for Tekton Chains in the `openshift-pipelines` namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator because directly updating the config map is not supported when installed by using the Red Hat OpenShift Pipelines Operator. However, with this update, you can configure Tekton Chains by using the `TektonChain` custom resource. This feature enables your configuration to persist after upgrading, unlike the `chains-config` config map, which gets overwritten during upgrades.
4.1.5.1.6. Tekton Hub
Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
With this update, if you install a fresh instance of Tekton Hub by using the Operator, the Tekton Hub login is disabled by default. To enable the login and rating features, you must create the Hub API secret while installing Tekton Hub.
Note: Because Tekton Hub login was enabled by default in Red Hat OpenShift Pipelines 1.7, if you upgrade the Operator, the login is enabled by default in Red Hat OpenShift Pipelines 1.8. To disable this login, see Disabling Tekton Hub login after upgrading from OpenShift Pipelines 1.7.x to 1.8.x.
With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom PostgreSQL 13 database rather than the default database. To do so, create a `Secret` resource named `tekton-hub-db`. For example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tekton-hub-db
  labels:
    app: tekton-hub-db
type: Opaque
stringData:
  POSTGRES_HOST: <hostname>
  POSTGRES_DB: <database_name>
  POSTGRES_USER: <username>
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: <listening_port_number>
```
- With this update, you no longer need to log in to the Tekton Hub web console to add resources from the catalog to the database. Now, these resources are automatically added when the Tekton Hub API starts running for the first time.
- This update automatically refreshes the catalog every 30 minutes by calling the catalog refresh API job. This interval is user-configurable.
4.1.5.1.7. Pipelines as Code
Pipelines as Code (PAC) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
With this update, as a developer, you get a notification from the
tkn-pac
CLI tool if you try to add a duplicate repository to a Pipelines as Code run. When you entertkn pac create repository
, each repository must have a unique URL. This notification also helps prevent hijacking exploits. -
With this update, as a developer, you can use the new
tkn-pac setup cli
command to add a Git repository to Pipelines as Code by using the webhook mechanism. This way, you can use Pipelines as Code even when using GitHub Apps is not feasible. This capability includes support for repositories on GitHub, GitLab, and BitBucket. With this update, Pipelines as Code supports GitLab integration with features such as the following:
- ACL (Access Control List) on project or group
-
/ok-to-test
support from allowed users -
/retest
support.
With this update, you can perform advanced pipeline filtering with Common Expression Language (CEL). With CEL, you can match pipeline runs with different Git provider events by using annotations in the
PipelineRun
resource. For example:
...
annotations:
  pipelinesascode.tekton.dev/on-cel-expression: |
    event == "pull_request" && target_branch == "main" && source_branch == "wip"
-
Previously, as a developer, you could have only one pipeline run in your
.tekton
directory for each Git event, such as a pull request. With this update, you can have multiple pipeline runs in your .tekton
directory. The web console displays the status and reports of the runs. The pipeline runs operate in parallel and report back to the Git provider interface. -
With this update, you can test or retest a pipeline run by commenting
/test
or /retest
on a pull request. You can also specify the pipeline run by name. For example, you can enter /test <pipelinerun_name>
or /retest <pipelinerun_name>
. -
With this update, you can delete a repository custom resource and its associated secrets by using the new
tkn-pac delete repository
command.
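For example, a minimal invocation, where <repository_name> is the name of the Repository custom resource to remove:
$ tkn pac delete repository <repository_name>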
4.1.5.2. Breaking changes
This update changes the default metrics level of
TaskRun
andPipelineRun
resources to the following values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-observability
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
data:
  _example: |
    ...
    metrics.taskrun.level: "task"
    metrics.taskrun.duration-type: "histogram"
    metrics.pipelinerun.level: "pipeline"
    metrics.pipelinerun.duration-type: "histogram"
-
With this update, if an annotation or label is present in both
Pipeline
andPipelineRun
resources, the value in theRun
type takes precedence. The same is true if an annotation or label is present inTask
andTaskRun
resources. -
In Red Hat OpenShift Pipelines 1.8, the previously deprecated
PipelineRun.Spec.ServiceAccountNames
field has been removed. Use thePipelineRun.Spec.TaskRunSpecs
field instead. -
In Red Hat OpenShift Pipelines 1.8, the previously deprecated
TaskRun.Status.ResourceResults.ResourceRef
field has been removed. Use theTaskRun.Status.ResourceResults.ResourceName
field instead. -
In Red Hat OpenShift Pipelines 1.8, the previously deprecated
Conditions
resource type has been removed. Remove theConditions
resource fromPipeline
resource definitions that include it. Usewhen
expressions inPipelineRun
definitions instead.
-
For Tekton Chains, the
tekton-provenance
format has been removed in this release. Use thein-toto
format by setting"artifacts.taskrun.format": "in-toto"
in theTektonChain
custom resource instead.
Red Hat OpenShift Pipelines 1.7.x shipped with Pipelines as Code 0.5.x. The current update ships with Pipelines as Code 0.10.x. This change creates a new route in the
openshift-pipelines
namespace for the new controller. You must update this route in GitHub Apps or webhooks that use Pipelines as Code. To fetch the route, use the following command:
$ oc get route -n openshift-pipelines pipelines-as-code-controller \
  --template='https://{{ .spec.host }}'
-
With this update, Pipelines as Code renames the default secret keys for the
Repository
custom resource definition (CRD). In your CRD, replacetoken
withprovider.token
, and replacesecret
withwebhook.secret
. -
With this update, Pipelines as Code replaces a special template variable with one that supports multiple pipeline runs for private repositories. In your pipeline runs, replace
secret: pac-git-basic-auth-{{repo_owner}}-{{repo_name}}
withsecret: {{ git_auth_secret }}
. With this update, Pipelines as Code updates the following commands in the
tkn-pac
CLI tool:-
Replace
tkn pac repository create
withtkn pac create repository
. -
Replace
tkn pac repository delete
withtkn pac delete repository
. -
Replace
tkn pac repository list
withtkn pac list
.
-
4.1.5.3. Deprecated and removed features
Starting with OpenShift Container Platform 4.11, the
preview
andstable
channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are removed. To install and upgrade the Operator, use the appropriatepipelines-<version>
channel, or thelatest
channel for the most recent stable version. For example, to install the Pipelines Operator version1.8.x
, use thepipelines-1.8
channel.NoteIn OpenShift Container Platform 4.10 and earlier versions, you can use the
preview
andstable
channels for installing and upgrading the Operator.Support for the
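As an illustration, a Subscription object that selects the pipelines-1.8 channel might look like the following sketch; the package name, catalog source, and namespaces reflect a typical installation and are assumptions here:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: pipelines-1.8
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace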
tekton.dev/v1alpha1
API version, which was deprecated in Red Hat OpenShift Pipelines GA 1.6, is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.This change affects the pipeline component, which includes the
TaskRun
,PipelineRun
,Task
,Pipeline
, and similartekton.dev/v1alpha1
resources. As an alternative, update existing resources to useapiVersion: tekton.dev/v1beta1
as described in Migrating From Tekton v1alpha1 to Tekton v1beta1.Bug fixes and support for the
tekton.dev/v1alpha1
API version are provided only through the end of the current GA 1.8 lifecycle.ImportantFor the Tekton Operator, the
operator.tekton.dev/v1alpha1
API version is not deprecated. You do not need to make changes to this value.-
In Red Hat OpenShift Pipelines 1.8, the
PipelineResource
custom resource (CR) is available but no longer supported. ThePipelineResource
CR was a Tech Preview feature and part of thetekton.dev/v1alpha1
API, which had been deprecated and planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. -
In Red Hat OpenShift Pipelines 1.8, the
Condition
custom resource (CR) is removed. TheCondition
CR was part of thetekton.dev/v1alpha1
API, which has been deprecated and is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. -
In Red Hat OpenShift Pipelines 1.8, the
gcr.io
image forgsutil
has been removed. This removal might break clusters withPipeline
resources that depend on this image. Bug fixes and support are provided only through the end of the Red Hat OpenShift Pipelines 1.7 lifecycle.
-
In Red Hat OpenShift Pipelines 1.8, the
PipelineRun.Status.TaskRuns
andPipelineRun.Status.Runs
fields are deprecated and are planned to be removed in a future release. See TEP-0100: Embedded TaskRuns and Runs Status in PipelineRuns. In Red Hat OpenShift Pipelines 1.8, the
pipelineRunCancelled
state is deprecated and planned to be removed in a future release. Graceful termination ofPipelineRun
objects is now promoted from an alpha feature to a stable feature. (See TEP-0058: Graceful Pipeline Run Termination.) As an alternative, you can use theCancelled
state, which replaces thepipelineRunCancelled
state.You do not need to make changes to your
Pipeline
andTask
resources. If you have tools that cancel pipeline runs, you must update tools in the next release. This change also affects tools such as the CLI, IDE extensions, and so on, so that they support the newPipelineRun
statuses.Because this feature is available by default, you no longer need to set the
pipeline.enable-api-fields
field toalpha
in theTektonConfig
custom resource definition.In Red Hat OpenShift Pipelines 1.8, the
timeout
field inPipelineRun
has been deprecated. Instead, use thePipelineRun.Timeouts
field, which is now promoted from an alpha feature to a stable feature.Because this feature is available by default, you no longer need to set the
pipeline.enable-api-fields
field toalpha
in theTektonConfig
custom resource definition.-
In Red Hat OpenShift Pipelines 1.8,
init
containers are omitted from theLimitRange
object’s default request calculations.
4.1.5.4. Known issues
The
s2i-nodejs
pipeline cannot use thenodejs:14-ubi8-minimal
image stream to perform source-to-image (S2I) builds. Using that image stream produces an error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
message.Workaround: Use
nodejs:14-ubi8
rather than thenodejs:14-ubi8-minimal
image stream.
When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters.
Workaround: Specify a custom image by setting the
MAVEN_IMAGE
parameter value tomaven:3.6.3-adoptopenjdk-11
.TipBefore you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using
tkn hub
, verify if the task can be executed on these platforms. To check ifppc64le
ands390x
are listed in the "Platforms" section of the task information, you can run the following command:
tkn hub info task <name>
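For the Maven workaround described above, a hedged sketch of the parameter override as it might appear in a pipeline run or task run specification (surrounding fields omitted):
...
  params:
    - name: MAVEN_IMAGE
      value: maven:3.6.3-adoptopenjdk-11
...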
-
On ARM, IBM Power Systems, IBM Z, and LinuxONE, the
s2i-dotnet
cluster task is unsupported.
-
Implicit parameter mapping incorrectly passes parameters from the top-level
Pipeline
orPipelineRun
definitions to thetaskRef
tasks. Mapping should only occur from a top-level resource to tasks with in-linetaskSpec
specifications. This issue only affects clusters where this feature was enabled by setting theenable-api-fields
field toalpha
in thepipeline
section of theTektonConfig
custom resource definition.
4.1.5.5. Fixed issues
- Before this update, the metrics for pipeline runs in the Developer view of the web console were incomplete and outdated. With this update, the issue has been fixed so that the metrics are correct.
-
Before this update, if a pipeline had two parallel tasks that failed and one of them had
retries=2
, the final tasks never ran, and the pipeline timed out and failed to run. For example, thepipelines-operator-subscription
task failed intermittently with the following error message:Unable to connect to the server: EOF
. With this update, the issue has been fixed so that the final tasks always run. -
Before this update, if a pipeline run stopped because a task run failed, other task runs might not complete their retries. As a result, no
finally
tasks were scheduled, which caused the pipeline to hang. This update resolves the issue.TaskRuns
andRun
objects can retry when a pipeline run has stopped, even by graceful stopping, so that pipeline runs can complete. -
This update changes how resource requirements are calculated when one or more
LimitRange
objects are present in the namespace where aTaskRun
object exists. The scheduler now considersstep
containers and excludes all other app containers, such as sidecar containers, when factoring requests fromLimitRange
objects. -
Before this update, under specific conditions, the flag package might incorrectly parse a subcommand immediately following a double dash flag terminator,
--
. In that case, it ran the entrypoint subcommand rather than the actual command. This update fixes this flag-parsing issue so that the entrypoint runs the correct command. -
Before this update, the controller might generate multiple panics if pulling an image failed, or its pull status was incomplete. This update fixes the issue by checking the
step.ImageID
value rather than thestatus.TaskSpec
value. -
Before this update, canceling a pipeline run that contained an unscheduled custom task produced a
PipelineRunCouldntCancel
error. This update fixes the issue. You can cancel a pipeline run that contains an unscheduled custom task without producing that error. Before this update, if the
<NAME>
in$params["<NAME>"]
or$params['<NAME>']
contained a dot character (.
), any part of the name to the right of the dot was not extracted. For example, from$params["org.ipsum.lorem"]
, onlyorg
was extracted.This update fixes the issue so that
$params
fetches the complete value. For example,$params["org.ipsum.lorem"]
and$params['org.ipsum.lorem']
are valid and the entire value of<NAME>
,org.ipsum.lorem
, is extracted.It also throws an error if
<NAME>
is not enclosed in single or double quotes. For example,$params.org.ipsum.lorem
is not valid and generates a validation error.
-
With this update,
Trigger
resources support custom interceptors and ensure that the port of the custom interceptor service is the same as the port in theClusterInterceptor
definition file.
-
Before this update, the
tkn version
command for Tekton Chains and Operator components did not work correctly. This update fixes the issue so that the command works correctly and returns version information for those components. -
Before this update, if you ran a
tkn pr delete --ignore-running
command and a pipeline run did not have astatus.condition
value, thetkn
CLI tool produced a null-pointer error (NPE). This update fixes the issue so that the CLI tool now generates an error and correctly ignores pipeline runs that are still running. -
Before this update, if you used the
tkn pr delete --keep <value>
ortkn tr delete --keep <value>
commands, and the number of pipeline runs or task runs was less than the value, the command did not return an error as expected. This update fixes the issue so that the command correctly returns an error under those conditions. -
Before this update, if you used the
tkn pr delete
ortkn tr delete
commands with the-p
or-t
flags together with the--ignore-running
flag, the commands incorrectly deleted running or pending resources. This update fixes the issue so that these commands correctly ignore running or pending resources.
-
With this update, you can configure Tekton Chains by using the
TektonChain
custom resource. This feature enables your configuration to persist after upgrading, unlike thechains-config
config map, which gets overwritten during upgrades. -
With this update,
ClusterTask
resources no longer run as root by default, except for thebuildah
ands2i
cluster tasks. -
Before this update, tasks on Red Hat OpenShift Pipelines 1.7.1 failed when using
init
as a first argument followed by two or more arguments. With this update, the flags are parsed correctly, and the task runs are successful. Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to an invalid role binding, with the following error message:
error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef
This update fixes the issue so that the failure no longer occurs.
-
Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the
pipeline
service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates thepipeline
service account. As a result, secrets attached to thepipeline
service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. -
With this update, Pipelines as Code pods run on infrastructure nodes if infrastructure node settings are configured in the
TektonConfig
custom resource (CR). Previously, with the resource pruner, each namespace Operator created a command that ran in a separate container. This design consumed too many resources in clusters with a high number of namespaces. For example, to run a single command, a cluster with 1000 namespaces produced 1000 containers in a pod.
This update fixes the issue. It passes the namespace-based configuration to the job so that all the commands run in one container in a loop.
-
In Tekton Chains, you must define a secret called
signing-secrets
to hold the key used for signing tasks and images. However, before this update, updating the Red Hat OpenShift Pipelines Operator reset or overwrote this secret, and the key was lost. This update fixes the issue. Now, if the secret is configured after installing Tekton Chains through the Operator, the secret persists, and it is not overwritten by upgrades. Before this update, all S2I build tasks failed with an error similar to the following message:
Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"
With this update, the
pipelines-scc
security context constraint (SCC) is compatible with theSETFCAP
capability necessary forBuildah
andS2I
cluster tasks. As a result, theBuildah
andS2I
build tasks can run successfully.To successfully run the
Buildah
cluster task andS2I
build tasks for applications written in various languages and frameworks, add the following snippet for appropriatesteps
objects such asbuild
andpush
:
securityContext:
  capabilities:
    add: ["SETFCAP"]
- Before this update, installing the Red Hat OpenShift Pipelines Operator took longer than expected. This update optimizes some settings to speed up the installation process.
-
With this update, Buildah and S2I cluster tasks have fewer steps than in previous versions. Some steps have been combined into a single step so that they work better with
ResourceQuota
andLimitRange
objects and do not require more resources than necessary. -
This update upgrades the Buildah,
tkn
CLI tool, andskopeo
CLI tool versions in cluster tasks. -
Before this update, the Operator failed when creating RBAC resources if any namespace was in a
Terminating
state. With this update, the Operator ignores namespaces in aTerminating
state and creates the RBAC resources. -
Before this update, pods for the prune cronjobs were not scheduled on infrastructure nodes, as expected. Instead, they were scheduled on worker nodes or not scheduled at all. With this update, these types of pods can now be scheduled on infrastructure nodes if configured in the
TektonConfig
custom resource (CR).
4.1.5.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.1 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.
4.1.5.6.1. Known issues
By default, the containers have restricted permissions for enhanced security. The restricted permissions apply to all controller pods in the Red Hat OpenShift Pipelines Operator, and to some cluster tasks. Due to restricted permissions, the
git-clone
cluster task fails under certain configurations.Workaround: None. You can track the issue SRVKP-2634.
When installer sets are in a failed state, the status of the
TektonConfig
custom resource is incorrectly displayed asTrue
instead ofFalse
.Example: Failed installer sets
$ oc get tektoninstallerset
NAME                                     READY   REASON
addon-clustertasks-nx5xz                 False   Error
addon-communityclustertasks-cfb2p        True
addon-consolecli-ftrb8                   True
addon-openshift-67dj2                    True
addon-pac-cf7pz                          True
addon-pipelines-fvllm                    True
addon-triggers-b2wtt                     True
addon-versioned-clustertasks-1-8-hqhnw   False   Error
pipeline-w75ww                           True
postpipeline-lrs22                       True
prepipeline-ldlhw                        True
rhosp-rbac-4dmgb                         True
trigger-hfg64                            True
validating-mutating-webhoook-28rf7       True
Example: Incorrect
TektonConfig
status
$ oc get tektonconfig config
NAME     VERSION   READY   REASON
config   1.8.1     True
4.1.5.6.2. Fixed issues
-
Before this update, the pruner deleted task runs of running pipelines and displayed the following warning:
some tasks were indicated completed without ancestors being done
. With this update, the pruner retains the task runs that are part of running pipelines. -
Before this update,
pipeline-1.8
was the default channel for installing the Red Hat OpenShift Pipelines Operator 1.8.x. With this update,latest
is the default channel. - Before this update, the Pipelines as Code controller pods did not have access to certificates exposed by the user. With this update, Pipelines as Code can now access routes and Git repositories guarded by a self-signed or a custom certificate.
- Before this update, the task failed with RBAC errors after upgrading from Red Hat OpenShift Pipelines 1.7.2 to 1.8.0. With this update, the tasks run successfully without any RBAC errors.
-
Before this update, using the
tkn
CLI tool, you could not remove task runs and pipeline runs that contained aresult
object whose type wasarray
. With this update, you can use thetkn
CLI tool to remove task runs and pipeline runs that contain aresult
object whose type isarray
. -
Before this update, if a pipeline specification contained a task with an
ENV_VARS
parameter ofarray
type, the pipeline run failed with the following error:invalid input params for task func-buildpacks: param types don’t match the user-specified type: [ENV_VARS]
. With this update, pipeline runs with such pipeline and task specifications do not fail. -
Before this update, cluster administrators could not provide a
config.json
file to theBuildah
cluster task for accessing a container registry. With this update, cluster administrators can provide theBuildah
cluster task with aconfig.json
file by using thedockerconfig
workspace.
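As an illustration, a task run that provides the Buildah cluster task with a config.json file might bind a secret to the dockerconfig workspace as in the following sketch; the secret name is an assumption:
...
  workspaces:
    - name: dockerconfig
      secret:
        secretName: regcred
...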
4.1.5.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.2 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.
4.1.5.7.1. Fixed issues
-
Before this update, the
git-clone
task failed when cloning a repository using SSH keys. With this update, the role of the non-root user in thegit-init
task is removed, and the SSH program looks in the$HOME/.ssh/
directory for the correct keys.
4.1.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.
4.1.6.1. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.7.
4.1.6.1.1. Pipelines
With this update,
pipelines-<version>
is the default channel to install the Red Hat OpenShift Pipelines Operator. For example, the default channel to install the Pipelines Operator version1.7
ispipelines-1.7
. Cluster administrators can also use thelatest
channel to install the most recent stable version of the Operator.NoteThe
preview
andstable
channels will be deprecated and removed in a future release.When you run a command in a user namespace, your container runs as
root
(user id0
) but has user privileges on the host. With this update, to run pods in the user namespace, you must pass the annotations that CRI-O expects.-
To add these annotations for all users, run the
oc edit clustertask buildah
command and edit thebuildah
cluster task. - To add the annotations to a specific namespace, export the cluster task as a task to that namespace.
-
Before this update, if certain conditions were not met, the
when
expression skipped aTask
object and its dependent tasks. With this update, you can scope thewhen
expression to guard theTask
object only, not its dependent tasks. To enable this update, set thescope-when-expressions-to-task
flag totrue
in theTektonConfig
CRD.NoteThe
scope-when-expressions-to-task
flag is deprecated and will be removed in a future release. As a best practice for Pipelines, usewhen
expressions scoped to the guardedTask
only.-
With this update, you can use variable substitution in the
subPath
field of a workspace within a task. With this update, you can reference parameters and results by using a bracket notation with single or double quotes. Prior to this update, you could only use the dot notation. For example, the following are now equivalent:
$(param.myparam)
,$(param['myparam'])
, and$(param["myparam"])
.You can use single or double quotes to enclose parameter names that contain problematic characters, such as
"."
. For example,$(param['my.param'])
and$(param["my.param"])
.
-
With this update, you can include the
onError
parameter of a step in the task definition without enabling theenable-api-fields
flag.
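For illustration, a minimal sketch of a step that sets the onError parameter so that a failing step does not fail the task run; the step name, image, and script are assumptions:
...
  steps:
    - name: may-fail
      image: registry.access.redhat.com/ubi8/ubi-minimal
      onError: continue
      script: |
        exit 1
...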
4.1.6.1.2. Triggers
-
With this update, the
feature-flag-triggers
config map has a new fieldlabels-exclusion-pattern
. You can set the value of this field to a regular expression (regex) pattern. The controller filters out labels that match the regex pattern from propagating from the event listener to the resources created for the event listener. -
With this update, the
TriggerGroups
field is added to theEventListener
specification. Using this field, you can specify a set of interceptors to run before selecting and running a group of triggers. To enable this feature, in theTektonConfig
custom resource definition, in thepipeline
section, you must set theenable-api-fields
field toalpha
. -
With this update,
Trigger
resources support custom runs defined by aTriggerTemplate
template. -
With this update, Triggers support emitting Kubernetes events from an
EventListener
pod. -
With this update, count metrics are available for the following objects:
ClusterInteceptor
,EventListener
,TriggerTemplate
,ClusterTriggerBinding
, andTriggerBinding
. -
This update adds the
ServicePort
specification to Kubernetes resource. You can use this specification to modify which port exposes the event listener service. The default port is8080
. -
With this update, you can use the
targetURI
field in theEventListener
specification to send cloud events during trigger processing. To enable this feature, in theTektonConfig
custom resource definition, in thepipeline
section, you must set theenable-api-fields
field toalpha
. -
With this update, the
tekton-triggers-eventlistener-roles
object now has apatch
verb, in addition to thecreate
verb that already exists. -
With this update, the
securityContext.runAsUser
parameter is removed from event listener deployment.
4.1.6.1.3. CLI
With this update, the
tkn [pipeline | pipelinerun] export
command exports a pipeline or pipeline run as a YAML file. For example:Export a pipeline named
test_pipeline
in theopenshift-pipelines
namespace:
$ tkn pipeline export test_pipeline -n openshift-pipelines
Export a pipeline run named
test_pipeline_run
in theopenshift-pipelines
namespace:
$ tkn pipelinerun export test_pipeline_run -n openshift-pipelines
-
With this update, the
--grace
option is added to thetkn pipelinerun cancel
. Use the--grace
option to terminate a pipeline run gracefully instead of forcing the termination. To enable this feature, in theTektonConfig
custom resource definition, in thepipeline
section, you must set theenable-api-fields
field toalpha
. This update adds the Operator and Chains versions to the output of the
tkn version
command.ImportantTekton Chains is a Technology Preview feature.
-
With this update, the
tkn pipelinerun describe
command displays all canceled task runs, when you cancel a pipeline run. Before this fix, only one task run was displayed. -
With this update, you can skip prompts for optional workspace specifications when you run the
tkn [t | p | ct] start
command with the --skip-optional-workspace
flag. You can also skip them when running in interactive mode.
tkn chains
command to manage Tekton Chains. You can also use the--chains-namespace
option to specify the namespace where you want to install Tekton Chains.ImportantTekton Chains is a Technology Preview feature.
4.1.6.1.4. Operator
With this update, you can use the Red Hat OpenShift Pipelines Operator to install and deploy Tekton Hub and Tekton Chains.
ImportantTekton Chains and deployment of Tekton Hub on a cluster are Technology Preview features.
With this update, you can find and use Pipelines as Code (PAC) as an add-on option.
ImportantPipelines as Code is a Technology Preview feature.
With this update, you can now disable the installation of community cluster tasks by setting the
communityClusterTasks
parameter tofalse
. For example:
...
spec:
  profile: all
  targetNamespace: openshift-pipelines
  addon:
    params:
    - name: clusterTasks
      value: "true"
    - name: pipelineTemplates
      value: "true"
    - name: communityClusterTasks
      value: "false"
...
With this update, you can disable the integration of Tekton Hub with the Developer perspective by setting the
enable-devconsole-integration
flag in theTektonConfig
custom resource tofalse
. For example:
...
hub:
  params:
    - name: enable-devconsole-integration
      value: "true"
...
-
With this update, the
operator-config.yaml
config map enables the output of the tkn version
command to display the Operator version.
With this update, the version of the
argocd-task-sync-and-wait
tasks is modified tov0.2
. -
With this update to the
TektonConfig
CRD, the oc get tektonconfig
command displays the Operator version.
4.1.6.1.5. Hub
Deploying Tekton Hub on a cluster is a Technology Preview feature.
Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev.
Starting with Red Hat OpenShift Pipelines 1.7, cluster administrators can also install and deploy a custom instance of Tekton Hub on enterprise clusters. You can curate a catalog with reusable tasks and pipelines specific to your organization.
4.1.6.1.6. Chains
Tekton Chains is a Technology Preview feature.
Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines.
By default, Tekton Chains monitors the task runs in your OpenShift Container Platform cluster. Chains takes snapshots of completed task runs, converts them to one or more standard payload formats, and signs and stores all artifacts.
Tekton Chains supports the following features:
-
You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as
cosign
. -
You can use attestation formats such as
in-toto
. - You can securely store signatures and signed artifacts using OCI repository as a storage backend.
4.1.6.1.7. Pipelines as Code (PAC)
Pipelines as Code is a Technology Preview feature.
With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports status.
Pipelines as Code supports the following features:
- Pull request status. When iterating over a pull request, the status and control of the pull request is exercised on the platform hosting the Git repository.
- Use of the GitHub Checks API to set the status of a pipeline run, including rechecks.
- GitHub pull request and commit events.
-
Pull request actions in comments, such as
/retest
. - Git events filtering, and a separate pipeline for each event.
- Automatic task resolution in Pipelines for local tasks, Tekton Hub, and remote URLs.
- Use of GitHub blobs and objects API for retrieving configurations.
-
Access Control List (ACL) over a GitHub organization, or using a Prow-style
OWNER
file. -
The
tkn pac
plugin for thetkn
CLI tool, which you can use to manage Pipelines as Code repositories and bootstrapping. - Support for GitHub Application, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud.
4.1.6.2. Deprecated features
-
Breaking change: This update removes the
disable-working-directory-overwrite
anddisable-home-env-overwrite
fields from theTektonConfig
custom resource (CR). As a result, theTektonConfig
CR no longer automatically sets the$HOME
environment variable andworkingDir
parameter. You can still set the$HOME
environment variable andworkingDir
parameter by using theenv
andworkingDir
fields in theTask
custom resource definition (CRD).
-
The
Conditions
custom resource definition (CRD) type is deprecated and planned to be removed in a future release. Instead, use the recommendedWhen
expression.
-
Breaking change: The
Triggers
resource validates the templates and generates an error if you do not specify theEventListener
andTriggerBinding
values.
4.1.6.3. Known issues
When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the
MAVEN_IMAGE
parameter value tomaven:3.6.3-adoptopenjdk-11
.TipBefore you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using
tkn hub
, verify if the task can be executed on these platforms. To check ifppc64le
ands390x
are listed in the "Platforms" section of the task information, you can run the following command:
tkn hub info task <name>
-
On IBM Power Systems, IBM Z, and LinuxONE, the
s2i-dotnet
cluster task is unsupported. You cannot use the
nodejs:14-ubi8-minimal
image stream because doing so generates the following errors:
STEP 7: RUN /usr/libexec/s2i/assemble
/bin/sh: /usr/libexec/s2i/assemble: No such file or directory
subprocess exited with status 127
subprocess exited with status 127
error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
time="2021-11-04T13:05:26Z" level=error msg="exit status 127"
-
Implicit parameter mapping incorrectly passes parameters from the top-level
Pipeline
orPipelineRun
definitions to thetaskRef
tasks. Mapping should only occur from a top-level resource to tasks with in-linetaskSpec
specifications. This issue only affects clusters where this feature was enabled by setting theenable-api-fields
field toalpha
in thepipeline
section of theTektonConfig
custom resource definition.
4.1.6.4. Fixed issues
-
With this update, if metadata such as
labels
andannotations
are present in bothPipeline
andPipelineRun
object definitions, the values in thePipelineRun
type takes precedence. You can observe similar behavior forTask
andTaskRun
objects. -
With this update, if the
timeouts.tasks
field or thetimeouts.finally
field is set to0
, then thetimeouts.pipeline
is also set to0
. -
With this update, the
-x
set flag is removed from scripts that do not use a shebang. The fix reduces potential data leak from script execution. -
With this update, any backslash character present in the usernames in Git credentials is escaped with an additional backslash in the
.gitconfig
file.
-
With this update, the
finalizer
property of theEventListener
object is not necessary for cleaning up logging and config maps. - With this update, the default HTTP client associated with the event listener server is removed, and a custom HTTP client added. As a result, the timeouts have improved.
- With this update, the Triggers cluster role now works with owner references.
- With this update, the race condition in the event listener does not happen when multiple interceptors return extensions.
-
With this update, the
tkn pr delete
command does not delete the pipeline runs with theignore-running
flag.
- With this update, the Operator pods do not continue restarting when you modify any add-on parameters.
-
With this update, the
tkn serve
CLI pod is scheduled on infra nodes, if not configured in the subscription and config custom resources. - With this update, cluster tasks with specified versions are not deleted during upgrade.
4.1.6.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.1 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.
4.1.6.5.1. Fixed issues
- Before this update, upgrading the Red Hat OpenShift Pipelines Operator deleted the data in the database associated with Tekton Hub and installed a new database. With this update, an Operator upgrade preserves the data.
- Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles also can access the pipeline metrics.
-
Before this update, pipeline runs failed for pipelines containing tasks that emit large termination messages. The pipeline runs failed because the total size of termination messages of all containers in a pod cannot exceed 12 KB. With this update, the
place-tools
and step-init
initialization containers, which use the same image, are merged to reduce the number of containers running in each task’s pod. The solution reduces the chance of failed pipeline runs by minimizing the number of containers running in a task’s pod. However, it does not remove the limitation of the maximum allowed size of a termination message.
Before this update, attempts to access resource URLs directly from the Tekton Hub web console resulted in an Nginx
404
error. With this update, the Tekton Hub web console image is fixed to allow accessing resource URLs directly from the Tekton Hub web console. - Before this update, for each namespace the resource pruner job created a separate container to prune resources. With this update, the resource pruner job runs commands for all namespaces as a loop in one container.
4.1.6.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.2 is available on OpenShift Container Platform 4.9, 4.10, and the upcoming version.
4.1.6.6.1. Known issues
-
The
chains-config
config map for Tekton Chains in theopenshift-pipelines
namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator. Currently, there is no workaround for this issue.
4.1.6.6.2. Fixed issues
-
Before this update, tasks on Pipelines 1.7.1 failed on using
init
as the first argument, followed by two or more arguments. With this update, the flags are parsed correctly and the task runs are successful. Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to invalid role binding, with the following error message:
error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef
With this update, the Red Hat OpenShift Pipelines Operator installs with distinct role binding namespaces to avoid conflict with installation of other Operators.
Before this update, upgrading the Operator triggered a reset of the
signing-secrets
secret key for Tekton Chains to its default value. With this update, the custom secret key persists after you upgrade the Operator.NoteUpgrading to Red Hat OpenShift Pipelines 1.7.2 resets the key. However, when you upgrade to future releases, the key is expected to persist.
Before this update, all S2I build tasks failed with an error similar to the following message:
Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"
With this update, the
pipelines-scc
security context constraint (SCC) is compatible with theSETFCAP
capability necessary forBuildah
andS2I
cluster tasks. As a result, theBuildah
andS2I
build tasks can run successfully.To successfully run the
Buildah
cluster task andS2I
build tasks for applications written in various languages and frameworks, add the following snippet for appropriatesteps
objects such asbuild
andpush
:
securityContext:
  capabilities:
    add: ["SETFCAP"]
4.1.6.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.3 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.
4.1.6.7.1. Fixed issues
-
Before this update, the Operator failed when creating RBAC resources if any namespace was in a
Terminating
state. With this update, the Operator ignores namespaces in aTerminating
state and creates the RBAC resources. -
Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the
pipeline
service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates thepipeline
service account. As a result, secrets attached to thepipeline
service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly.
4.1.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.6 is available on OpenShift Container Platform 4.9.
4.1.7.1. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.6.
-
With this update, you can configure a pipeline or task
start
command to return a YAML or JSON-formatted string by using the--output <string>
, where<string>
isyaml
orjson
. Otherwise, without the--output
option, thestart
command returns a human-friendly message that is hard for other programs to parse. Returning a YAML or JSON-formatted string is useful for continuous integration (CI) environments. For example, after a resource is created, you can useyq
orjq
to parse the YAML or JSON-formatted message about the resource and wait until that resource is terminated without using theshowlog
option. -
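For example, a hedged sketch that starts a pipeline and prints the created pipeline run as YAML, which can then be parsed with yq or jq; the pipeline name is a placeholder:
$ tkn pipeline start <pipeline_name> --output yaml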
With this update, you can authenticate to a registry using the
auth.json
authentication file of Podman. For example, you can usetkn bundle push
to push to a remote registry using Podman instead of Docker CLI. -
With this update, if you use the
tkn [taskrun | pipelinerun] delete --all
command, you can preserve runs that are younger than a specified number of minutes by using the new--keep-since <minutes>
option. For example, to keep runs that are less than five minutes old, you entertkn [taskrun | pipelinerun] delete -all --keep-since 5
. -
With this update, when you delete task runs or pipeline runs, you can use the
--parent-resource
and--keep-since
options together. For example, thetkn pipelinerun delete --pipeline pipelinename --keep-since 5
command preserves pipeline runs whose parent resource is namedpipelinename
and whose age is five minutes or less. Thetkn tr delete -t <taskname> --keep-since 5
andtkn tr delete --clustertask <taskname> --keep-since 5
commands work similarly for task runs. -
This update adds support for the triggers resources to work with
v1beta1
resources.
-
This update adds an
ignore-running
option to thetkn pipelinerun delete
andtkn taskrun delete
commands. -
This update adds a
create
subcommand to thetkn task
andtkn clustertask
commands. -
With this update, when you use the
tkn pipelinerun delete --all
command, you can use the new--label <string>
option to filter the pipeline runs by label. Optionally, you can use the--label
option with=
and==
as equality operators, or!=
as an inequality operator. For example, thetkn pipelinerun delete --all --label asdf
andtkn pipelinerun delete --all --label==asdf
commands both delete all the pipeline runs that have theasdf
label. - With this update, you can fetch the version of installed Tekton components from the config map or, if the config map is not present, from the deployment controller.
-
With this update, triggers support the
feature-flags
andconfig-defaults
config map to configure feature flags and to set default values respectively. -
This update adds a new metric,
eventlistener_event_count
, that you can use to count events received by theEventListener
resource. This update adds
v1beta1
Go API types. With this update, triggers now support thev1beta1
API version.With the current release, the
v1alpha1
features are now deprecated and will be removed in a future release. Begin using thev1beta1
features instead.
In the current release, auto-pruning of resources is enabled by default. In addition, you can configure auto-pruning of task runs and pipeline runs for each namespace separately, by using the following new annotations:
-
operator.tekton.dev/prune.schedule
: If the value of this annotation is different from the value specified at theTektonConfig
custom resource definition, a new cron job in that namespace is created. -
operator.tekton.dev/prune.skip
: When set totrue
, the namespace for which it is configured will not be prunned. -
operator.tekton.dev/prune.resources
: This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to"pipelinerun"
. To prune multiple resources, such as task run and pipeline run, set this annotation to"taskrun, pipelinerun"
. -
operator.tekton.dev/prune.keep
: Use this annotation to retain a resource without prunning. operator.tekton.dev/prune.keep-since
: Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources which were created not more than five days ago, setkeep-since
to7200
.NoteThe
keep
andkeep-since
annotations are mutually exclusive. For any resource, you must configure only one of them.-
operator.tekton.dev/prune.strategy
: Set the value of this annotation to eitherkeep
orkeep-since
.
-
-
Administrators can disable the creation of the
pipeline
service account for the entire cluster, and prevent privilege escalation by misusing the associated SCC, which is very similar toanyuid
. -
You can now configure feature flags and components by using the
TektonConfig
custom resource (CR) and the CRs for individual components, such asTektonPipeline
andTektonTriggers
. This level of granularity helps customize and test alpha features such as the Tekton OCI bundle for individual components. -
You can now configure optional
Timeouts
field for thePipelineRun
resource. For example, you can configure timeouts separately for a pipeline run, each task run, and thefinally
tasks. -
The pods generated by the
TaskRun
resource now sets theactiveDeadlineSeconds
field of the pods. This enables OpenShift to consider them as terminating, and allows you to use specifically scopedResourceQuota
object for the pods. - You can use configmaps to eliminate metrics tags or labels type on a task run, pipeline run, task, and pipeline. In addition, you can configure different types of metrics for measuring duration, such as a histogram, gauge, or last value.
-
You can define requests and limits on a pod coherently, as Tekton now fully supports the
LimitRange
object by considering theMin
,Max
,Default
, andDefaultRequest
fields. The following alpha features are introduced:
A pipeline run can now stop after running the
finally
tasks, rather than the previous behavior of stopping the execution of all task run directly. This update adds the followingspec.status
values:-
StoppedRunFinally
will stop the currently running tasks after they are completed, and then run thefinally
tasks. -
CancelledRunFinally
will immediately cancel the running tasks, and then run thefinally
tasks. Cancelled
will retain the previous behavior provided by thePipelineRunCancelled
status.NoteThe
Cancelled
status replaces the deprecatedPipelineRunCancelled
status, which will be removed in thev1
version.
-
-
You can now use the
oc debug
command to put a task run into debug mode, which pauses the execution and allows you to inspect specific steps in a pod. -
When you set the
onError
field of a step tocontinue
, the exit code for the step is recorded and passed on to subsequent steps. However, the task run does not fail and the execution of the rest of the steps in the task continues. To retain the existing behavior, you can set the value of theonError
field tostopAndFail
. - Tasks can now accept more parameters than are actually used. When the alpha feature flag is enabled, the parameters can implicitly propagate to inlined specs. For example, an inlined task can access parameters of its parent pipeline run, without explicitly defining each parameter for the task.
-
If you enable the flag for the alpha features, the conditions under
When
expressions will only apply to the task with which it is directly associated, and not the dependents of the task. To apply theWhen
expressions to the associated task and its dependents, you must associate the expression with each dependent task separately. Note that, going forward, this will be the default behavior of theWhen
expressions in any new API versions of Tekton. The existing default behavior will be deprecated in favor of this update.
The current release enables you to configure node selection by specifying the
nodeSelector
andtolerations
values in theTektonConfig
custom resource (CR). The Operator adds these values to all the deployments that it creates.-
To configure node selection for the Operator’s controller and webhook deployment, you edit the
config.nodeSelector
andconfig.tolerations
fields in the specification for theSubscription
CR, after installing the Operator. -
To deploy the rest of the control plane pods of OpenShift Pipelines on an infrastructure node, update the
TektonConfig
CR with thenodeSelector
andtolerations
fields. The modifications are then applied to all the pods created by Operator.
-
4.1.7.2. Deprecated features
-
In CLI 0.21.0, support for all
v1alpha1
resources forclustertask
,task
,taskrun
,pipeline
, andpipelinerun
commands are deprecated. These resources are now deprecated and will be removed in a future release.
In Tekton Triggers v0.16.0, the redundant
status
label is removed from the metrics for theEventListener
resource.ImportantBreaking change: The
status
label has been removed from theeventlistener_http_duration_seconds_*
metric. Remove queries that are based on thestatus
label.-
With the current release, the
v1alpha1
features are now deprecated and will be removed in a future release. With this update, you can begin using thev1beta1
Go API types instead. Triggers now supports thev1beta1
API version. With the current release, the
EventListener
resource sends a response before the triggers finish processing.ImportantBreaking change: With this change, the
EventListener
resource stops responding with a201 Created
status code when it creates resources. Instead, it responds with a202 Accepted
response code.The current release removes the
podTemplate
field from theEventListener
resource.ImportantBreaking change: The
podTemplate
field, which was deprecated as part of #1100, has been removed.The current release removes the deprecated
replicas
field from the specification for theEventListener
resource.ImportantBreaking change: The deprecated
replicas
field has been removed.
In Red Hat OpenShift Pipelines 1.6, the values of
HOME="/tekton/home"
andworkingDir="/workspace"
are removed from the specification of theStep
objects.Instead, Red Hat OpenShift Pipelines sets
HOME
andworkingDir
to the values defined by the containers running theStep
objects. You can override these values in the specification of yourStep
objects.To use the older behavior, you can change the
disable-working-directory-overwrite
anddisable-home-env-overwrite
fields in theTektonConfig
CR tofalse
:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    disable-working-directory-overwrite: false
    disable-home-env-overwrite: false
...
ImportantThe
disable-working-directory-overwrite
anddisable-home-env-overwrite
fields in theTektonConfig
CR are now deprecated and will be removed in a future release.
4.1.7.3. Known issues
-
When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the
MAVEN_IMAGE
parameter value tomaven:3.6.3-adoptopenjdk-11
. -
On IBM Power Systems, IBM Z, and LinuxONE, the
s2i-dotnet
cluster task is unsupported. -
Before you install tasks based on the Tekton Catalog on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using
tkn hub
, verify if the task can be executed on these platforms. To check ifppc64le
ands390x
are listed in the "Platforms" section of the task information, you can run the following command:
tkn hub info task <name>
You cannot use the
nodejs:14-ubi8-minimal
image stream because doing so generates the following errors:
STEP 7: RUN /usr/libexec/s2i/assemble
/bin/sh: /usr/libexec/s2i/assemble: No such file or directory
subprocess exited with status 127
subprocess exited with status 127
error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
time="2021-11-04T13:05:26Z" level=error msg="exit status 127"
4.1.7.4. Fixed issues
-
The
tkn hub
command is now supported on IBM Power Systems, IBM Z, and LinuxONE.
-
Before this update, the terminal was not available after the user ran a
tkn
command, and the pipeline run was done, even ifretries
were specified. Specifying a timeout in the task run or pipeline run had no effect. This update fixes the issue so that the terminal is available after running the command. -
Before this update, running
tkn pipelinerun delete --all
would delete all resources. This update prevents the resources in the running state from getting deleted. -
Before this update, using the
tkn version --component=<component>
command did not return the component version. This update fixes the issue so that this command returns the component version. -
Before this update, when you used the
tkn pr logs
command, it displayed the pipelines output logs in the wrong task order. This update resolves the issue so that logs of completedPipelineRuns
are listed in the appropriateTaskRun
execution order.
-
Before this update, editing the specification of a running pipeline might prevent the pipeline run from stopping when it was complete. This update fixes the issue by fetching the definition only once and then using the specification stored in the status for verification. This change reduces the probability of a race condition when a
PipelineRun
or aTaskRun
refers to aPipeline
orTask
that changes while it is running. -
When
expression values can now have array parameter references, such as:values: [$(params.arrayParam[*])]
.
4.1.7.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.1
4.1.7.5.1. Known issues
After upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version, Pipelines might enter an inconsistent state where you are unable to perform any operations (create/delete/apply) on Tekton resources (tasks and pipelines). For example, while deleting a resource, you might encounter the following error:
Error from server (InternalError): Internal error occurred: failed calling webhook "validation.webhook.pipeline.tekton.dev": Post "https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s": service "tekton-pipelines-webhook" not found.
4.1.7.5.2. Fixed issues
The
SSL_CERT_DIR
environment variable (/tekton-custom-certs
) set by Red Hat OpenShift Pipelines will not override the following default system directories with certificate files:-
/etc/pki/tls/certs
-
/etc/ssl/certs
-
/system/etc/security/cacerts
-
- The Horizontal Pod Autoscaler can manage the replica count of deployments controlled by the Red Hat OpenShift Pipelines Operator. From this release onward, if the count is changed by an end user or an on-cluster agent, the Red Hat OpenShift Pipelines Operator will not reset the replica count of deployments managed by it. However, the replicas will be reset when you upgrade the Red Hat OpenShift Pipelines Operator.
-
The pod serving the
tkn
CLI will now be scheduled on nodes, based on the node selector and toleration limits specified in theTektonConfig
custom resource.
4.1.7.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.2
4.1.7.6.1. Known issues
-
When you create a new project, the creation of the
pipeline
service account is delayed, and removal of existing cluster tasks and pipeline templates takes more than 10 minutes.
4.1.7.6.2. Fixed issues
-
Before this update, multiple instances of Tekton installer sets were created for a pipeline after upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version. With this update, the Operator ensures that only one instance of each type of
TektonInstallerSet
exists after an upgrade. - Before this update, all the reconcilers in the Operator used the component version to decide resource recreation during an upgrade to Red Hat OpenShift Pipelines 1.6.1 from an older version. As a result, those resources were not recreated whose component versions did not change in the upgrade. With this update, the Operator uses the Operator version instead of the component version to decide resource recreation during an upgrade.
- Before this update, the pipelines webhook service was missing in the cluster after an upgrade. This was due to an upgrade deadlock on the config maps. With this update, a mechanism is added to disable webhook validation if the config maps are absent in the cluster. As a result, the pipelines webhook service persists in the cluster after an upgrade.
- Before this update, cron jobs for auto-pruning got recreated after any configuration change to the namespace. With this update, cron jobs for auto-pruning get recreated only if there is a relevant annotation change in the namespace.
The upstream version of Tekton Pipelines is revised to
v0.28.3
, which has the following fixes:-
Fix
PipelineRun
orTaskRun
objects to allow label or annotation propagation. For implicit params:
-
Do not apply the
PipelineSpec
parameters to theTaskRefs
object. -
Disable implicit param behavior for the
Pipeline
objects.
4.1.7.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.3
4.1.7.7.1. Fixed issues
Before this update, the Red Hat OpenShift Pipelines Operator installed pod security policies from components such as Pipelines and Triggers. However, the pod security policies shipped as part of the components were deprecated in an earlier release. With this update, the Operator stops installing pod security policies from components. As a result, the following upgrade paths are affected:
- Upgrading from Pipelines 1.6.1 or 1.6.2 to Pipelines 1.6.3 deletes the pod security policies, including those from the Pipelines and Triggers components.
Upgrading from Pipelines 1.5.x to 1.6.3 retains the pod security policies installed from components. As a cluster administrator, you can delete them manually.
NoteWhen you upgrade to future releases, the Red Hat OpenShift Pipelines Operator will automatically delete all obsolete pod security policies.
- Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles can also access the pipeline metrics.
- Before this update, role-based access control (RBAC) issues with the Pipelines Operator caused problems upgrading or installing components. This update improves the reliability and consistency of installing various Red Hat OpenShift Pipelines components.
-
Before this update, setting the
clusterTasks
andpipelineTemplates
fields tofalse
in theTektonConfig
CR slowed the removal of cluster tasks and pipeline templates. This update improves the speed of lifecycle management of Tekton resources such as cluster tasks and pipeline templates.
4.1.7.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.4
4.1.7.8.1. Known issues
After upgrading from Red Hat OpenShift Pipelines 1.5.2 to 1.6.4, accessing the event listener routes returns a
503
error.Workaround: Modify the target port in the YAML file for the event listener’s route.
Extract the route name for the relevant namespace.
$ oc get route -n <namespace>
Edit the route to modify the value of the
targetPort
field.$ oc edit route -n <namespace> <el-route_name>
Example: Existing event listener route
...
spec:
  host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com
  port:
    targetPort: 8000
  to:
    kind: Service
    name: el-event-listener-q8c3w5
    weight: 100
  wildcardPolicy: None
...
Example: Modified event listener route
...
spec:
  host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com
  port:
    targetPort: http-listener
  to:
    kind: Service
    name: el-event-listener-q8c3w5
    weight: 100
  wildcardPolicy: None
...
4.1.7.8.2. Fixed issues
-
Before this update, the Operator failed when creating RBAC resources if any namespace was in a
Terminating
state. With this update, the Operator ignores namespaces in aTerminating
state and creates the RBAC resources. - Before this update, task runs failed or restarted due to the absence of an annotation specifying the release version of the associated Tekton controller. With this update, the inclusion of the appropriate annotations is automated, and the tasks run without failures or restarts.
4.1.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.5
Red Hat OpenShift Pipelines General Availability (GA) 1.5 is now available on OpenShift Container Platform 4.8.
4.1.8.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
TP | Technology Preview |
GA | General Availability |
Note the following scope of support on the Red Hat Customer Portal for these features:
Feature | Version | Support Status |
---|---|---|
Pipelines | 0.24 | GA |
CLI | 0.19 | GA |
Catalog | 0.24 | GA |
Triggers | 0.14 | TP |
Pipeline resources | - | TP |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
4.1.8.2. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.5.
Pipeline runs and task runs will be automatically pruned by a cron job in the target namespace. The cron job uses the
IMAGE_JOB_PRUNER_TKN
environment variable to get the value oftkn image
. With this enhancement, the following fields are introduced to theTektonConfig
custom resource:

...
pruner:
  resources:
    - pipelinerun
    - taskrun
  schedule: "*/5 * * * *" # cron schedule
  keep: 2 # delete all keeping n
...
In OpenShift Container Platform, you can customize the installation of the Tekton Add-ons component by modifying the values of the new parameters
clusterTasks
andpipelineTemplates
in theTektonConfig
custom resource:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  addon:
    params:
      - name: clusterTasks
        value: "true"
      - name: pipelineTemplates
        value: "true"
...
The customization is allowed if you create the add-on using
TektonConfig
, or directly by using Tekton Add-ons. However, if the parameters are not passed, the controller adds parameters with default values.Note-
If the add-on is created using the
TektonConfig
custom resource, and you change the parameter values later in theAddon
custom resource, then the values in theTektonConfig
custom resource overwrite the changes. -
You can set the value of the
pipelineTemplates
parameter totrue
only when the value of theclusterTasks
parameter istrue
.
The
enableMetrics
parameter is added to theTektonConfig
custom resource. You can use it to disable the service monitor, which is part of Tekton Pipelines for OpenShift Container Platform.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  pipeline:
    params:
      - name: enableMetrics
        value: "true"
...
- Event listener OpenCensus metrics, which capture metrics at the process level, are added.
- Triggers now support label selectors; you can configure the triggers for an event listener using labels.
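A minimal sketch of an event listener that selects triggers by label, assuming the labelSelector field of the EventListener spec; the resource names and the label are illustrative:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener        # hypothetical name
spec:
  serviceAccountName: pipeline
  labelSelector:
    matchLabels:
      type: github-push        # only Trigger objects carrying this label are served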
The
ClusterInterceptor
custom resource definition for registering interceptors is added, which allows you to register newInterceptor
types that you can plug in. In addition, the following relevant changes are made:-
In the trigger specifications, you can configure interceptors using a new API that includes a
ref
field to refer to a cluster interceptor. In addition, you can use theparams
field to add parameters that pass on to the interceptors for processing, as shown in the sketch after this list.
The bundled interceptors CEL, GitHub, GitLab, and BitBucket, have been migrated. They are implemented using the new
ClusterInterceptor
custom resource definition. -
Core interceptors are migrated to the new format, and any new triggers created using the old syntax automatically switch to the new
ref
orparams
based syntax.
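A minimal sketch of the new ref and params interceptor syntax described in the list above; the trigger, binding, and template names, as well as the CEL filter, are illustrative:

triggers:
  - name: github-trigger            # hypothetical trigger
    interceptors:
      - ref:
          name: "cel"               # refers to the cel cluster interceptor
        params:
          - name: "filter"
            value: "body.ref == 'refs/heads/main'"
    bindings:
      - ref: github-binding         # hypothetical TriggerBinding
    template:
      ref: github-template          # hypothetical TriggerTemplate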
-
To disable prefixing the name of the task or step while displaying logs, use the
--prefix
option forlog
commands. -
To display the version of a specific component, use the new
--component
flag in thetkn version
command. -
The
tkn hub check-upgrade
command is added, and other commands are revised to be based on the pipeline version. In addition, catalog names are displayed in thesearch
command output. -
Support for optional workspaces is added to the
start
command. -
If the plugins are not present in the
plugins
directory, they are searched for in the current path.
tkn start [task | clustertask | pipeline]
command starts interactively and asks for theparams
value, even when the default parameters are specified. To stop the interactive prompts, pass the--use-param-defaults
flag at the time of invoking the command. For example:

$ tkn pipeline start build-and-deploy \
    -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-api \
    -p git-url=https://github.com/openshift/pipelines-vote-api.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api \
    --use-param-defaults
-
The
version
field is added in thetkn task describe
command. -
The option to automatically select resources such as
TriggerTemplate
, orTriggerBinding
, orClusterTriggerBinding
, orEventlistener
, is added in thedescribe
command, if only one is present. -
In the
tkn pr describe
command, a section for skipped tasks is added. -
Support for the
tkn clustertask logs
command is added.
The YAML merge and variable from
config.yaml
is removed. In addition, therelease.yaml
file can now be more easily consumed by tools such askustomize
andytt
. - The support for resource names to contain the dot character (".") is added.
-
The
hostAliases
array in thePodTemplate
specification is added to the pod-level override of hostname resolution. It is achieved by modifying the/etc/hosts
file. -
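A minimal sketch of a task run that sets hostAliases in its pod template; the IP address, hostname, and resource names are illustrative:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-with-host-alias     # hypothetical name
spec:
  taskRef:
    name: build-task              # hypothetical task
  podTemplate:
    hostAliases:
      - ip: "10.0.0.10"
        hostnames:
          - "registry.internal.example.com"   # resolved through /etc/hosts in the task pod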
A variable
$(tasks.status)
is introduced to access the aggregate execution status of tasks. - An entry-point binary build for Windows is added.
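A minimal sketch of the $(tasks.status) variable introduced above, consumed by a finally task; the task and parameter names are illustrative:

finally:
  - name: report-status             # hypothetical finally task
    params:
      - name: aggregate-status
        value: "$(tasks.status)"    # Succeeded, Failed, Completed, or None
    taskSpec:
      params:
        - name: aggregate-status
      steps:
        - name: print
          image: registry.access.redhat.com/ubi8/ubi-minimal
          script: |
            echo "Aggregate task status: $(params.aggregate-status)"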
4.1.8.3. Deprecated features
In the
when
expressions, support for fields written in PascalCase is removed. Thewhen
expressions only support fields written in lowercase.NoteIf you had applied a pipeline with
when
expressions in Tekton Pipelinesv0.16
(Operatorv1.2.x
), you have to reapply it.When you upgrade the Red Hat OpenShift Pipelines Operator to
v1.5
, theopenshift-client
and theopenshift-client-v-1-5-0
cluster tasks have theSCRIPT
parameter. However, theARGS
parameter and thegit
resource are removed from the specification of theopenshift-client
cluster task. This is a breaking change, and only those cluster tasks that do not have a specific version in thename
field of theClusterTask
resource upgrade seamlessly.To prevent the pipeline runs from breaking, use the
SCRIPT
parameter after the upgrade because it moves the values previously specified in theARGS
parameter into theSCRIPT
parameter of the cluster task. For example:

...
- name: deploy
  params:
    - name: SCRIPT
      value: oc rollout status <deployment-name>
  runAfter:
    - build
  taskRef:
    kind: ClusterTask
    name: openshift-client
...
When you upgrade from Red Hat OpenShift Pipelines Operator
v1.4
tov1.5
, the profile names in which theTektonConfig
custom resource is installed now change.Table 4.3. Profiles for TektonConfig custom resource Profiles in Pipelines 1.5 Corresponding profile in Pipelines 1.4 Installed Tekton components All (default profile)
All (default profile)
Pipelines, Triggers, Add-ons
Basic
Default
Pipelines, Triggers
Lite
Basic
Pipelines
NoteIf you used
profile: all
in theconfig
instance of theTektonConfig
custom resource, no change is necessary in the resource specification.However, if the installed Operator is either in the Default or the Basic profile before the upgrade, you must edit the
config
instance of theTektonConfig
custom resource after the upgrade. For example, if the configuration wasprofile: basic
before the upgrade, ensure that it isprofile: lite
after upgrading to Pipelines 1.5.The
disable-home-env-overwrite
anddisable-working-dir-overwrite
fields are now deprecated and will be removed in a future release. For this release, the default value of these flags is set totrue
for backward compatibility.NoteIn the next release (Red Hat OpenShift Pipelines 1.6), the
HOME
environment variable will not be automatically set to/tekton/home
, and the default working directory will not be set to/workspace
for task runs. These defaults collide with any value set by the image Dockerfile of the step.
The
ServiceType
andpodTemplate
fields are removed from theEventListener
spec. - The controller service account no longer requests cluster-wide permission to list and watch namespaces.
The status of the
EventListener
resource has a new condition calledReady
.NoteIn the future, the other status conditions for the
EventListener
resource will be deprecated in favor of theReady
status condition.-
The
eventListener
andnamespace
fields in theEventListener
response are deprecated. Use theeventListenerUID
field instead. The
replicas
field is deprecated from theEventListener
spec. Instead, thespec.replicas
field is moved tospec.resources.kubernetesResource.replicas
in theKubernetesResource
spec.NoteThe
replicas
field will be removed in a future release.-
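A minimal sketch showing the replica count expressed in its new location under the KubernetesResource spec; the listener name is illustrative:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: example-listener     # hypothetical name
spec:
  serviceAccountName: pipeline
  resources:
    kubernetesResource:
      replicas: 3            # formerly spec.replicas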
The old method of configuring the core interceptors is deprecated. However, it continues to work until it is removed in a future release. Instead, interceptors in a
Trigger
resource are now configured using a newref
andparams
based syntax. The resulting default webhook automatically switches the usage of the old syntax to the new syntax for new triggers.
Use
rbac.authorization.k8s.io/v1
instead of the deprecatedrbac.authorization.k8s.io/v1beta1
for theClusterRoleBinding
resource. -
In cluster roles, the cluster-wide write access to resources such as
serviceaccounts
,secrets
,configmaps
, andlimitranges
is removed. In addition, cluster-wide access to resources such asdeployments
,statefulsets
, anddeployment/finalizers
is removed.
The
image
custom resource definition in thecaching.internal.knative.dev
group is not used by Tekton anymore, and is excluded in this release.
4.1.8.4. Known issues
The git-cli cluster task is built off the alpine/git base image, which expects
/root
as the user’s home directory. However, this is not explicitly set in thegit-cli
cluster task.In Tekton, the default home directory is overwritten with
/tekton/home
for every step of a task, unless otherwise specified. This overwriting of the$HOME
environment variable of the base image causes thegit-cli
cluster task to fail.This issue is expected to be fixed in the upcoming releases. For Red Hat OpenShift Pipelines 1.5 and earlier versions, you can use any one of the following workarounds to avoid the failure of the
git-cli
cluster task:Set the
$HOME
environment variable in the steps, so that it is not overwritten.-
[OPTIONAL] If you installed Red Hat OpenShift Pipelines using the Operator, then clone the
git-cli
cluster task into a separate task. This approach ensures that the Operator does not overwrite the changes made to the cluster task. -
Execute the
oc edit clustertasks git-cli
command. Add the expected
HOME
environment variable to the YAML of the step:

...
steps:
  - name: git
    env:
      - name: HOME
        value: /root
    image: $(params.BASE_IMAGE)
    workingDir: $(workspaces.source.path)
...
WarningFor Red Hat OpenShift Pipelines installed by the Operator, if you do not clone the
git-cli
cluster task into a separate task before changing theHOME
environment variable, then the changes are overwritten during Operator reconciliation.
Disable overwriting the
HOME
environment variable in thefeature-flags
config map.-
Execute the
oc edit -n openshift-pipelines configmap feature-flags
command. Set the value of the
disable-home-env-overwrite
flag totrue
.Warning- If you installed Red Hat OpenShift Pipelines using the Operator, then the changes are overwritten during Operator reconciliation.
-
Modifying the default value of the
disable-home-env-overwrite
flag can break other tasks and cluster tasks, as it changes the default behavior for all tasks.
Use a different service account for the
git-cli
cluster task, as the overwriting of theHOME
environment variable happens when the default service account for pipelines is used.- Create a new service account.
- Link your Git secret to the service account you just created.
- Use the service account while executing a task or a pipeline.
-
On IBM Power Systems, IBM Z, and LinuxONE, the
s2i-dotnet
cluster task and thetkn hub
command are unsupported. -
When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the
MAVEN_IMAGE
parameter value tomaven:3.6.3-adoptopenjdk-11
.
4.1.8.5. Fixed issues
-
The
when
expressions indag
tasks are not allowed to specify the context variable accessing the execution status ($(tasks.<pipelineTask>.status)
) of any other task. -
Use Owner UIDs instead of Owner names, as it helps avoid race conditions created by deleting a
volumeClaimTemplate
PVC, in situations where aPipelineRun
resource is quickly deleted and then recreated. -
A new Dockerfile is added for
pullrequest-init
forbuild-base
image triggered by non-root users. -
When a pipeline or task is executed with the
-f
option and theparam
in its definition does not have atype
defined, a validation error is generated instead of the pipeline or task run failing silently. -
For the
tkn start [task | pipeline | clustertask]
commands, the description of the--workspace
flag is now consistent. - While parsing the parameters, if an empty array is encountered, the corresponding interactive help is displayed as an empty string now.
4.1.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.4
Red Hat OpenShift Pipelines General Availability (GA) 1.4 is now available on OpenShift Container Platform 4.7.
In addition to the stable and preview Operator channels, the Red Hat OpenShift Pipelines Operator 1.4.0 comes with the ocp-4.6, ocp-4.5, and ocp-4.4 deprecated channels. These deprecated channels and support for them will be removed in the following release of Red Hat OpenShift Pipelines.
4.1.9.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
TP | Technology Preview |
GA | General Availability |
Note the following scope of support on the Red Hat Customer Portal for these features:
Feature | Version | Support Status |
---|---|---|
Pipelines | 0.22 | GA |
CLI | 0.17 | GA |
Catalog | 0.22 | GA |
Triggers | 0.12 | TP |
Pipeline resources | - | TP |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
4.1.9.2. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.4.
The custom tasks have the following enhancements:
- Pipeline results can now refer to results produced by custom tasks.
- Custom tasks can now use workspaces, service accounts, and pod templates to build more complex custom tasks.
The
finally
task has the following enhancements:-
The
when
expressions are supported infinally
tasks, which provides efficient guarded execution and improved reusability of tasks. A
finally
task can be configured to consume the results of any task within the same pipeline.NoteSupport for
when
expressions andfinally
tasks are unavailable in the OpenShift Container Platform 4.7 web console.
-
Support for multiple secrets of the type
dockercfg
ordockerconfigjson
is added for authentication at runtime. -
Functionality to support sparse-checkout with the
git-clone
task is added. This enables you to clone only a subset of the repository as your local copy, and helps you to restrict the size of the cloned repositories. - You can create pipeline runs in a pending state without actually starting them. In clusters that are under heavy load, this allows Operators to have control over the start time of the pipeline runs.
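A minimal sketch of a pipeline run created in the pending state, assuming the PipelineRunPending value of the spec.status field; the run and pipeline names are illustrative:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run     # hypothetical name
spec:
  status: "PipelineRunPending"   # the run is created but not started
  pipelineRef:
    name: build-and-deploy       # hypothetical pipeline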
-
Ensure that you set the
SYSTEM_NAMESPACE
environment variable manually for the controller; this was previously set by default. -
A non-root user is now added to the build-base image of pipelines so that
git-init
can clone repositories as a non-root user. - Support to validate dependencies between resolved resources before a pipeline run starts is added. All result variables in the pipeline must be valid, and optional workspaces from a pipeline can only be passed to tasks expecting them for the pipeline to start running.
- The controller and webhook run as a non-root group, and their superfluous capabilities have been removed to make them more secure.
-
You can use the
tkn pr logs
command to see the log streams for retried task runs. -
You can use the
--clustertask
option in thetkn tr delete
command to delete all the task runs associated with a particular cluster task. -
Support for using Knative service with the
EventListener
resource is added by introducing a newcustomResource
field. - An error message is displayed when an event payload does not use the JSON format.
-
The source control interceptors such as GitLab, BitBucket, and GitHub, now use the new
InterceptorRequest
orInterceptorResponse
type interface. -
A new CEL function
marshalJSON
is implemented so that you can encode a JSON object or an array to a string. -
An HTTP handler for serving the CEL and the source control core interceptors is added. It packages four core interceptors into a single HTTP server that is deployed in the
tekton-pipelines
namespace. TheEventListener
object forwards events over the HTTP server to the interceptor. Each interceptor is available at a different path. For example, the CEL interceptor is available on the/cel
path. The
pipelines-scc
Security Context Constraint (SCC) is used with the defaultpipeline
service account for pipelines. This new SCC is similar to the anyuid SCC, but with a minor difference as defined in the YAML for the SCC of OpenShift Container Platform 4.7:

fsGroup:
  type: MustRunAs
4.1.9.3. Deprecated features
-
The
build-gcs
sub-type in the pipeline resource storage, and thegcs-fetcher
image, are not supported. -
In the
taskRun
field of cluster tasks, the labeltekton.dev/task
is removed. -
For webhooks, the value
v1beta1
corresponding to the fieldadmissionReviewVersions
is removed. -
The
creds-init
helper image for building and deploying is removed. In the triggers spec and binding, the deprecated field
template.name
is removed in favor oftemplate.ref
. You should update alleventListener
definitions to use theref
field.NoteUpgrade from Pipelines 1.3.x and earlier versions to Pipelines 1.4.0 breaks event listeners because of the unavailability of the
template.name
field. For such cases, use Pipelines 1.4.1 to avail the restoredtemplate.name
field.-
For
EventListener
custom resources/objects, the fieldsPodTemplate
andServiceType
are deprecated in favor ofResource
. - The deprecated spec style embedded bindings is removed.
-
The
spec
field is removed from thetriggerSpecBinding
. - The event ID representation is changed from a five-character random string to a UUID.
4.1.9.4. Known issues
- In the Developer perspective, the pipeline metrics and triggers features are available only on OpenShift Container Platform 4.7.6 or later versions.
-
On IBM Power Systems, IBM Z, and LinuxONE, the
tkn hub
command is not supported. -
When you run Maven and Jib Maven cluster tasks on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters, set the
MAVEN_IMAGE
parameter value tomaven:3.6.3-adoptopenjdk-11
Triggers throw an error resulting from improper handling of the JSON format if you have the following configuration in the trigger binding:
params:
  - name: github_json
    value: $(body)
To resolve the issue:
-
If you are using triggers v0.11.0 and above, use the
marshalJSON
CEL function, which takes a JSON object or array and returns the JSON encoding of that object or array as a string. If you are using older triggers version, add the following annotation in the trigger template:
annotations:
  triggers.tekton.dev/old-escape-quotes: "true"
-
If you are using triggers v0.11.0 and above, use the
- When upgrading from Pipelines 1.3.x to 1.4.x, you must recreate the routes.
4.1.9.5. Fixed issues
-
Previously, the
tekton.dev/task
label was removed from the task runs of cluster tasks, and thetekton.dev/clusterTask
label was introduced. The problems resulting from that change are resolved by fixing theclustertask describe
anddelete
commands. In addition, thelastrun
function for tasks is modified, to fix the issue of thetekton.dev/task
label being applied to the task runs of both tasks and cluster tasks in older versions of pipelines. -
When doing an interactive
tkn pipeline start pipelinename
, aPipelineResource
is created interactively. Thetkn p start
command prints the resource status if the resource status is notnil
. -
Previously, the
tekton.dev/task=name
label was removed from the task runs created from cluster tasks. This fix modifies thetkn clustertask start
command with the--last
flag to check for thetekton.dev/task=name
label in the created task runs. -
When a task uses an inline task specification, the corresponding task run now gets embedded in the pipeline when you run the
tkn pipeline describe
command, and the task name is returned as embedded. -
The
tkn version
command is fixed to display the version of the installed Tekton CLI tool, without a configuredkubeConfiguration namespace
or access to a cluster. -
If an argument is unexpected or more than one argument is used, the
tkn completion
command gives an error. -
Previously, pipeline runs with the
finally
tasks nested in a pipeline specification would lose thosefinally
tasks, when converted to thev1alpha1
version and restored back to thev1beta1
version. This error occurring during conversion is fixed to avoid potential data loss. Pipeline runs with thefinally
tasks nested in a pipeline specification are now serialized and stored in the alpha version, only to be deserialized later.
Previously, there was an error in the pod generation when a service account had the
secrets
field as{}
. The task runs failed withCouldntGetTask
because the GET request with an empty secret name returned an error, indicating that the resource name may not be empty. This issue is fixed by avoiding an empty secret name in thekubeclient
GET request. -
Pipelines with the
v1beta1
API versions can now be requested along with thev1alpha1
version, without losing thefinally
tasks. Applying the returnedv1alpha1
version will store the resource asv1beta1
, with thefinally
section restored to its original state. -
Previously, an unset
selfLink
field in the controller caused an error in the Kubernetes v1.20 clusters. As a temporary fix, theCloudEvent
source field is set to a value that matches the current source URI, without the value of the auto-populatedselfLink
field. -
Previously, a secret name with dots such as
gcr.io
led to a task run creation failure. This happened because of the secret name being used internally as part of a volume mount name. The volume mount name conforms to the RFC1123 DNS label and disallows dots as part of the name. This issue is fixed by replacing the dot with a dash that results in a readable name. -
Context variables are now validated in the
finally
tasks. -
Previously, when the task run reconciler was passed a task run that did not have a previous status update containing the name of the pod it created, the task run reconciler listed the pods associated with the task run. The task run reconciler used the labels of the task run, which were propagated to the pod, to find the pod. Changing these labels while the task run was running, caused the code to not find the existing pod. As a result, duplicate pods were created. This issue is fixed by changing the task run reconciler to only use the
tekton.dev/taskRun
Tekton-controlled label when finding the pod. - Previously, when a pipeline accepted an optional workspace and passed it to a pipeline task, the pipeline run reconciler stopped with an error if the workspace was not provided, even if a missing workspace binding is a valid state for an optional workspace. This issue is fixed by ensuring that the pipeline run reconciler does not fail to create a task run, even if an optional workspace is not provided.
- The sorted order of step statuses matches the order of step containers.
-
Previously, the task run status was set to
unknown
when a pod encountered theCreateContainerConfigError
reason, which meant that the task and the pipeline ran until the pod timed out. This issue is fixed by setting the task run status tofalse
, so that the task is set as failed when the pod encounters theCreateContainerConfigError
reason. -
Previously, pipeline results were resolved on the first reconciliation, after a pipeline run was completed. This could fail the resolution resulting in the
Succeeded
condition of the pipeline run being overwritten. As a result, the final status information was lost, potentially confusing any services watching the pipeline run conditions. This issue is fixed by moving the resolution of pipeline results to the end of a reconciliation, when the pipeline run is put into aSucceeded
orTrue
condition. - Execution status variable is now validated. This avoids validating task results while validating context variables to access execution status.
- Previously, a pipeline result that contained an invalid variable would be added to the pipeline run with the literal expression of the variable intact. Therefore, it was difficult to assess whether the results were populated correctly. This issue is fixed by filtering out the pipeline run results that reference failed task runs. Now, a pipeline result that contains an invalid variable will not be emitted by the pipeline run at all.
-
The
tkn eventlistener describe
command is fixed to avoid crashing without a template. It also displays the details about trigger references. -
Upgrades from Pipelines 1.3.x and earlier versions to Pipelines 1.4.0 break event listeners because of the unavailability of
template.name
. In Pipelines 1.4.1, thetemplate.name
has been restored to avoid breaking event listeners in triggers. -
In Pipelines 1.4.1, the
ConsoleQuickStart
custom resource has been updated to align with OpenShift Container Platform 4.7 capabilities and behavior.
4.1.10. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.3
4.1.10.1. New features
Red Hat OpenShift Pipelines Technology Preview (TP) 1.3 is now available on OpenShift Container Platform 4.7. Red Hat OpenShift Pipelines TP 1.3 is updated to support:
- Tekton Pipelines 0.19.0
-
Tekton
tkn
CLI 0.15.0 - Tekton Triggers 0.10.2
- cluster tasks based on Tekton Catalog 0.19.0
- IBM Power Systems on OpenShift Container Platform 4.7
- IBM Z and LinuxONE on OpenShift Container Platform 4.7
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.3.
4.1.10.1.1. Pipelines
- Tasks that build images, such as S2I and Buildah tasks, now emit a URL of the image built that includes the image SHA.
-
Conditions in pipeline tasks that reference custom tasks are disallowed because the
Condition
custom resource definition (CRD) has been deprecated. -
Variable expansion is now added in the
Task
CRD for the following fields:spec.steps[].imagePullPolicy
andspec.sidecar[].imagePullPolicy
. -
You can disable the built-in credential mechanism in Tekton by setting the
disable-creds-init
feature-flag totrue
. -
Resolved when expressions are now listed in the
Skipped Tasks
and theTask Runs
sections in theStatus
field of thePipelineRun
configuration. -
The
git init
command can now clone recursive submodules. -
A
Task
CR author can now specify a timeout for a step in theTask
spec. -
You can now base the entry point image on the
distroless/static:nonroot
image and give it a mode to copy itself to the destination, without relying on thecp
command being present in the base image. -
You can now use the configuration flag
require-git-ssh-secret-known-hosts
to disallow omitting known hosts in the Git SSH secret. When the flag value is set totrue
, you must include theknown_host
field in the Git SSH secret. The default value for the flag isfalse
. - The concept of optional workspaces is now introduced. A task or pipeline might declare a workspace optional and conditionally change their behavior based on its presence. A task run or pipeline run might also omit that workspace, thereby modifying the task or pipeline behavior. The default task run workspaces are not added in place of an omitted optional workspace.
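A minimal sketch of a task that declares an optional workspace and checks whether it was bound before using it; the workspace name and image are illustrative:

workspaces:
  - name: cache
    optional: true               # a run may omit this workspace
steps:
  - name: build
    image: registry.access.redhat.com/ubi8/ubi-minimal
    script: |
      if [ "$(workspaces.cache.bound)" = "true" ] ; then
        echo "Using cache at $(workspaces.cache.path)"
      else
        echo "No cache workspace provided"
      fi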
- Credentials initialization in Tekton now detects an SSH credential that is used with a non-SSH URL, and vice versa in Git pipeline resources, and logs a warning in the step containers.
- The task run controller emits a warning event if the affinity specified by the pod template is overwritten by the affinity assistant.
- The task run reconciler now records metrics for cloud events that are emitted once a task run is completed. This includes retries.
4.1.10.1.2. Pipelines CLI
-
Support for
--no-headers flag
is now added to the following commands:tkn condition list
,tkn triggerbinding list
,tkn eventlistener list
,tkn clustertask list
,tkn clustertriggerbinding list
. -
When used together, the
--last
or--use
options override the--prefix-name
and--timeout
options. -
The
tkn eventlistener logs
command is now added to view theEventListener
logs. -
The
tekton hub
commands are now integrated into thetkn
CLI. -
The
--nocolour
option is now changed to--no-color
. -
The
--all-namespaces
flag is added to the following commands:tkn triggertemplate list
,tkn condition list
,tkn triggerbinding list
,tkn eventlistener list
.
4.1.10.1.3. Triggers
-
You can now specify your resource information in the
EventListener
template. -
It is now mandatory for
EventListener
service accounts to have thelist
andwatch
verbs, in addition to theget
verb for all the triggers resources. This enables you to useListers
to fetch data fromEventListener
,Trigger
,TriggerBinding
,TriggerTemplate
, andClusterTriggerBinding
resources. You can use this feature to create aSink
object rather than specifying multiple informers, and directly make calls to the API server. -
A new
Interceptor
interface is added to support immutable input event bodies. Interceptors can now add data or fields to a newextensions
field, and cannot modify the input bodies making them immutable. The CEL interceptor uses this newInterceptor
interface. -
A
namespaceSelector
field is added to theEventListener
resource. Use it to specify the namespaces from where theEventListener
resource can fetch theTrigger
object for processing events. To use thenamespaceSelector
field, the service account for theEventListener
resource must have a cluster role. -
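A minimal sketch of an event listener restricted to specific namespaces through namespaceSelector; the namespace and service account names are illustrative:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: central-listener             # hypothetical name
spec:
  serviceAccountName: el-cluster-sa  # must be bound to a cluster role
  namespaceSelector:
    matchNames:
      - app-dev
      - app-stage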
The triggers
EventListener
resource now supports end-to-end secure connection to theeventlistener
pod. -
The escaping parameters behavior in the
TriggerTemplates
resource by replacing"
with\"
is now removed. -
A new
resources
field, supporting Kubernetes resources, is introduced as part of theEventListener
spec. - A new functionality for the CEL interceptor, with support for upper and lower-casing of ASCII strings, is added.
-
You can embed
TriggerBinding
resources by using thename
andvalue
fields in a trigger, or an event listener. -
The
PodSecurityPolicy
configuration is updated to run in restricted environments. It ensures that containers must run as non-root. In addition, the role-based access control for using the pod security policy is moved from cluster-scoped to namespace-scoped. This ensures that the triggers cannot use other pod security policies that are unrelated to a namespace. -
Support for embedded trigger templates is now added. You can either use the
name
field to refer to an embedded template or embed the template inside thespec
field.
4.1.10.2. Deprecated features
-
Pipeline templates that use
PipelineResources
CRDs are now deprecated and will be removed in a future release. -
The
template.name
field is deprecated in favor of thetemplate.ref
field and will be removed in a future release. -
The
-c
shorthand for the--check
command has been removed. In addition, globaltkn
flags are added to theversion
command.
4.1.10.3. Known issues
-
CEL overlays add fields to a new top-level
extensions
function, instead of modifying the incoming event body.TriggerBinding
resources can access values within this newextensions
function using the$(extensions.<key>)
syntax. Update your binding to use the$(extensions.<key>)
syntax instead of the$(body.<overlay-key>)
syntax. -
The escaping parameters behavior by replacing
"
with\"
is now removed. If you need to retain the old escaping parameters behavior, add thetekton.dev/old-escape-quotes: "true"
annotation to yourTriggerTemplate
specification. -
You can embed
TriggerBinding
resources by using thename
andvalue
fields inside a trigger or an event listener. However, you cannot specify bothname
andref
fields for a single binding. Use theref
field to refer to aTriggerBinding
resource and thename
field for embedded bindings. -
An interceptor cannot attempt to reference a
secret
outside the namespace of anEventListener
resource. You must include secrets in the namespace of the EventListener resource.
In Triggers 0.9.0 and later, if a body or header based
TriggerBinding
parameter is missing or malformed in an event payload, the default values are used instead of displaying an error. -
Tasks and pipelines created with
WhenExpression
objects using Tekton Pipelines 0.16.x must be reapplied to fix their JSON annotations. - When a pipeline accepts an optional workspace and gives it to a task, the pipeline run stalls if the workspace is not provided.
- To use the Buildah cluster task in a disconnected environment, ensure that the Dockerfile uses an internal image stream as the base image, and then use it in the same manner as any S2I cluster task.
4.1.10.4. Fixed issues
-
Extensions added by a CEL Interceptor are passed on to webhook interceptors by adding the
Extensions
field within the event body. -
The activity timeout for log readers is now configurable using the
LogOptions
field. However, the default behavior of timeout in 10 seconds is retained. -
The
log
command ignores the--follow
flag when a task run or pipeline run is complete, and reads available logs instead of live logs. -
References to the following Tekton resources:
EventListener
,TriggerBinding
,ClusterTriggerBinding
,Condition
, andTriggerTemplate
are now standardized and made consistent across all user-facing messages intkn
commands. -
Previously, if you started a canceled task run or pipeline run with the
--use-taskrun <canceled-task-run-name>
,--use-pipelinerun <canceled-pipeline-run-name>
or--last
flags, the new run would be canceled. This bug is now fixed. -
The
tkn pr desc
command is now enhanced to ensure that it does not fail in case of pipeline runs with conditions. -
When you delete a task run using the
tkn tr delete
command with the--task
option, and a cluster task exists with the same name, the task runs for the cluster task also get deleted. As a workaround, filter the task runs by using theTaskRefKind
field. -
The
tkn triggertemplate describe
command would display only part of theapiVersion
value in the output. For example, onlytriggers.tekton.dev
was displayed instead oftriggers.tekton.dev/v1alpha1
. This bug is now fixed. - The webhook, under certain conditions, would fail to acquire a lease and not function correctly. This bug is now fixed.
- Pipelines with when expressions created in v0.16.3 can now be run in v0.17.1 and later. After an upgrade, you do not need to reapply pipeline definitions created in previous versions because both the uppercase and lowercase first letters for the annotations are now supported.
-
By default, the
leader-election-ha
field is now enabled for high availability. When thedisable-ha
controller flag is set totrue
, it disables high availability support. - Issues with duplicate cloud events are now fixed. Cloud events are now sent only when a condition changes the state, reason, or message.
-
When a service account name is missing from a
PipelineRun
orTaskRun
spec, the controller uses the service account name from theconfig-defaults
config map. If the service account name is also missing in theconfig-defaults
config map, the controller now sets it todefault
in the spec. - Validation for compatibility with the affinity assistant is now supported when the same persistent volume claim is used for multiple workspaces, but with different subpaths.
4.1.11. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.2
4.1.11.1. New features
Red Hat OpenShift Pipelines Technology Preview (TP) 1.2 is now available on OpenShift Container Platform 4.6. Red Hat OpenShift Pipelines TP 1.2 is updated to support:
- Tekton Pipelines 0.16.3
-
Tekton
tkn
CLI 0.13.1 - Tekton Triggers 0.8.1
- cluster tasks based on Tekton Catalog 0.16
- IBM Power Systems on OpenShift Container Platform 4.6
- IBM Z and LinuxONE on OpenShift Container Platform 4.6
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.2.
4.1.11.1.1. Pipelines
This release of Red Hat OpenShift Pipelines adds support for a disconnected installation.
NoteInstallations in restricted environments are currently not supported on IBM Power Systems, IBM Z, and LinuxONE.
-
You can now use the
when
field, instead ofconditions
resource, to run a task only when certain criteria are met. The key components ofWhenExpression
resources areInput
,Operator
, andValues
. If all the when expressions evaluate toTrue
, then the task is run. If any of the when expressions evaluate toFalse
, the task is skipped. - Step statuses are now updated if a task run is canceled or times out.
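A minimal sketch of the when field described above, guarding a pipeline task; the parameter and task names are illustrative:

tasks:
  - name: deploy                        # hypothetical pipeline task
    when:
      - input: "$(params.environment)"
        operator: in
        values: ["production"]
    taskRef:
      name: deploy-task                 # hypothetical task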
-
Support for Git Large File Storage (LFS) is now available to build the base image used by
git-init
. -
You can now use the
taskSpec
field to specify metadata, such as labels and annotations, when a task is embedded in a pipeline. -
Cloud events are now supported by pipeline runs. Retries with
backoff
are now enabled for cloud events sent by the cloud event pipeline resource. -
You can now set a default
Workspace
configuration for any workspace that aTask
resource declares, but that aTaskRun
resource does not explicitly provide. -
Support is available for namespace variable interpolation for the
PipelineRun
namespace andTaskRun
namespace. -
Validation for
TaskRun
objects is now added to check that not more than one persistent volume claim workspace is used when aTaskRun
resource is associated with an Affinity Assistant. If more than one persistent volume claim workspace is used, the task run fails with aTaskRunValidationFailed
condition. Note that by default, the Affinity Assistant is disabled in Red Hat OpenShift Pipelines, so you will need to enable the assistant to use it.
4.1.11.1.2. Pipelines CLI
The
tkn task describe
,tkn taskrun describe
,tkn clustertask describe
,tkn pipeline describe
, andtkn pipelinerun describe
commands now:-
Automatically select the
Task
,TaskRun
,ClusterTask
,Pipeline
andPipelineRun
resource, respectively, if only one of them is present. -
Display the results of the
Task
,TaskRun
,ClusterTask
,Pipeline
andPipelineRun
resource in their outputs, respectively. -
Display workspaces declared in the
Task
,TaskRun
,ClusterTask
,Pipeline
andPipelineRun
resource in their outputs, respectively.
-
You can now use the
--prefix-name
option with thetkn clustertask start
command to specify a prefix for the name of a task run. -
Interactive mode support has now been provided to the
tkn clustertask start
command. -
You can now specify
PodTemplate
properties supported by pipelines using local or remote file definitions forTaskRun
andPipelineRun
objects. -
You can now use the
--use-param-defaults
option with thetkn clustertask start
command to use the default values set in theClusterTask
configuration and create the task run. -
The
--use-param-defaults
flag for thetkn pipeline start
command now prompts the interactive mode if the default values have not been specified for some of the parameters.
4.1.11.1.3. Triggers
-
The Common Expression Language (CEL) function named
parseYAML
has been added to parse a YAML string into a map of strings. - Error messages for parsing CEL expressions have been improved to make them more granular while evaluating expressions and when parsing the hook body for creating the evaluation environment.
- Support is now available for marshaling boolean values and maps if they are used as the values of expressions in a CEL overlay mechanism.
The following fields have been added to the
EventListener
object:-
The
replicas
field enables the event listener to run more than one pod by specifying the number of replicas in the YAML file. -
The
NodeSelector
field enables theEventListener
object to schedule the event listener pod to a specific node.
-
Webhook interceptors can now parse the
EventListener-Request-URL
header to extract parameters from the original request URL being handled by the event listener. - Annotations from the event listener can now be propagated to the deployment, services, and other pods. Note that custom annotations on services or deployment are overwritten, and hence, must be added to the event listener annotations so that they are propagated.
-
Proper validation for replicas in the
EventListener
specification is now available for cases when a user specifies thespec.replicas
values asnegative
orzero
. -
You can now specify the
TriggerCRD
object inside theEventListener
spec as a reference using theTriggerRef
field to create theTriggerCRD
object separately and then bind it inside theEventListener
spec. -
Validation and defaults for the
TriggerCRD
object are now available.
4.1.11.2. Deprecated features
-
$(params)
parameters are now removed from thetriggertemplate
resource and replaced by$(tt.params)
to avoid confusion between theresourcetemplate
andtriggertemplate
resource parameters. -
The
ServiceAccount
reference of the optionalEventListenerTrigger
-based authentication level has changed from an object reference to aServiceAccountName
string. This ensures that theServiceAccount
reference is in the same namespace as theEventListenerTrigger
object. -
The
Conditions
custom resource definition (CRD) is now deprecated; use theWhenExpressions
CRD instead. -
The
PipelineRun.Spec.ServiceAccountNames
object is being deprecated and replaced by thePipelineRun.Spec.TaskRunSpec[].ServiceAccountName
object.
4.1.11.3. Known issues
- This release of Red Hat OpenShift Pipelines adds support for a disconnected installation. However, some images used by the cluster tasks must be mirrored for them to work in disconnected clusters.
-
Pipelines in the
openshift
namespace are not deleted after you uninstall the Red Hat OpenShift Pipelines Operator. Use theoc delete pipelines -n openshift --all
command to delete the pipelines. Uninstalling the Red Hat OpenShift Pipelines Operator does not remove the event listeners.
As a workaround, to remove the
EventListener
andPod
CRDs:Edit the
EventListener
object with theforegroundDeletion
finalizers:$ oc patch el/<eventlistener_name> -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge
For example:
$ oc patch el/github-listener-interceptor -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge
Delete the
EventListener
CRD:$ oc patch crd/eventlisteners.triggers.tekton.dev -p '{"metadata":{"finalizers":[]}}' --type=merge
When you run a multi-arch container image task without command specification on an IBM Power Systems (ppc64le) or IBM Z (s390x) cluster, the
TaskRun
resource fails with the following error:Error executing command: fork/exec /bin/bash: exec format error
As a workaround, use an architecture specific container image or specify the sha256 digest to point to the correct architecture. To get the sha256 digest enter:
$ skopeo inspect --raw <image_name>| jq '.manifests[] | select(.platform.architecture == "<architecture>") | .digest'
4.1.11.4. Fixed issues
- A simple syntax validation to check the CEL filter, overlays in the Webhook validator, and the expressions in the interceptor has now been added.
- Triggers no longer overwrite annotations set on the underlying deployment and service objects.
-
Previously, an event listener would stop accepting events. This fix adds an idle timeout of 120 seconds for the
EventListener
sink to resolve this issue. -
Previously, canceling a pipeline run with a
Failed(Canceled)
state gave a success message. This has been fixed to display an error instead. -
The
tkn eventlistener list
command now provides the status of the listed event listeners, thus enabling you to easily identify the available ones. -
Consistent error messages are now displayed for the
triggers list
andtriggers describe
commands when triggers are not installed or when a resource cannot be found. -
Previously, a large number of idle connections would build up during cloud event delivery. The
DisableKeepAlives: true
parameter was added to thecloudeventclient
config to fix this issue. Thus, a new connection is set up for every cloud event. -
Previously, the
creds-init
code would write empty files to the disk even if credentials of a given type were not provided. This fix modifies thecreds-init
code to write files for only those credentials that have actually been mounted from correctly annotated secrets.
4.1.12. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.1
4.1.12.1. New features
Red Hat OpenShift Pipelines Technology Preview (TP) 1.1 is now available on OpenShift Container Platform 4.5. Red Hat OpenShift Pipelines TP 1.1 is updated to support:
- Tekton Pipelines 0.14.3
-
Tekton
tkn
CLI 0.11.0 - Tekton Triggers 0.6.1
- cluster tasks based on Tekton Catalog 0.14
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.1.
4.1.12.1.1. Pipelines
- Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in OpenShift Pipelines, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the Understanding OpenShift Pipelines section.
Workspace support for volume claim templates has been added:
- The volume claim template for a pipeline run and task run can now be added as a volume source for workspaces. The tekton-controller then creates a persistent volume claim (PVC) using the template that is seen as a PVC for all task runs in the pipeline. Thus you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks.
- Support to find the name of the PVC when a volume claim template is used as a volume source is now available using variable substitution.
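A minimal sketch of a pipeline run workspace backed by a volume claim template; the resource names and storage size are illustrative:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-run              # hypothetical name
spec:
  pipelineRef:
    name: build-pipeline       # hypothetical pipeline
  workspaces:
    - name: shared-data
      volumeClaimTemplate:     # a PVC is created from this template for the run
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi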
Support for improving audits:
-
The
PipelineRun.Status
field now contains the status of every task run in the pipeline and the pipeline specification used to instantiate a pipeline run to monitor the progress of the pipeline run. -
Pipeline results have been added to the pipeline specification and
PipelineRun
status. -
The
TaskRun.Status
field now contains the exact task specification used to instantiate theTaskRun
resource.
- Support to apply the default parameter to conditions.
-
A task run created by referencing a cluster task now adds the
tekton.dev/clusterTask
label instead of thetekton.dev/task
label. -
The kube config writer now adds the
ClientKeyData
and theClientCertificateData
configurations in the resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator task. -
The names of the
feature-flags
and theconfig-defaults
config maps are now customizable. - Support for the host network in the pod template used by the task run is now available.
- An Affinity Assistant is now available to support node affinity in task runs that share workspace volume. By default, this is disabled on OpenShift Pipelines.
-
The pod template has been updated to specify
imagePullSecrets
to identify secrets that the container runtime should use to authorize container image pulls when starting a pod. - Support for emitting warning events from the task run controller if the controller fails to update the task run.
- Standard or recommended k8s labels have been added to all resources to identify resources belonging to an application or component.
-
The
Entrypoint
process is now notified for signals and these signals are then propagated using a dedicated PID Group of theEntrypoint
process. - The pod template can now be set on a task level at runtime using task run specs.
Support for emitting Kubernetes events:
-
The controller now emits events for additional task run lifecycle events -
taskrun started
andtaskrun running
. - The pipeline run controller now emits an event every time a pipeline starts.
- In addition to the default Kubernetes events, support for cloud events for task runs is now available. The controller can be configured to send any task run events, such as create, started, and failed, as cloud events.
-
Support for using the
$context.<task|taskRun|pipeline|pipelineRun>.name
variable to reference the appropriate name when in pipeline runs and task runs. - Validation for pipeline run parameters is now available to ensure that all the parameters required by the pipeline are provided by the pipeline run. This also allows pipeline runs to provide extra parameters in addition to the required parameters.
-
You can now specify tasks within a pipeline that will always execute before the pipeline exits, either after finishing all tasks successfully or after a task in the pipeline failed, using the
finally
field in the pipeline YAML file. -
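A minimal sketch of the finally field in a pipeline specification; the task names are illustrative:

spec:
  tasks:
    - name: build
      taskRef:
        name: build-task       # hypothetical task
  finally:
    - name: cleanup            # always runs after all tasks finish or fail
      taskRef:
        name: cleanup-task     # hypothetical task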
The
git-clone
cluster task is now available.
4.1.12.1.2. Pipelines CLI
-
Support for embedded trigger binding is now available to the
tkn eventlistener describe
command. - Support to recommend subcommands and make suggestions if an incorrect subcommand is used.
-
The
tkn task describe
command now auto selects the task if only one task is present in the pipeline. -
You can now start a task using default parameter values by specifying the
--use-param-defaults
flag in thetkn task start
command. -
You can now specify a volume claim template for pipeline runs or task runs using the
--workspace
option with thetkn pipeline start
ortkn task start
commands. -
The
tkn pipelinerun logs
command now displays logs for the final tasks listed in thefinally
section. -
Interactive mode support has now been provided to the
tkn task start
command and thedescribe
subcommand for the followingtkn
resources:pipeline
,pipelinerun
,task
,taskrun
,clustertask
, andpipelineresource
. -
The
tkn version
command now displays the version of the triggers installed in the cluster. -
The
tkn pipeline describe
command now displays parameter values and timeouts specified for tasks used in the pipeline. -
Support added for the
--last
option for thetkn pipelinerun describe
and thetkn taskrun describe
commands to describe the most recent pipeline run or task run, respectively. -
The
tkn pipeline describe
command now displays the conditions applicable to the tasks in the pipeline. -
You can now use the
--no-headers
and--all-namespaces
flags with thetkn resource list
command.
4.1.12.1.3. Triggers
The following Common Expression Language (CEL) functions are now available:
-
parseURL
to parse and extract portions of a URL -
parseJSON
to parse JSON value types embedded in a string in thepayload
field of thedeployment
webhook
-
- A new interceptor for webhooks from Bitbucket has been added.
-
Event listeners now display the
Address URL
and theAvailable status
as additional fields when listed with thekubectl get
command. -
Trigger template params now use the
$(tt.params.<paramName>)
syntax instead of$(params.<paramName>)
to reduce the confusion between trigger template and resource templates params. -
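A minimal sketch of the $(tt.params.<paramName>) syntax inside a trigger template; the resource names are illustrative:

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: pipeline-template            # hypothetical name
spec:
  params:
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-run-
      spec:
        pipelineRef:
          name: build-pipeline       # hypothetical pipeline
        params:
          - name: revision
            value: $(tt.params.git-revision)   # trigger template parameter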
You can now add
tolerations
in theEventListener
CRD to ensure that event listeners are deployed with the same configuration even if all nodes are tainted due to security or management issues. -
You can now add a Readiness Probe for event listener Deployment at
URL/live
. -
Support for embedding
TriggerBinding
specifications in event listener triggers is now added. -
Trigger resources are now annotated with the recommended
app.kubernetes.io
labels.
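As an illustration of the new CEL functions, the following is a minimal sketch of an event listener trigger snippet that filters events with a CEL interceptor; the trigger name, the referenced binding and template, and the filter conditions are assumptions for illustration only:

triggers:
  - name: github-push-trigger              # hypothetical trigger name
    interceptors:
      - cel:
          filter: >-
            body.repository.url.parseURL().host == 'github.com' &&
            body.ref == 'refs/heads/main'
    bindings:
      - ref: vote-app                      # hypothetical TriggerBinding
    template:
      name: vote-app                       # hypothetical TriggerTemplate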
4.1.12.2. Deprecated features
The following items are deprecated in this release:
-
The
--namespace
or -n flags for all cluster-wide commands, including the clustertask and clustertriggerbinding commands, are deprecated and will be removed in a future release.
The
name
field intriggers.bindings
within an event listener has been deprecated in favor of theref
field and will be removed in a future release. -
Variable interpolation in trigger templates using
$(params)
has been deprecated in favor of using$(tt.params)
to reduce confusion with the pipeline variable interpolation syntax. The$(params.<paramName>)
syntax will be removed in a future release. -
The
tekton.dev/task
label is deprecated on cluster tasks. -
The
TaskRun.Status.ResourceResults.ResourceRef
field is deprecated and will be removed. -
The
tkn pipeline create
,tkn task create
, andtkn resource create -f
subcommands have been removed. -
Namespace validation has been removed from
tkn
commands. -
The default timeout of
1h
and the-t
flag for thetkn ct start
command have been removed. -
The
s2i
cluster task has been deprecated.
4.1.12.3. Known issues
- Conditions do not support workspaces.
-
The
--workspace
option and the interactive mode is not supported for thetkn clustertask start
command. -
Backward compatibility support for the $(params.<paramName>) syntax forces you to use trigger templates with pipeline-specific params, because the trigger's webhook is unable to differentiate trigger params from pipeline params.
Pipeline metrics report incorrect values when you run a promQL query for
tekton_taskrun_count
andtekton_taskrun_duration_seconds_count
. -
Pipeline runs and task runs continue to be in the Running and Running(Pending) states, respectively, even when a nonexistent PVC name is given to a workspace.
4.1.12.4. Fixed issues
-
Previously, the
tkn task delete <name> --trs
command would delete both the task and cluster task if the name of the task and cluster task were the same. With this fix, the command deletes only the task runs that are created by the task<name>
. -
Previously, the
tkn pr delete -p <name> --keep 2
command would disregard the-p
flag when used with the--keep
flag and would delete all the pipeline runs except the latest two. With this fix, the command deletes only the pipeline runs that are created by the pipeline<name>
, except for the latest two. -
The
tkn triggertemplate describe
output now displays resource templates in a table format instead of YAML format. -
Previously, the
buildah
cluster task failed when a new user was added to a container. With this fix, the issue has been resolved.
4.1.13. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.0
4.1.13.1. New features
Red Hat OpenShift Pipelines Technology Preview (TP) 1.0 is now available on OpenShift Container Platform 4.4. Red Hat OpenShift Pipelines TP 1.0 is updated to support:
- Tekton Pipelines 0.11.3
-
Tekton
tkn
CLI 0.9.0 - Tekton Triggers 0.4.0
- cluster tasks based on Tekton Catalog 0.11
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.0.
4.1.13.1.1. Pipelines
- Support for v1beta1 API Version.
- Support for an improved limit range. Previously, limit range was specified exclusively for the task run and the pipeline run. Now there is no need to explicitly specify the limit range. The minimum limit range across the namespace is used.
- Support for sharing data between tasks using task results and task params.
-
Pipelines can now be configured to not overwrite the
HOME
environment variable and the working directory of steps. -
Similar to task steps,
sidecars
now support script mode. -
You can now specify a different scheduler name in task run
podTemplate
resource. - Support for variable substitution using Star Array Notation.
- Tekton controller can now be configured to monitor an individual namespace.
- A new description field is now added to the specification of pipelines, tasks, cluster tasks, resources, and conditions; see the sketch after this list.
- Addition of proxy parameters to Git pipeline resources.
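For example, a minimal sketch of a task that uses the new description field; the task name and step are hypothetical:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task                    # hypothetical name
spec:
  description: >-
    Prints a greeting. The description field documents what the
    task does directly in its specification.
  steps:
    - name: greet
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: echo "hello"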
4.1.13.1.2. Pipelines CLI
-
The
describe
subcommand is now added for the followingtkn
resources:EventListener
,Condition
,TriggerTemplate
,ClusterTask
, and TriggerBinding
. -
Support added for
v1beta1
to the following resources along with backward compatibility forv1alpha1
:ClusterTask
,Task
,Pipeline
,PipelineRun
, andTaskRun
. The following commands can now list output from all namespaces using the
--all-namespaces
flag option:tkn task list
,tkn pipeline list
,tkn taskrun list
,tkn pipelinerun list
The output of these commands is also enhanced to display information without headers using the
--no-headers
flag option.-
You can now start a pipeline using default parameter values by specifying
--use-param-defaults
flag in the tkn pipeline start
command. -
Support for workspace is now added to
tkn pipeline start
andtkn task start
commands. -
A new
clustertriggerbinding
command is now added with the following subcommands:describe
,delete
, andlist
. -
You can now directly start a pipeline run using a local or remote
yaml
file. -
The
describe
subcommand now displays an enhanced and detailed output. With the addition of new fields, such asdescription
,timeout
,param description
, andsidecar status
, the command output now provides more detailed information about a specifictkn
resource. -
The
tkn task log
command now displays logs directly if only one task is present in the namespace.
4.1.13.1.3. Triggers
-
Triggers can now create both
v1alpha1
andv1beta1
pipeline resources. -
Support for new Common Expression Language (CEL) interceptor function -
compareSecret
. This function securely compares strings to secrets in CEL expressions. - Support for authentication and authorization at the event listener trigger level.
4.1.13.2. Deprecated features
The following items are deprecated in this release:
The environment variable
$HOME
, and variableworkingDir
in theSteps
specification are deprecated and might be changed in a future release. Currently in aStep
container, theHOME
andworkingDir
variables are overwritten to/tekton/home
and/workspace
variables, respectively.In a later release, these two fields will not be modified, and will be set to values defined in the container image and the
Task
YAML. For this release, use thedisable-home-env-overwrite
anddisable-working-directory-overwrite
flags to disable overwriting of theHOME
andworkingDir
variables.-
The following commands are deprecated and might be removed in the future release:
tkn pipeline create
,tkn task create
. -
The
-f
flag with thetkn resource create
command is now deprecated. It might be removed in the future release. -
The
-t
flag and the--timeout
flag (with seconds format) for thetkn clustertask create
command are now deprecated. Only duration timeout format is now supported, for example1h30s
. These deprecated flags might be removed in the future release.
4.1.13.3. Known issues
- If you are upgrading from an older version of Red Hat OpenShift Pipelines, you must delete your existing deployments before upgrading to Red Hat OpenShift Pipelines version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the Red Hat OpenShift Pipelines Operator. For more details, see the uninstalling Red Hat OpenShift Pipelines section.
-
Submitting the same
v1alpha1
tasks more than once results in an error. Use theoc replace
command instead ofoc apply
when re-submitting av1alpha1
task. The
buildah
cluster task does not work when a new user is added to a container.When the Operator is installed, the
--storage-driver
flag for thebuildah
cluster task is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage-driver results in the failure of thebuildah
cluster task with the following error:
useradd: /etc/passwd.8: lock file already used
useradd: cannot lock /etc/passwd; try again later.
As a workaround, manually set the
--storage-driver
flag value tooverlay
in thebuildah-task.yaml
file:Login to your cluster as a
cluster-admin
:$ oc login -u <login> -p <password> https://openshift.example.com:6443
Use the
oc edit
command to editbuildah
cluster task:$ oc edit clustertask buildah
The current version of the
buildah
clustertask YAML file opens in the editor set by yourEDITOR
environment variable.Under the
Steps
field, locate the followingcommand
field:command: ['buildah', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--layers', '-f', '$(params.DOCKERFILE)', '-t', '$(resources.outputs.image.url)', '$(params.CONTEXT)']
Replace the
command
field with the following:command: ['buildah', '--storage-driver=overlay', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--no-cache', '-f', '$(params.DOCKERFILE)', '-t', '$(params.IMAGE)', '$(params.CONTEXT)']
- Save the file and exit.
Alternatively, you can also modify the
buildah
cluster task YAML file directly on the web console by navigating to Pipelines → Cluster Tasks → buildah. Select Edit Cluster Task from the Actions menu and replace the command
field as shown in the previous procedure.
4.1.13.4. Fixed issues
-
Previously, the
DeploymentConfig
task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the pipeline to fail. With this fix, thedeploy task
command is now replaced with theoc rollout status
command which waits for the in-progress deployment to finish. -
Support for
APP_NAME
parameter is now added in pipeline templates. -
Previously, the pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image pipeline resources instead of the user provided
IMAGE_NAME
parameter. - All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI).
-
Previously, when the pipeline was installed in a namespace other than
tekton-pipelines
, thetkn version
command displayed the pipeline version asunknown
. With this fix, thetkn version
command now displays the correct pipeline version in any namespace. -
The
-c
flag is no longer supported for thetkn version
command. - Non-admin users can now list the cluster trigger bindings.
-
The event listener
CompareSecret
function is now fixed for the CEL Interceptor. -
The
list
,describe
, andstart
subcommands for tasks and cluster tasks now correctly display the output in case a task and cluster task have the same name. - Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
-
In the
tekton-pipelines
namespace, the timeouts of all task runs and pipeline runs are now set to the value ofdefault-timeout-minutes
field using the config map. - Previously, the pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.
4.2. Understanding OpenShift Pipelines
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
4.2.1. Key features
- Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
- Red Hat OpenShift Pipelines are designed for decentralized teams that work on microservice-based architecture.
- Red Hat OpenShift Pipelines use standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on-demand.
- You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
- You can use the OpenShift Container Platform web console Developer perspective to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces.
4.2.2. OpenShift Pipeline Concepts
This guide provides a detailed view of the various pipeline concepts.
4.2.2.1. Tasks
Tasks are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function of inputs and outputs. A task can run individually or as a part of a pipeline. Tasks are reusable and can be used in multiple pipelines.
Steps are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets.
The following example shows the apply-manifests
task.
apiVersion: tekton.dev/v1beta1 1
kind: Task 2
metadata:
  name: apply-manifests 3
spec: 4
  workspaces:
  - name: source
  params:
  - name: manifest_dir
    description: The directory in source that contains yaml manifests
    type: string
    default: "k8s"
  steps:
    - name: apply
      image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
      workingDir: /workspace/source
      command: ["/bin/bash", "-c"]
      args:
        - |-
          echo Applying manifests in $(params.manifest_dir) directory
          oc apply -f $(params.manifest_dir)
          echo -----------------------------------
This task starts the pod and runs a container inside that pod using the specified image to run the specified commands.
Starting with Pipelines 1.6, the following defaults from the step YAML file are removed:
-
The
HOME
environment variable does not default to the/tekton/home
directory -
The
workingDir
field does not default to the/workspace
directory
Instead, the container for the step defines the HOME
environment variable and the workingDir
field. However, you can override the default values by specifying the custom values in the YAML file for the step.
As a temporary measure, to maintain backward compatibility with the older Pipelines versions, you can set the following fields in the TektonConfig
custom resource definition to false
:
spec:
  pipeline:
    disable-working-directory-overwrite: false
    disable-home-env-overwrite: false
4.2.2.2. When expression
When expressions guard task execution by setting criteria for the execution of tasks within a pipeline. They contain a list of components that allows a task to run only when certain criteria are met. When expressions are also supported in the final set of tasks that are specified using the finally
field in the pipeline YAML file.
The key components of a when expression are as follows:
-
input
: Specifies static inputs or variables such as a parameter, task result, and execution status. You must enter a valid input. If you do not enter a valid input, its value defaults to an empty string. -
operator
: Specifies the relationship of an input to a set ofvalues
. Enterin
ornotin
as your operator values. -
values
: Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace.
The declared when expressions are evaluated before the task is run. If the value of a when expression is True
, the task is run. If the value of a when expression is False
, the task is skipped.
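For instance, a minimal sketch of a when expression that guards a task; the task names and the parameter are assumed for illustration:

tasks:
  - name: deploy-app                      # hypothetical task name
    when:
      - input: "$(params.environment)"    # hypothetical parameter
        operator: in
        values: ["staging", "production"]
    taskRef:
      name: deploy                        # hypothetical referenced task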
You can use the when expressions in various use cases. For example, whether:
- The result of a previous task is as expected.
- A file in a Git repository has changed in the previous commits.
- An image exists in the registry.
- An optional workspace is available.
The following example shows the when expressions for a pipeline run. The pipeline run executes the create-file task only if the path parameter is README.md, and executes the echo-file-exists task only if the exists result from the check-file task is yes.
apiVersion: tekton.dev/v1beta1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: serviceAccountName: 'pipeline' pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: "$(params.path)" operator: in values: ["README.md"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch $(workspaces.source.path)/README.md' - name: check-file params: - name: path value: "$(params.path)" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f $(workspaces.source.path)/$(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: "$(tasks.check-file.results.exists)" operator: in values: ["yes"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' ... - name: task-should-be-skipped-1 when: 4 - input: "$(params.path)" operator: notin values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 ... finally: - name: finally-task-should-be-executed when: 5 - input: "$(tasks.echo-file-exists.status)" operator: in values: ["Succeeded"] - input: "$(tasks.status)" operator: in values: ["Succeeded"] - input: "$(tasks.check-file.results.exists)" operator: in values: ["yes"] - input: "$(params.path)" operator: in values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi
- 1
- Specifies the type of Kubernetes object. In this example,
PipelineRun
. - 2
- Task
create-file
used in the Pipeline. - 3
when
expression that specifies to execute theecho-file-exists
task only if theexists
result from thecheck-file
task isyes
.- 4
when
expression that specifies to skip thetask-should-be-skipped-1
task only if thepath
parameter isREADME.md
.- 5
when
expression that specifies to execute thefinally-task-should-be-executed
task only if the execution status of theecho-file-exists
task and the task status isSucceeded
, theexists
result from thecheck-file
task isyes
, and thepath
parameter isREADME.md
.
The Pipeline Run details page of the OpenShift Container Platform web console shows the status of the tasks and when expressions as follows:
- All the criteria are met: Tasks and the when expression symbol, which is represented by a diamond shape, are green.
- Any one of the criteria is not met: The task is skipped. Skipped tasks and the when expression symbol are grey.
- None of the criteria are met: The task is skipped. Skipped tasks and the when expression symbol are grey.
- Task run fails: Failed tasks and the when expression symbol are red.
4.2.2.3. Finally tasks
The finally tasks are the final set of tasks specified using the finally field in the pipeline YAML file. The finally tasks always execute, irrespective of whether the pipeline run completes successfully. The finally tasks are executed in parallel after all the pipeline tasks are run, before the corresponding pipeline exits.
You can configure a finally
task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task is run. It is executed in parallel with other final tasks after all the non-final tasks are executed.
The following example shows a code snippet of the clone-cleanup-workspace
pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After executing the pipeline tasks, the cleanup
task specified in the finally
section of the pipeline YAML file cleans up the workspace.
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: $(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! $(params.commit) ]]; then exit 1 fi
- 1
- Unique name of the Pipeline.
- 2
- The shared workspace where the git repository is cloned.
- 3
- The task to clone the application repository to the shared workspace.
- 4
- The task to clean-up the shared workspace.
- 5
- A reference to the task that is to be executed in the TaskRun.
- 6
- A shared storage volume that a Task in a Pipeline needs at runtime to receive input or provide output.
- 7
- A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value.
- 8
- Embedded task definition.
4.2.2.4. TaskRun
A TaskRun instantiates a Task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a PipelineRun for each Task in a pipeline.
A Task consists of one or more Steps that execute container images, and each container image performs a specific piece of build work. A TaskRun executes the Steps in a Task in the specified order, until all Steps execute successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each Task in a Pipeline.
The following example shows a TaskRun that runs the apply-manifests
Task with the relevant input parameters:
apiVersion: tekton.dev/v1beta1 1
kind: TaskRun 2
metadata:
  name: apply-manifests-taskrun 3
spec: 4
  serviceAccountName: pipeline
  taskRef: 5
    kind: Task
    name: apply-manifests
  workspaces: 6
  - name: source
    persistentVolumeClaim:
      claimName: source-pvc
- 1
- TaskRun API version
v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
TaskRun
. - 3
- Unique name to identify this TaskRun.
- 4
- Definition of the TaskRun. For this TaskRun, the Task and the required workspace are specified.
- 5
- Name of the Task reference used for this TaskRun. This TaskRun executes the
apply-manifests
Task. - 6
- Workspace used by the TaskRun.
4.2.2.5. Pipelines
A Pipeline is a collection of Task
resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks.
A Pipeline
resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each Pipeline
resource definition must contain at least one Task
resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include Conditions, Workspaces, Parameters, or Resources depending on the application requirements.
The following example shows the build-and-deploy
pipeline, which builds an application image from a Git repository using the buildah
ClusterTask
resource:
apiVersion: tekton.dev/v1beta1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: "pipelines-1.10" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: $(params.git-url) - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: $(params.git-revision) - name: build-image 8 taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: "false" - name: IMAGE value: $(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: $(params.deployment-name) - name: IMAGE value: $(params.IMAGE) runAfter: - apply-manifests
- 1
- Pipeline API version
v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
Pipeline
. - 3
- Unique name of this Pipeline.
- 4
- Specifies the definition and structure of the Pipeline.
- 5
- Workspaces used across all the Tasks in the Pipeline.
- 6
- Parameters used across all the Tasks in the Pipeline.
- 7
- Specifies the list of Tasks used in the Pipeline.
- 8
- Task
build-image
, which uses thebuildah
ClusterTask to build application images from a given Git repository. - 9
- Task
apply-manifests
, which uses a user-defined Task with the same name. - 10
- Specifies the sequence in which Tasks are run in a Pipeline. In this example, the
apply-manifests
Task is run only after thebuild-image
Task is completed.
The Red Hat OpenShift Pipelines Operator installs the Buildah cluster task and creates the pipeline
service account with sufficient permission to build and push an image. The Buildah cluster task can fail when associated with a different service account with insufficient permissions.
4.2.2.6. PipelineRun
A PipelineRun
is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow.
A pipeline run is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run.
The pipeline runs the tasks sequentially until they are complete or a task fails. The status field tracks the progress of each task run and stores it for monitoring and auditing purposes.
The following example runs the build-and-deploy
pipeline with relevant resources and parameters:
apiVersion: tekton.dev/v1beta1 1
kind: PipelineRun 2
metadata:
  name: build-deploy-api-pipelinerun 3
spec:
  pipelineRef:
    name: build-and-deploy 4
  params: 5
  - name: deployment-name
    value: vote-api
  - name: git-url
    value: https://github.com/openshift-pipelines/vote-api.git
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api
  workspaces: 6
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
- 1
- Pipeline run API version
v1beta1
. - 2
- The type of Kubernetes object. In this example,
PipelineRun
. - 3
- Unique name to identify this pipeline run.
- 4
- Name of the pipeline to be run. In this example,
build-and-deploy
. - 5
- The list of parameters required to run the pipeline.
- 6
- Workspace used by the pipeline run.
Additional resources
4.2.2.7. Workspaces
It is recommended that you use Workspaces instead of PipelineResources in OpenShift Pipelines, as PipelineResources are difficult to debug, limited in scope, and make Tasks less reusable.
Workspaces declare shared storage volumes that a Task in a Pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, Workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A Task or Pipeline declares the Workspace and you must provide the specific location details of the volume. It is then mounted into that Workspace in a TaskRun or a PipelineRun. This separation of volume declaration from runtime storage volumes makes the Tasks reusable, flexible, and independent of the user environment.
With Workspaces, you can:
- Store Task inputs and outputs
- Share data among Tasks
- Use it as a mount point for credentials held in Secrets
- Use it as a mount point for configurations held in ConfigMaps
- Use it as a mount point for common tools shared by an organization
- Create a cache of build artifacts that speed up jobs
You can specify Workspaces in the TaskRun or PipelineRun using:
- A read-only ConfigMap or Secret
- An existing PersistentVolumeClaim shared with other Tasks
- A PersistentVolumeClaim from a provided VolumeClaimTemplate
- An emptyDir that is discarded when the TaskRun completes
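For example, a minimal sketch of a task run that binds a declared workspace to an emptyDir volume; the task run and task names are assumptions:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-taskrun            # hypothetical name
spec:
  taskRef:
    name: example-task             # hypothetical task that declares a "source" workspace
  workspaces:
    - name: source
      emptyDir: {}                 # temporary volume discarded when the task run completes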
The following example shows a code snippet of the build-and-deploy
Pipeline, which declares a shared-workspace
Workspace for the build-image
and apply-manifests
Tasks as defined in the Pipeline.
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: ... tasks: 2 - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: "false" - name: IMAGE value: $(params.IMAGE) workspaces: 3 - name: source 4 workspace: shared-workspace 5 runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image ...
- 1
- List of Workspaces shared between the Tasks defined in the Pipeline. A Pipeline can define as many Workspaces as required. In this example, only one Workspace named
shared-workspace
is declared. - 2
- Definition of Tasks used in the Pipeline. This snippet defines two Tasks,
build-image
andapply-manifests
, which share a common Workspace. - 3
- List of Workspaces used in the
build-image
Task. A Task definition can include as many Workspaces as it requires. However, it is recommended that a Task uses at most one writable Workspace. - 4
- Name that uniquely identifies the Workspace used in the Task. This Task uses one Workspace named
source
. - 5
- Name of the Pipeline Workspace used by the Task. Note that the Workspace
source
in turn uses the Pipeline Workspace namedshared-workspace
. - 6
- List of Workspaces used in the
apply-manifests
Task. Note that this Task shares thesource
Workspace with thebuild-image
Task.
Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you.
The following code snippet of the build-deploy-api-pipelinerun
PipelineRun uses a volume claim template to create a persistent volume claim for defining the storage volume for the shared-workspace
Workspace used in the build-and-deploy
Pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-deploy-api-pipelinerun
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  ...
  workspaces: 1
  - name: shared-workspace 2
    volumeClaimTemplate: 3
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
- 1
- Specifies the list of Pipeline Workspaces for which volume binding will be provided in the PipelineRun.
- 2
- The name of the Workspace in the Pipeline for which the volume is being provided.
- 3
- Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace.
4.2.2.8. Triggers
Use Triggers in conjunction with pipelines to create a full-fledged CI/CD system where Kubernetes resources define the entire CI/CD execution. Triggers capture the external events, such as a Git pull request, and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources and instantiate the pipeline.
For example, you define a CI/CD workflow using Red Hat OpenShift Pipelines for your application. The pipeline must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes.
Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system:
The
TriggerBinding
resource extracts the fields from an event payload and stores them as parameters.The following example shows a code snippet of the
TriggerBinding
resource, which extracts the Git repository information from the received event payload:
apiVersion: triggers.tekton.dev/v1beta1 1
kind: TriggerBinding 2
metadata:
  name: vote-app 3
spec:
  params: 4
  - name: git-repo-url
    value: $(body.repository.url)
  - name: git-repo-name
    value: $(body.repository.name)
  - name: git-revision
    value: $(body.head_commit.id)
- 1
- The API version of the
TriggerBinding
resource. In this example,v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
TriggerBinding
. - 3
- Unique name to identify the
TriggerBinding
resource. - 4
- List of parameters which will be extracted from the received event payload and passed to the
TriggerTemplate
resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload.
The
TriggerTemplate
resource acts as a standard for the way resources must be created. It specifies the way parameterized data from theTriggerBinding
resource should be used. A trigger template receives input from the trigger binding, and then performs a series of actions that results in creation of new pipeline resources, and initiation of a new pipeline run.The following example shows a code snippet of a
TriggerTemplate
resource, which creates a pipeline run using the Git repository information received from theTriggerBinding
resource you just created:apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: build-deploy-$(tt.params.git-repo-name)-$(uid) spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: $(tt.params.git-repo-name) - name: git-url value: $(tt.params.git-repo-url) - name: git-revision value: $(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi
- 1
- The API version of the
TriggerTemplate
resource. In this example,v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
TriggerTemplate
. - 3
- Unique name to identify the
TriggerTemplate
resource. - 4
- Parameters supplied by the
TriggerBinding
resource. - 5
- List of templates that specify the way resources must be created using the parameters received through the
TriggerBinding
orEventListener
resources.
The
Trigger
resource combines theTriggerBinding
andTriggerTemplate
resources, and optionally, theinterceptors
event processor.Interceptors process all the events for a specific platform that runs before the
TriggerBinding
resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use secret for event verification. Once the event data passes through an interceptor, it then goes to the trigger before you pass the payload data to the trigger binding. You can also use an interceptor to modify the behavior of the associated trigger referenced in theEventListener
specification.The following example shows a code snippet of a
Trigger
resource, namedvote-trigger
that connects theTriggerBinding
andTriggerTemplate
resources, and theinterceptors
event processor.apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: serviceAccountName: pipeline 4 interceptors: - ref: name: "github" 5 params: 6 - name: "secretRef" value: secretName: github-secret secretKey: secretToken - name: "eventTypes" value: ["push"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: "1234567"
- 1
- The API version of the
Trigger
resource. In this example,v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
Trigger
. - 3
- Unique name to identify the
Trigger
resource. - 4
- Service account name to be used.
- 5
- Interceptor name to be referenced. In this example,
github
. - 6
- Desired parameters to be specified.
- 7
- Name of the
TriggerBinding
resource to be connected to theTriggerTemplate
resource. - 8
- Name of the
TriggerTemplate
resource to be connected to theTriggerBinding
resource. - 9
- Secret to be used to verify events.
The
EventListener
resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from eachTriggerBinding
resource, and then processes this data to create Kubernetes resources as specified by the correspondingTriggerTemplate
resource. TheEventListener
resource also performs lightweight event processing or basic filtering on the payload using eventinterceptors
, which identify the type of payload and optionally modify it. Currently, pipeline triggers support five types of interceptors: Webhook Interceptors, GitHub Interceptors, GitLab Interceptors, Bitbucket Interceptors, and Common Expression Language (CEL) Interceptors.The following example shows an
EventListener
resource, which references theTrigger
resource namedvote-trigger
.
apiVersion: triggers.tekton.dev/v1beta1 1
kind: EventListener 2
metadata:
  name: vote-app 3
spec:
  serviceAccountName: pipeline 4
  triggers:
    - triggerRef: vote-trigger 5
- 1
- The API version of the
EventListener
resource. In this example,v1beta1
. - 2
- Specifies the type of Kubernetes object. In this example,
EventListener
. - 3
- Unique name to identify the
EventListener
resource. - 4
- Service account name to be used.
- 5
- Name of the
Trigger
resource referenced by theEventListener
resource.
4.2.3. Additional resources
- For information on installing pipelines, see Installing OpenShift Pipelines.
- For more details on creating custom CI/CD solutions, see Creating applications with CI/CD Pipelines.
- For more details on re-encrypt TLS termination, see Re-encryption Termination.
- For more details on secured routes, see the Secured routes section.
4.3. Installing OpenShift Pipelines
This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster.
Prerequisites
-
You have access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions. -
You have installed
oc
CLI. -
You have installed OpenShift Pipelines (
tkn
) CLI on your local system.
4.3.1. Installing the Red Hat OpenShift Pipelines Operator in the web console
You can install Red Hat OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the Red Hat OpenShift Pipelines Operator, the custom resources (CRs) required for the pipelines configuration are automatically installed along with the Operator.
The default Operator custom resource definition (CRD) config.operator.tekton.dev
is now replaced by tektonconfigs.operator.tekton.dev
. In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev
, tektontriggers.operator.tekton.dev
and tektonaddons.operator.tekton.dev
.
If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev
on your cluster with an instance of tektonconfigs.operator.tekton.dev
and additional objects of the other CRDs as necessary.
If you manually changed your existing installation, such as changing the target namespace in the config.operator.tekton.dev CRD instance by making changes to the resource name - cluster field, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator.
The Red Hat OpenShift Pipelines Operator now provides the option to choose the components that you want to install by specifying profiles as part of the TektonConfig
CR. The TektonConfig
CR is automatically installed when the Operator is installed. The supported profiles are:
- Lite: This installs only Tekton Pipelines.
- Basic: This installs Tekton Pipelines and Tekton Triggers.
-
All: This is the default profile used when the
TektonConfig
CR is installed. This profile installs all of the Tekton components: Tekton Pipelines, Tekton Triggers, and Tekton Addons (which include the ClusterTasks, ClusterTriggerBindings, ConsoleCLIDownload, ConsoleQuickStart, and ConsoleYAMLSample resources).
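For illustration, a minimal sketch of selecting a profile in the TektonConfig CR; treat the exact field values as assumptions and check the CR that the Operator installed on your cluster:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: basic                       # "lite", "basic", or "all"
  targetNamespace: openshift-pipelines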
Procedure
-
In the Administrator perspective of the web console, navigate to Operators
OperatorHub. -
Use the Filter by keyword box to search for
Red Hat OpenShift Pipelines
Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile. - Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install.
On the Install Operator page:
-
Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default
openshift-operators
namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. - Select Automatic for the Approval Strategy. This ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
Select an Update Channel.
-
The
pipelines-<version>
channel is the default channel to install the Red Hat OpenShift Pipelines Operator. For example, the default channel to install the Red Hat OpenShift Pipelines Operator version1.7
ispipelines-1.7
. The
latest
channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator.NoteThe
preview
andstable
channels will be deprecated and removed in a future release.
-
The
-
Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default
Click Install. You will see the Operator listed on the Installed Operators page.
NoteThe Operator is installed automatically into the
openshift-operators
namespace.Verify that the Status is set to Succeeded Up to date to confirm successful installation of Red Hat OpenShift Pipelines Operator.
WarningThe success status may show as Succeeded Up to date even if installation of other components is in-progress. Therefore, it is important to verify the installation manually in the terminal.
Verify that all components of the Red Hat OpenShift Pipelines Operator were installed successfully. Log in to the cluster from the terminal, and run the following command:
$ oc get tektonconfig config
Example output
NAME     VERSION   READY   REASON
config   1.9.2     True
If the READY condition is True, the Operator and its components have been installed successfully.
Additionally, check the components' versions by running the following command:
$ oc get tektonpipeline,tektontrigger,tektonaddon,pac
Example output
NAME                                          VERSION   READY   REASON
tektonpipeline.operator.tekton.dev/pipeline   v0.41.1   True

NAME                                        VERSION   READY   REASON
tektontrigger.operator.tekton.dev/trigger   v0.22.2   True

NAME                                    VERSION   READY   REASON
tektonaddon.operator.tekton.dev/addon   1.9.2     True

NAME                                                              VERSION   READY   REASON
openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code   v0.15.5   True
4.3.2. Installing the OpenShift Pipelines Operator using the CLI
You can install Red Hat OpenShift Pipelines Operator from the OperatorHub using the CLI.
Procedure
Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example,
sub.yaml
:Example Subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: <channel name> 1
  name: openshift-pipelines-operator-rh 2
  source: redhat-operators 3
  sourceNamespace: openshift-marketplace 4
- 1
- The channel name of the Operator. The
pipelines-<version>
channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version1.7
ispipelines-1.7
. Thelatest
channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. - 2
- Name of the Operator to subscribe to.
- 3
- Name of the CatalogSource that provides the Operator.
- 4
- Namespace of the CatalogSource. Use
openshift-marketplace
for the default OperatorHub CatalogSources.
Create the Subscription object:
$ oc apply -f sub.yaml
The Red Hat OpenShift Pipelines Operator is now installed in the default target namespace
openshift-operators
.
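Optionally, you can confirm that the subscription and its resolved cluster service version exist; a sketch of the commands:

$ oc get subscription openshift-pipelines-operator -n openshift-operators
$ oc get csv -n openshift-operators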
4.3.3. Red Hat OpenShift Pipelines Operator in a restricted environment
The Red Hat OpenShift Pipelines Operator enables support for installation of pipelines in a restricted network environment.
The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster
proxy object. It also sets the proxy environment variables in the TektonPipelines
, TektonTriggers
, Controllers
, Webhooks
, and Operator Proxy Webhook
resources.
By default, the proxy webhook is disabled for the openshift-pipelines
namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true
label to the namespace
object.
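For example, one way to add the label with the CLI; the namespace name is a placeholder:

$ oc label namespace <namespace> operator.tekton.dev/disable-proxy=true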
4.3.4. Additional resources
- You can learn more about installing Operators on OpenShift Container Platform in the adding Operators to a cluster section.
- To install Tekton Chains using the Red Hat OpenShift Pipelines Operator, see Using Tekton Chains for Red Hat OpenShift Pipelines supply chain security.
- To install and deploy in-cluster Tekton Hub, see Using Tekton Hub with Red Hat OpenShift Pipelines.
For more information on using pipelines in a restricted environment, see:
4.4. Uninstalling OpenShift Pipelines
Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps:
- Delete the Custom Resources (CRs) that were added by default when you installed the Red Hat OpenShift Pipelines Operator.
Delete the CRs of the optional components such as Tekton Hub that depend on the Operator.
CautionIf you uninstall the Operator without removing the CRs of optional components, you cannot remove them later.
- Uninstall the Red Hat OpenShift Pipelines Operator.
Uninstalling only the Operator will not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed.
4.4.1. Deleting the Red Hat OpenShift Pipelines components and Custom Resources
Delete the Custom Resources (CRs) created by default during installation of the Red Hat OpenShift Pipelines Operator.
Procedure
-
In the Administrator perspective of the web console, navigate to Administration
Custom Resource Definition. -
Type
config.operator.tekton.dev
in the Filter by name box to search for the Red Hat OpenShift Pipelines Operator CRs. - Click CRD Config to see the Custom Resource Definition Details page.
Click the Actions drop-down menu and select Delete Custom Resource Definition.
NoteDeleting the CRs will delete the Red Hat OpenShift Pipelines components, and all the Tasks and Pipelines on the cluster will be lost.
- Click Delete to confirm the deletion of the CRs.
Repeat the procedure to find and remove CRs of optional components such as Tekton Hub before uninstalling the Operator. If you uninstall the Operator without removing the CRs of optional components, you cannot remove them later.
4.4.2. Uninstalling the Red Hat OpenShift Pipelines Operator
You can uninstall the Red Hat OpenShift Pipelines Operator by using the Administrator perspective in the web console.
Procedure
-
From the Operators
OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator. - Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed.
- In the Red Hat OpenShift Pipelines Operator description page, click Uninstall.
Additional resources
- You can learn more about uninstalling Operators on OpenShift Container Platform in the deleting Operators from a cluster section.
4.5. Creating CI/CD solutions for applications using OpenShift Pipelines
With Red Hat OpenShift Pipelines, you can create a customized CI/CD solution to build, test, and deploy your application.
To create a full-fledged, self-serving CI/CD pipeline for an application, perform the following tasks:
- Create custom tasks, or install existing reusable tasks.
- Create and define the delivery pipeline for your application.
Provide a storage volume or filesystem that is attached to a workspace for the pipeline execution, using one of the following approaches:
- Specify a volume claim template that creates a persistent volume claim
- Specify a persistent volume claim
-
Create a
PipelineRun
object to instantiate and invoke the pipeline. - Add triggers to capture events in the source repository.
This section uses the pipelines-tutorial
example to demonstrate the preceding tasks. The example uses a simple application which consists of:
-
A front-end interface,
pipelines-vote-ui
, with the source code in thepipelines-vote-ui
Git repository. -
A back-end interface,
pipelines-vote-api
, with the source code in thepipelines-vote-api
Git repository. -
The
apply-manifests
andupdate-deployment
tasks in thepipelines-tutorial
Git repository.
4.5.1. Prerequisites
- You have access to an OpenShift Container Platform cluster.
- You have installed OpenShift Pipelines using the Red Hat OpenShift Pipelines Operator listed in the OpenShift OperatorHub. After it is installed, it is applicable to the entire cluster.
- You have installed OpenShift Pipelines CLI.
-
You have forked the front-end
pipelines-vote-ui
and back-endpipelines-vote-api
Git repositories using your GitHub ID, and have administrator access to these repositories. -
Optional: You have cloned the
pipelines-tutorial
Git repository.
4.5.2. Creating a project and checking your pipeline service account
Procedure
Log in to your OpenShift Container Platform cluster:
$ oc login -u <login> -p <password> https://openshift.example.com:6443
Create a project for the sample application. For this example workflow, create the
pipelines-tutorial
project:$ oc new-project pipelines-tutorial
NoteIf you create a project with a different name, be sure to update the resource URLs used in the example with your project name.
View the
pipeline
service account:Red Hat OpenShift Pipelines Operator adds and configures a service account named
pipeline
that has sufficient permissions to build and push an image. This service account is used by thePipelineRun
object.$ oc get serviceaccount pipeline
4.5.3. Creating pipeline tasks
Procedure
Install the
apply-manifests
andupdate-deployment
task resources from thepipelines-tutorial
repository, which contains a list of reusable tasks for pipelines:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/01_apply_manifest_task.yaml $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/02_update_deployment_task.yaml
Use the
tkn task list
command to list the tasks you created:$ tkn task list
The output verifies that the
apply-manifests
andupdate-deployment
task resources were created:NAME DESCRIPTION AGE apply-manifests 1 minute ago update-deployment 48 seconds ago
Use the
tkn clustertasks list
command to list the Operator-installed additional cluster tasks such asbuildah
ands2i-python
:NoteTo use the
buildah
cluster task in a restricted environment, you must ensure that the Dockerfile uses an internal image stream as the base image.$ tkn clustertasks list
The output lists the Operator-installed
ClusterTask
resources:NAME DESCRIPTION AGE buildah 1 day ago git-clone 1 day ago s2i-python 1 day ago tkn 1 day ago
Additional resources
4.5.4. Assembling a pipeline
A pipeline represents a CI/CD flow and is defined by the tasks to be executed. It is designed to be generic and reusable in multiple applications and environments.
A pipeline specifies how the tasks interact with each other and their order of execution using the from
and runAfter
parameters. It uses the workspaces
field to specify one or more volumes that each task in the pipeline requires during execution.
In this section, you will create a pipeline that takes the source code of the application from GitHub, and then builds and deploys it on OpenShift Container Platform.
The pipeline performs the following tasks for the back-end application pipelines-vote-api
and front-end application pipelines-vote-ui
:
-
Clones the source code of the application from the Git repository by referring to the
git-url
andgit-revision
parameters. -
Builds the container image using the
buildah
cluster task. -
Pushes the image to the OpenShift image registry by referring to the
image
parameter. -
Deploys the new image on OpenShift Container Platform by using the
apply-manifests
andupdate-deployment
tasks.
Procedure
Copy the contents of the following sample pipeline YAML file and save it:
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: - name: shared-workspace params: - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: "pipelines-1.10" - name: IMAGE type: string description: image to be built from the code tasks: - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: $(params.git-url) - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: $(params.git-revision) - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: IMAGE value: $(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: - build-image - name: update-deployment taskRef: name: update-deployment params: - name: deployment value: $(params.deployment-name) - name: IMAGE value: $(params.IMAGE) runAfter: - apply-manifests
The pipeline definition abstracts away the specifics of the Git source repository and image registries. These details are added as
params
when a pipeline is triggered and executed.Create the pipeline:
$ oc create -f <pipeline-yaml-file-name.yaml>
Alternatively, you can also execute the YAML file directly from the Git repository:
$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/04_pipeline.yaml
Use the
tkn pipeline list
command to verify that the pipeline is added to the application:$ tkn pipeline list
The output verifies that the
build-and-deploy
pipeline was created:NAME AGE LAST RUN STARTED DURATION STATUS build-and-deploy 1 minute ago --- --- --- ---
4.5.5. Mirroring images to run pipelines in a restricted environment
To run OpenShift Pipelines in a disconnected cluster or a cluster provisioned in a restricted environment, ensure that either the Samples Operator is configured for a restricted network, or a cluster administrator has created a cluster with a mirrored registry.
The following procedure uses the pipelines-tutorial
example to create a pipeline for an application in a restricted environment using a cluster with a mirrored registry. To ensure that the pipelines-tutorial
example works in a restricted environment, you must mirror the respective builder images from the mirror registry for the front-end interface, pipelines-vote-ui
; back-end interface, pipelines-vote-api
; and the cli
.
Procedure
Mirror the builder image from the mirror registry for the front-end interface,
pipelines-vote-ui
.Verify that the required image tag is not imported:
$ oc describe imagestream python -n openshift
Example output
Name: python Namespace: openshift [...] 3.8-ubi8 (latest) tagged from registry.redhat.io/ubi8/python-38:latest prefer registry pullthrough when referencing this tag Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. Tags: builder, python Supports: python:3.8, python Example Repo: https://github.com/sclorg/django-ex.git [...]
Mirror the supported image tag to the private registry:
$ oc image mirror registry.redhat.io/ubi8/python-38:latest <mirror-registry>:<port>/ubi8/python-38
Import the image:
$ oc tag <mirror-registry>:<port>/ubi8/python-38 python:latest --scheduled -n openshift
You must periodically re-import the image. The
--scheduled
flag enables automatic re-import of the image.Verify that the images with the given tag have been imported:
$ oc describe imagestream python -n openshift
Example output
Name: python Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/python-38 * <mirror-registry>:<port>/ubi8/python-38@sha256:3ee3c2e70251e75bfeac25c0c33356add9cc4abcbc9c51d858f39e4dc29c5f58 [...]
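The scheduled import runs periodically. If you do not want to wait for it, you can also trigger a one-off re-import of the tag manually; for example:
$ oc import-image python:latest -n openshift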
Mirror the builder image from the mirror registry for the back-end interface,
pipelines-vote-api
.Verify that the required image tag is not imported:
$ oc describe imagestream golang -n openshift
Example output
Name: golang Namespace: openshift [...] 1.14.7-ubi8 (latest) tagged from registry.redhat.io/ubi8/go-toolset:1.14.7 prefer registry pullthrough when referencing this tag Build and run Go applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/golang-container/blob/master/README.md. Tags: builder, golang, go Supports: golang Example Repo: https://github.com/sclorg/golang-ex.git [...]
Mirror the supported image tag to the private registry:
$ oc image mirror registry.redhat.io/ubi8/go-toolset:1.14.7 <mirror-registry>:<port>/ubi8/go-toolset
Import the image:
$ oc tag <mirror-registry>:<port>/ubi8/go-toolset golang:latest --scheduled -n openshift
You must periodically re-import the image. The
--scheduled
flag enables automatic re-import of the image.Verify that the images with the given tag have been imported:
$ oc describe imagestream golang -n openshift
Example output
Name: golang Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/go-toolset * <mirror-registry>:<port>/ubi8/go-toolset@sha256:59a74d581df3a2bd63ab55f7ac106677694bf612a1fe9e7e3e1487f55c421b37 [...]
Mirror the builder image from the mirror registry for the
cli
.Verify that the required image tag is not imported:
$ oc describe imagestream cli -n openshift
Example output
Name: cli Namespace: openshift [...] latest updates automatically from registry quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...]
Mirror the supported image tag to the private registry:
$ oc image mirror quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev:latest
Import the image:
$ oc tag <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev cli:latest --scheduled -n openshift
You must periodically re-import the image. The
--scheduled
flag enables automatic re-import of the image.Verify that the images with the given tag have been imported:
$ oc describe imagestream cli -n openshift
Example output
Name: cli Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev * <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...]
4.5.6. Running a pipeline
A PipelineRun
resource starts a pipeline and ties it to the Git and image resources that should be used for the specific invocation. It automatically creates and starts the TaskRun
resources for each task in the pipeline.
Procedure
Start the pipeline for the back-end application:
$ tkn pipeline start build-and-deploy \ -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \ -p deployment-name=pipelines-vote-api \ -p git-url=https://github.com/openshift/pipelines-vote-api.git \ -p IMAGE='image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/pipelines-vote-api' \ --use-param-defaults
The previous command uses a volume claim template, which creates a persistent volume claim for the pipeline execution.
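The volumeClaimTemplateFile referenced in the command is expected to be a regular PersistentVolumeClaim manifest, like the 03_persistent_volume_claim.yaml file in the tutorial repository. If you prefer to keep the file locally, a minimal sketch might look like the following; the claim name and the 500Mi request are placeholder values:
$ cat > persistent_volume_claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
You can then pass the local file to the pipeline with -w name=shared-workspace,volumeClaimTemplateFile=persistent_volume_claim.yaml.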
To track the progress of the pipeline run, enter the following command:
$ tkn pipelinerun logs <pipelinerun_id> -f
The <pipelinerun_id> in the above command is the ID for the
PipelineRun
that was returned in the output of the previous command.Start the pipeline for the front-end application:
$ tkn pipeline start build-and-deploy \ -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \ -p deployment-name=pipelines-vote-ui \ -p git-url=https://github.com/openshift/pipelines-vote-ui.git \ -p IMAGE='image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/pipelines-vote-ui' \ --use-param-defaults
To track the progress of the pipeline run, enter the following command:
$ tkn pipelinerun logs <pipelinerun_id> -f
The <pipelinerun_id> in the above command is the ID for the
PipelineRun
that was returned in the output of the previous command.After a few minutes, use the
tkn pipelinerun list
command to verify that the pipeline ran successfully by listing all the pipeline runs:$ tkn pipelinerun list
The output lists the pipeline runs:
NAME STARTED DURATION STATUS build-and-deploy-run-xy7rw 1 hour ago 2 minutes Succeeded build-and-deploy-run-z2rz8 1 hour ago 19 minutes Succeeded
Get the application route:
$ oc get route pipelines-vote-ui --template='http://{{.spec.host}}'
Note the output of the previous command. You can access the application using this route.
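To confirm that the application responds on that route, you can send a request to it and check for an HTTP 200 status code; for example:
$ curl -s -o /dev/null -w '%{http_code}\n' $(oc get route pipelines-vote-ui --template='http://{{.spec.host}}')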
To rerun the last pipeline run, using the pipeline resources and service account of the previous pipeline, run:
$ tkn pipeline start build-and-deploy --last
Additional resources
4.5.7. Adding triggers to a pipeline
Triggers enable pipelines to respond to external GitHub events, such as push events and pull requests. After you assemble and start a pipeline for the application, add the TriggerBinding
, TriggerTemplate
, Trigger
, and EventListener
resources to capture the GitHub events.
Procedure
Copy the content of the following sample
TriggerBinding
YAML file and save it:apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerBinding metadata: name: vote-app spec: params: - name: git-repo-url value: $(body.repository.url) - name: git-repo-name value: $(body.repository.name) - name: git-revision value: $(body.head_commit.id)
Create the
TriggerBinding
resource:$ oc create -f <triggerbinding-yaml-file-name.yaml>
Alternatively, you can create the
TriggerBinding
resource directly from thepipelines-tutorial
Git repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/01_binding.yaml
Copy the content of the following sample
TriggerTemplate
YAML file and save it:apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: vote-app spec: params: - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: build-deploy-$(tt.params.git-repo-name)- spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: $(tt.params.git-repo-name) - name: git-url value: $(tt.params.git-repo-url) - name: git-revision value: $(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/$(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi
The template specifies a volume claim template to create a persistent volume claim for defining the storage volume for the workspace. Therefore, you do not need to create a persistent volume claim to provide data storage.
Create the
TriggerTemplate
resource:$ oc create -f <triggertemplate-yaml-file-name.yaml>
Alternatively, you can create the
TriggerTemplate
resource directly from thepipelines-tutorial
Git repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/02_template.yaml
Copy the contents of the following sample
Trigger
YAML file and save it:apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: vote-trigger spec: serviceAccountName: pipeline bindings: - ref: vote-app template: ref: vote-app
Create the
Trigger
resource:$ oc create -f <trigger-yaml-file-name.yaml>
Alternatively, you can create the
Trigger
resource directly from thepipelines-tutorial
Git repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/03_trigger.yaml
Copy the contents of the following sample
EventListener
YAML file and save it:apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - triggerRef: vote-trigger
Alternatively, if you have not defined a trigger custom resource, add the binding and template spec to the
EventListener
YAML file, instead of referring to the name of the trigger:apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - bindings: - ref: vote-app template: ref: vote-app
Create the
EventListener
resource by performing the following steps:To create an
EventListener
resource using a secure HTTPS connection:Add a label to enable the secure HTTPS connection to the
EventListener
resource:$ oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled
Create the
EventListener
resource:$ oc create -f <eventlistener-yaml-file-name.yaml>
Alternatively, you can create the
EventListener
resource directly from thepipelines-tutorial
Git repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/04_event_listener.yaml
Create a route with the re-encrypt TLS termination:
$ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
Alternatively, you can create a re-encrypt TLS termination YAML file to create a secured route.
Example Re-encrypt TLS Termination YAML of the Secured Route
apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----
- 1 2
- The name of the object, which is limited to 63 characters.
- 3
- The
termination
field is set toreencrypt
. This is the only requiredtls
field. - 4
- Required for re-encryption.
destinationCACertificate
specifies a CA certificate to validate the endpoint certificate, securing the connection from the router to the destination pods. If the service is using a service signing certificate, or the administrator has specified a default CA certificate for the router and the service has a certificate signed by that CA, this field can be omitted.
See
oc create route reencrypt --help
for more options.
To create an
EventListener
resource using an insecure HTTP connection:-
Create the
EventListener
resource. Expose the
EventListener
service as an OpenShift Container Platform route to make it publicly accessible:$ oc expose svc el-vote-app
-
Create the
4.5.8. Configuring event listeners to serve multiple namespaces
You can skip this section if you want to create a basic CI/CD pipeline. However, if your deployment strategy involves multiple namespaces, you can configure event listeners to serve multiple namespaces.
To increase reusability of EventListener
objects, cluster administrators can configure and deploy them as multi-tenant event listeners that serve multiple namespaces.
Procedure
Configure cluster-wide fetch permission for the event listener.
Set a service account name to be used in the
ClusterRoleBinding
andEventListener
objects. For example,el-sa
.Example
ServiceAccount.yaml
apiVersion: v1 kind: ServiceAccount metadata: name: el-sa ---
In the
rules
section of theClusterRole.yaml
file, set appropriate permissions for every event listener deployment to function cluster-wide.Example
ClusterRole.yaml
kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: el-sel-clusterrole rules: - apiGroups: ["triggers.tekton.dev"] resources: ["eventlisteners", "clustertriggerbindings", "clusterinterceptors", "triggerbindings", "triggertemplates", "triggers"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["configmaps", "secrets"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["impersonate"] ...
Configure cluster role binding with the appropriate service account name and cluster role name.
Example
ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: el-mul-clusterrolebinding subjects: - kind: ServiceAccount name: el-sa namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: el-sel-clusterrole ...
In the
spec
parameter of the event listener, add the service account name, for exampleel-sa
. Fill thenamespaceSelector
parameter with the names of the namespaces where the event listener is intended to serve.Example
EventListener.yaml
apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: namespace-selector-listener spec: serviceAccountName: el-sa namespaceSelector: matchNames: - default - foo ...
Create a service account with the necessary permissions, for example
foo-trigger-sa
. Use it for role binding the triggers.Example
ServiceAccount.yaml
apiVersion: v1 kind: ServiceAccount metadata: name: foo-trigger-sa namespace: foo ...
Example
RoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: triggercr-rolebinding namespace: foo subjects: - kind: ServiceAccount name: foo-trigger-sa namespace: foo roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tekton-triggers-eventlistener-roles ...
Create a trigger with the appropriate trigger template, trigger binding, and service account name.
Example
Trigger.yaml
apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: trigger namespace: foo spec: serviceAccountName: foo-trigger-sa interceptors: - ref: name: "github" params: - name: "secretRef" value: secretName: github-secret secretKey: secretToken - name: "eventTypes" value: ["push"] bindings: - ref: vote-app template: ref: vote-app ...
4.5.9. Creating webhooks
Webhooks are HTTP POST messages that are received by the event listeners whenever a configured event occurs in your repository. The event payload is then mapped to trigger bindings, and processed by trigger templates. The trigger templates eventually start one or more pipeline runs, leading to the creation and deployment of Kubernetes resources.
In this section, you will configure a webhook URL on your forked Git repositories pipelines-vote-ui
and pipelines-vote-api
. This URL points to the publicly accessible EventListener
service route.
Adding webhooks requires administrative privileges to the repository. If you do not have administrative access to your repository, contact your system administrator to add webhooks.
Procedure
Get the webhook URL:
For a secure HTTPS connection:
$ echo "URL: $(oc get route el-vote-app --template='https://{{.spec.host}}')"
For an HTTP (insecure) connection:
$ echo "URL: $(oc get route el-vote-app --template='http://{{.spec.host}}')"
Note the URL obtained in the output.
Configure webhooks manually on the front-end repository:
-
Open the front-end Git repository
pipelines-vote-ui
in your browser. -
Click Settings
–> Webhooks –> Add Webhook. On the Webhooks/Add Webhook page:
- Enter the webhook URL from step 1 in the Payload URL field
- Select application/json for the Content type
- Specify the secret in the Secret field
- Ensure that Just the push event is selected
- Select Active
- Click Add Webhook
-
Open the front-end Git repository
-
Repeat step 2 for the back-end repository
pipelines-vote-api
.
4.5.10. Triggering a pipeline run
Whenever a push
event occurs in the Git repository, the configured webhook sends an event payload to the publicly exposed EventListener
service route. The EventListener
service of the application processes the payload, and passes it to the relevant TriggerBinding
and TriggerTemplate
resource pairs. The TriggerBinding
resource extracts the parameters, and the TriggerTemplate
resource uses these parameters and specifies the way the resources must be created. This may rebuild and redeploy the application.
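If you want to verify the trigger plumbing before configuring the Git provider, you can post a payload that contains the fields the TriggerBinding extracts (repository.url, repository.name, and head_commit.id) directly to the exposed route. This is only a smoke test, and it assumes the insecure HTTP route and a trigger without interceptors; the field values below are placeholders:
$ curl -X POST \
    -H 'Content-Type: application/json' \
    -d '{"repository": {"url": "https://github.com/<your GitHub ID>/pipelines-vote-ui", "name": "pipelines-vote-ui"}, "head_commit": {"id": "pipelines-1.10"}}' \
    http://$(oc get route el-vote-app --template='{{.spec.host}}')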
In this section, you push an empty commit to the front-end pipelines-vote-ui
repository, which then triggers the pipeline run.
Procedure
From the terminal, clone your forked Git repository
pipelines-vote-ui
:$ git clone git@github.com:<your GitHub ID>/pipelines-vote-ui.git -b pipelines-1.10
Push an empty commit:
$ git commit -m "empty-commit" --allow-empty && git push origin pipelines-1.10
Check if the pipeline run was triggered:
$ tkn pipelinerun list
Notice that a new pipeline run was initiated.
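To follow the logs of the run that the webhook started, you can stream the most recent pipeline run; for example:
$ tkn pipelinerun logs --last -f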
4.5.11. Enabling monitoring of event listeners for Triggers for user-defined projects
As a cluster administrator, to gather event listener metrics for the Triggers
service in a user-defined project and display them in the OpenShift Container Platform web console, you can create a service monitor for each event listener. On receiving an HTTP request, event listeners for the Triggers
service return three metrics — eventlistener_http_duration_seconds
, eventlistener_event_count
, and eventlistener_triggered_resources
.
Prerequisites
- You have logged in to the OpenShift Container Platform web console.
- You have installed the Red Hat OpenShift Pipelines Operator.
- You have enabled monitoring for user-defined projects.
Procedure
For each event listener, create a service monitor. For example, to view the metrics for the
github-listener
event listener in thetest
namespace, create the following service monitor:apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener annotations: networkoperator.openshift.io/ignore-errors: "" name: el-monitor namespace: test spec: endpoints: - interval: 10s port: http-metrics jobLabel: name namespaceSelector: matchNames: - test selector: matchLabels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener ...
Test the service monitor by sending a request to the event listener. For example, push an empty commit:
$ git commit -m "empty-commit" --allow-empty && git push origin main
-
On the OpenShift Container Platform web console, navigate to Administrator
–> Observe –> Metrics.
To view a metric, search by its name. For example, to view the details of the
eventlistener_http_resources
metric for thegithub-listener
event listener, search using theeventlistener_http_resources
keyword.
Additional resources
4.5.12. Additional resources
- To include pipelines as code along with the application source code in the same repository, see Using Pipelines as code.
- For more details on pipelines in the Developer perspective, see the working with pipelines in the web console section.
- To learn more about Security Context Constraints (SCCs), see the Managing Security Context Constraints section.
- For more examples of reusable tasks, see the OpenShift Catalog repository. Additionally, you can also see the Tekton Catalog in the Tekton project.
- To install and deploy a custom instance of Tekton Hub for reusable tasks and pipelines, see Using Tekton Hub with Red Hat OpenShift Pipelines.
- For more details on re-encrypt TLS termination, see Re-encryption Termination.
- For more details on secured routes, see the Secured routes section.
4.6. Managing non-versioned and versioned cluster tasks
As a cluster administrator, installing the Red Hat OpenShift Pipelines Operator creates variants of each default cluster task known as versioned cluster tasks (VCT) and non-versioned cluster tasks (NVCT). For example, installing the Red Hat OpenShift Pipelines Operator v1.7 creates a buildah-1-7-0
VCT and a buildah
NVCT.
Both NVCT and VCT have the same metadata, behavior, and specifications, including params
, workspaces
, and steps
. However, they behave differently when you disable them or upgrade the Operator.
4.6.1. Differences between non-versioned and versioned cluster tasks
Non-versioned and versioned cluster tasks have different naming conventions. In addition, the Red Hat OpenShift Pipelines Operator upgrades them differently.
 | Non-versioned cluster task | Versioned cluster task |
---|---|---|
Nomenclature | The NVCT only contains the name of the cluster task. For example, the name of the NVCT of Buildah installed with Operator v1.7 is buildah. | The VCT contains the name of the cluster task, followed by the version as a suffix. For example, the name of the VCT of Buildah installed with Operator v1.7 is buildah-1-7-0. |
Upgrade | When you upgrade the Operator, it updates the non-versioned cluster task with the latest changes. The name of the NVCT remains unchanged. | Upgrading the Operator installs the latest version of the VCT and retains the earlier version. The latest version of a VCT corresponds to the upgraded Operator. For example, installing Operator 1.7 installs buildah-1-7-0 and retains buildah-1-6-0. |
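After an Operator upgrade, you can see both naming schemes side by side by listing the cluster tasks and filtering for a task name. The following is a sketch; the version suffixes shown are hypothetical and depend on the Operator versions that have been installed:
$ oc get clustertask -o name | grep buildah
clustertask.tekton.dev/buildah
clustertask.tekton.dev/buildah-1-6-0
clustertask.tekton.dev/buildah-1-7-0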
4.6.2. Advantages and disadvantages of non-versioned and versioned cluster tasks
Before adopting non-versioned or versioned cluster tasks as a standard in production environments, cluster administrators might consider their advantages and disadvantages.
Cluster task | Advantages | Disadvantages |
---|---|---|
Non-versioned cluster task (NVCT) | | If you deploy pipelines that use NVCT, they might break after an Operator upgrade if the automatically upgraded cluster tasks are not backward-compatible. |
Versioned cluster task (VCT) | | |
4.6.3. Disabling non-versioned and versioned cluster tasks
As a cluster administrator, you can disable cluster tasks that the Pipelines Operator installed.
Procedure
To delete all non-versioned cluster tasks and latest versioned cluster tasks, edit the
TektonConfig
custom resource definition (CRD) and set theclusterTasks
parameter inspec.addon.params
tofalse
.Example
TektonConfig
CRapiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: params: - name: createRbacResource value: "false" profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: "false" ...
When you disable cluster tasks, the Operator removes all the non-versioned cluster tasks and only the latest version of the versioned cluster tasks from the cluster.
NoteRe-enabling cluster tasks installs the non-versioned cluster tasks.
Optional: To delete earlier versions of the versioned cluster tasks, use any one of the following methods:
To delete individual earlier versioned cluster tasks, use the
oc delete clustertask
command followed by the versioned cluster task name. For example:$ oc delete clustertask buildah-1-6-0
To delete all versioned cluster tasks created by an old version of the Operator, you can delete the corresponding installer set. For example:
$ oc delete tektoninstallerset versioned-clustertask-1-6-k98as
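The installer set name includes a generated suffix, so you might list the installer sets first to find the one that corresponds to the old version; a sketch:
$ oc get tektoninstallerset | grep versioned-clustertask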
CautionIf you delete an old versioned cluster task, you cannot restore it. You can only restore versioned and non-versioned cluster tasks that the current version of the Operator has created.
4.7. Using Tekton Hub with OpenShift Pipelines
Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev. Cluster administrators can also install and deploy a custom instance of Tekton Hub for enterprise use.
4.7.1. Installing and deploying Tekton Hub on an OpenShift Container Platform cluster
Tekton Hub is an optional component; cluster administrators cannot install it using the TektonConfig
custom resource (CR). To install and manage Tekton Hub, use the TektonHub
CR.
If you are using GitHub Enterprise or GitLab Enterprise, install and deploy Tekton Hub in the same network as the enterprise server. For example, if the enterprise server is running behind a VPN, deploy Tekton Hub on a cluster that is also behind the VPN.
Prerequisites
-
Ensure that the Red Hat OpenShift Pipelines Operator is installed in the default
openshift-pipelines
namespace on the cluster.
Procedure
- Create a fork of the Tekton Hub repository.
- Clone the forked repository.
Update the
config.yaml
file to include at least one user with the following scopes:-
A user with
agent:create
scope who can set up a cron job that refreshes the Tekton Hub database after an interval, if there are any changes in the catalog. -
A user with the
catalog:refresh
scope who can refresh the catalog and all resources in the database of the Tekton Hub. A user with the
config:refresh
scope who can get additional scopes.... scopes: - name: agent:create users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: catalog:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: config:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider> ...
The supported service providers are GitHub, GitLab, and BitBucket.
-
A user with
Create an OAuth application with your Git repository hosting provider, and note the Client ID and Client Secret.
-
For a GitHub OAuth application, set the
Homepage URL
and theAuthorization callback URL
as<auth-route>
. -
For a GitLab OAuth application, set the
REDIRECT_URI
as<auth-route>/auth/gitlab/callback
. -
For a BitBucket OAuth application, set the
Callback URL
as<auth-route>
.
-
For a GitHub OAuth application, set the
Edit the following fields in the
<tekton_hub_repository>/config/02-api/20-api-secret.yaml
file for the Tekton Hub API secret:-
GH_CLIENT_ID
: The Client ID from the OAuth application created with the Git repository hosting service provider. -
GH_CLIENT_SECRET
: The Client Secret from the OAuth application created with the Git repository hosting service provider. -
GHE_URL
: GitHub Enterprise URL, if you are authenticating using GitHub Enterprise. Do not provide the URL to the catalog as a value for this field. -
GL_CLIENT_ID
: The Client ID from the GitLab OAuth application. -
GL_CLIENT_SECRET
: The Client Secret from the GitLab OAuth application. -
GLE_URL
: GitLab Enterprise URL, if you are authenticating using GitLab Enterprise. Do not provide the URL to the catalog as a value for this field. -
BB_CLIENT_ID
: The Client ID from the BitBucket OAuth application. -
BB_CLIENT_SECRET
: The Client Secret from the BitBucket OAuth application. -
JWT_SIGNING_KEY
: A long, random string used to sign the JSON Web Token (JWT) created for users. -
ACCESS_JWT_EXPIRES_IN
: Add the time limit after which the access token expires. For example,1m
, wherem
denotes minutes. The supported units of time are seconds (s
), minutes (m
), hours (h
), days (d
), and weeks (w
). -
REFRESH_JWT_EXPIRES_IN
: Add the time limit after which the refresh token expires. For example,1m
, wherem
denotes minutes. The supported units of time are seconds (s
), minutes (m
), hours (h
), days (d
), and weeks (w
). Ensure that the expiry time set for token refresh is greater than the expiry time set for token access. AUTH_BASE_URL
: Route URL for the OAuth application.Note- Use the fields related to Client ID and Client Secret for any one of the supported Git repository hosting service providers.
-
The account credentials registered with the Git repository hosting service provider enable users with the
catalog:refresh
scope to authenticate and load all catalog resources to the database.
-
- Commit and push the changes to your forked repository.
Ensure that the
TektonHub
CR is similar to the following example:apiVersion: operator.tekton.dev/v1alpha1 kind: TektonHub metadata: name: hub spec: targetNamespace: openshift-pipelines 1 api: hubConfigUrl: https://raw.githubusercontent.com/tektoncd/hub/main/config.yaml 2
Install the Tekton Hub.
$ oc apply -f TektonHub.yaml 1
- 1
- The file name or path of the
TektonHub
CR.
Check the status of the installation.
$ oc get tektonhub.operator.tekton.dev NAME VERSION READY REASON APIURL UIURL hub v1.7.2 True https://api.route.url/ https://ui.route.url/
4.7.1.1. Manually refreshing the catalog in Tekton Hub
When you install and deploy Tekton Hub on an OpenShift Container Platform cluster, a Postgres database is also installed. Initially, the database is empty. To add the tasks and pipelines available in the catalog to the database, cluster administrators must refresh the catalog.
Prerequisites
-
Ensure that you are in the
<tekton_hub_repository>/config/
directory.
Procedure
In the Tekton Hub UI, click Login –> Sign In With GitHub.NoteGitHub is used as an example from the publicly available Tekton Hub UI. For custom installation on your cluster, all Git repository hosting service providers for which you have provided Client ID and Client Secret are listed.
NoteGitHub is used as an example from the publicly available Tekton Hub UI. For custom installation on your cluster, all Git repository hosting service providers for which you have provided Client ID and Client Secret are listed.
- On the home page, click the user profile and copy the token.
Call the Catalog Refresh API.
To refresh a catalog with a specific name, run the following command:
$ curl -X POST -H "Authorization: <jwt-token>" \ 1 <api-url>/catalog/<catalog_name>/refresh 2
Sample output:
[{"id":1,"catalogName":"tekton","status":"queued"}]
To refresh all catalogs, run the following command:
$ curl -X POST -H "Authorization: <jwt-token>" \ 1 <api-url>/catalog/refresh 2
- Refresh the page in the browser.
4.7.1.2. Optional: Setting a cron job for refreshing catalog in Tekton Hub
Cluster administrators can optionally set up a cron job to refresh the database after a fixed interval, so that changes in the catalog appear in the Tekton Hub web console.
If resources are added to the catalog or updated, refreshing the catalog displays these changes in the Tekton Hub UI. However, if a resource is deleted from the catalog, refreshing the catalog does not remove the resource from the database. The Tekton Hub UI continues displaying the deleted resource.
Prerequisites
-
Ensure that you are in the
<project_root>/config/
directory, where<project_root>
is the top level directory of the cloned Tekton Hub repository. - Ensure that you have a JSON web token (JWT) with a scope that allows refreshing the catalog.
Procedure
Create an agent-based JWT token for longer use.
$ curl -X PUT --header "Content-Type: application/json" \ -H "Authorization: <access-token>" \ 1 --data '{"name":"catalog-refresh-agent","scopes": ["catalog:refresh"]}' \ <api-route>/system/user/agent
- 1
- The JWT token.
The agent token with the necessary scopes is returned in the
{"token":"<agent_jwt_token>"}
format. Note the returned token and preserve it for the catalog refresh cron job.Edit the
05-catalog-refresh-cj/50-catalog-refresh-secret.yaml
file to set theHUB_TOKEN
parameter to the<agent_jwt_token>
returned in the previous step.apiVersion: v1 kind: Secret metadata: name: catalog-refresh type: Opaque stringData: HUB_TOKEN: <hub_token> 1
- 1
- The
<agent_jwt_token>
returned in the previous step.
Apply the modified YAML files.
$ oc apply -f 05-catalog-refresh-cj/ -n openshift-pipelines
Optional: By default, the cron job is configured to run every 30 minutes. To change the interval, modify the value of the
schedule
parameter in the05-catalog-refresh-cj/51-catalog-refresh-cronjob.yaml
file.apiVersion: batch/v1 kind: CronJob metadata: name: catalog-refresh labels: app: tekton-hub-api spec: schedule: "*/30 * * * *" ...
4.7.1.3. Optional: Adding new users in Tekton Hub configuration
Procedure
Depending on the intended scope, cluster administrators can add new users in the
config.yaml
file.... scopes: - name: agent:create users: [<username_1>, <username_2>] 1 - name: catalog:refresh users: [<username_3>, <username_4>] - name: config:refresh users: [<username_5>, <username_6>] default: scopes: - rating:read - rating:write ...
- 1
- The usernames registered with the Git repository hosting service provider.
NoteWhen any user logs in for the first time, they will have only the default scope even if they are added in the
config.yaml
. To activate additional scopes, ensure the user has logged in at least once.-
Ensure that in the
config.yaml
file, you have the config:refresh
scope. Refresh the configuration.
$ curl -X POST -H "Authorization: <access-token>" \ 1 --header "Content-Type: application/json" \ --data '{"force": true}' \ <api-route>/system/config/refresh
- 1
- The JWT token.
4.7.2. Opting out of Tekton Hub in the Developer perspective
Cluster administrators can opt out of displaying Tekton Hub resources, such as tasks and pipelines, in the Pipeline builder page of the Developer perspective of an OpenShift Container Platform cluster.
Prerequisite
-
Ensure that the Red Hat OpenShift Pipelines Operator is installed on the cluster, and the
oc
command line tool is available.
Procedure
To opt out of displaying Tekton Hub resources in the Developer perspective, set the value of the
enable-devconsole-integration
field in theTektonConfig
custom resource (CR) tofalse
.apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: targetNamespace: openshift-pipelines ... hub: params: - name: enable-devconsole-integration value: "false" ...
By default, the
TektonConfig
CR does not include theenable-devconsole-integration
field, and the Red Hat OpenShift Pipelines Operator assumes that the value istrue
.
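Instead of editing the CR in an editor, you can apply the same change with a patch command; for example, the following sets the field shown in the previous example:
$ oc patch tektonconfig config --type="merge" -p '{"spec": {"hub": {"params": [{"name": "enable-devconsole-integration", "value": "false"}]}}}'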
4.7.3. Additional resources
- GitHub repository of Tekton Hub.
- Installing OpenShift Pipelines
- Red Hat OpenShift Pipelines release notes
4.8. Using Pipelines as Code
With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports the status.
4.8.1. Key features
Pipelines as Code supports the following features:
- Pull request status and control on the platform hosting the Git repository.
- GitHub Checks API to set the status of a pipeline run, including rechecks.
- GitHub pull request and commit events.
-
Pull request actions in comments, such as
/retest
. - Git events filtering and a separate pipeline for each event.
- Automatic task resolution in Pipelines, including local tasks, Tekton Hub, and remote URLs.
- Retrieval of configurations using GitHub blobs and objects API.
-
Access Control List (ACL) over a GitHub organization, or using a Prow style
OWNER
file. -
The
tkn pac
CLI plugin for managing bootstrapping and Pipelines as Code repositories. - Support for GitHub App, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud.
4.8.2. Installing Pipelines as Code on an OpenShift Container Platform
Pipelines as Code is installed by default when you install the Red Hat OpenShift Pipelines Operator. If you are using Pipelines 1.7 or later versions, skip the procedure for manual installation of Pipelines as Code.
To disable the default installation of Pipelines as Code with the Operator, set the value of the enable
parameter to false
in the TektonConfig
custom resource.
... spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" ...
Optionally, you can run the following command:
$ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}'
To enable the default installation of Pipelines as Code with the Red Hat OpenShift Pipelines Operator, set the value of the enable
parameter to true
in the TektonConfig
custom resource:
... spec: platforms: openshift: pipelinesAsCode: enable: true ...
Optionally, you can run the following command:
$ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": true}}}}}'
4.8.3. Installing Pipelines as Code CLI
Cluster administrators can use the tkn pac
and opc
CLI tools on local machines or as containers for testing. The tkn pac
and opc
CLI tools are installed automatically when you install the tkn
CLI for Red Hat OpenShift Pipelines.
You can install the tkn pac
and opc
version 1.9.1
binaries for the supported platforms:
- Linux (x86_64, amd64)
- Linux on IBM Z and LinuxONE (s390x)
- Linux on IBM Power Systems (ppc64le)
- Mac
- Note
The binaries are compatible with
tkn
version0.23.1
.
4.8.4. Using Pipelines as Code with a Git repository hosting service provider
After installing Pipelines as Code, cluster administrators can configure a Git repository hosting service provider. Currently, the following services are supported:
- GitHub App
- GitHub Webhook
- GitLab
- Bitbucket Server
- Bitbucket Cloud
GitHub App is the recommended service for use with Pipelines as Code.
4.8.5. Using Pipelines as Code with a GitHub App
GitHub Apps act as a point of integration with Red Hat OpenShift Pipelines and bring the advantage of Git-based workflows to OpenShift Pipelines. Cluster administrators can configure a single GitHub App for all cluster users. For GitHub Apps to work with Pipelines as Code, ensure that the webhook of the GitHub App points to the Pipelines as Code event listener route (or ingress endpoint) that listens for GitHub events.
4.8.5.1. Configuring a GitHub App
Cluster administrators can create a GitHub App by running the following command:
$ tkn pac bootstrap github-app
If the tkn pac
CLI plugin is not installed, you can create the GitHub App manually.
Procedure
To create and configure a GitHub App manually for Pipelines as Code, perform the following steps:
- Sign in to your GitHub account.
-
Go to Settings
–> Developer settings –> GitHub Apps, and click New GitHub App.
-
GitHub Application Name:
OpenShift Pipelines
- Homepage URL: OpenShift Console URL
-
Webhook URL: The Pipelines as Code route or ingress URL. You can find it by running the command
echo https://$(oc get route -n openshift-pipelines pipelines-as-code-controller -o jsonpath='{.spec.host}')
. -
Webhook secret: An arbitrary secret. You can generate a secret by executing the command
openssl rand -hex 20
.
-
GitHub Application Name:
Select the following Repository permissions:
-
Checks:
Read & Write
-
Contents:
Read & Write
-
Issues:
Read & Write
-
Metadata:
Read-only
-
Pull request:
Read & Write
-
Checks:
Select the following Organization permissions:
-
Members:
Readonly
-
Plan:
Readonly
-
Members:
Select the following User permissions:
- Commit comment
- Issue comment
- Pull request
- Pull request review
- Pull request review comment
- Push
- Click Create GitHub App.
- On the Details page of the newly created GitHub App, note the App ID displayed at the top.
- In the Private keys section, click Generate Private key to automatically generate and download a private key for the GitHub app. Securely store the private key for future reference and usage.
4.8.5.2. Configuring Pipelines as Code to access a GitHub App
To configure Pipelines as Code to access the newly created GitHub App, execute the following command:
$ oc -n openshift-pipelines create secret generic pipelines-as-code-secret \ --from-literal github-private-key="$(cat <PATH_PRIVATE_KEY>)" \ 1 --from-literal github-application-id="<APP_ID>" \ 2 --from-literal webhook.secret="<WEBHOOK_SECRET>" 3
Pipelines as Code works automatically with GitHub Enterprise by detecting the header set from GitHub Enterprise and using it for the GitHub Enterprise API authorization URL.
4.8.5.3. Creating a GitHub App in administrator perspective
As a cluster administrator, you can configure your GitHub App with the OpenShift Container Platform cluster to use Pipelines as Code. This configuration allows you to execute a set of tasks required for build deployment.
Prerequisites
You have installed the Red Hat OpenShift Pipelines pipelines-1.10
Operator from OperatorHub.
Procedure
- In the administrator perspective, navigate to Pipelines using the navigation pane.
- Click Setup GitHub App on the Pipelines page.
-
Enter your GitHub App name. For example,
pipelines-ci-clustername-testui
. - Click Setup.
- Enter your Git password when prompted in the browser.
-
Click Create GitHub App for <username>, where
<username>
is your GitHub user name.
Verification
After successful creation of the GitHub App, the OpenShift Container Platform web console opens and displays the details about the application.
The details of the GitHub App are saved as a secret in the openshift-pipelines
namespace.
To view details such as name, link, and secret associated with the GitHub applications, navigate to Pipelines and click View GitHub App.
4.8.6. Using Pipelines as Code with GitHub Webhook
Use Pipelines as Code with GitHub Webhook on your repository if you cannot create a GitHub App. However, using Pipelines as Code with GitHub Webhook does not give you access to the GitHub Check Runs API. The status of the tasks is added as comments on the pull request and is unavailable under the Checks tab.
Pipelines as Code with GitHub Webhook does not support GitOps comments such as /retest
and /ok-to-test
. To restart the continuous integration (CI), create a new commit to the repository. For example, to create a new commit without any changes, you can use the following command:
$ git commit --amend -a --no-edit && git push --force-with-lease <origin> <branchname>
Prerequisites
- Ensure that Pipelines as Code is installed on the cluster.
For authorization, create a personal access token on GitHub.
To generate a secure and fine-grained token, restrict its scope to a specific repository and grant the following permissions:
Table 4.7. Permissions for fine-grained tokens
Name | Access
---|---
Administration | Read-only
Metadata | Read-only
Content | Read-only
Commit statuses | Read and Write
Pull request | Read and Write
Webhooks | Read and Write
To use classic tokens, set the scope as
public_repo
for public repositories andrepo
for private repositories. In addition, provide a short token expiration period and note the token in an alternate location.NoteIf you want to configure the webhook using the
tkn pac
CLI, add theadmin:repo_hook
scope.
Procedure
Configure the webhook and create a
Repository
custom resource (CR).To configure a webhook and create a
Repository
CR automatically using thetkn pac
CLI tool, use the following command:$ tkn pac create repo
Sample interactive output
? Enter the Git repository url (default: https://github.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes ✓ Repository owner-repo has been created in repo-pipelines namespace ✓ Setting up GitHub Webhook for Repository https://github.com/owner/repo 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: sJNwdmTifHTs): sJNwdmTifHTs ℹ ️You now need to create a GitHub personal access token, please checkout the docs at https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token for the required scopes ? Please enter the GitHub access token: **************************************** ✓ Webhook has been created on repository owner/repo 🔑 Webhook Secret owner-repo has been created in the repo-pipelines namespace. 🔑 Repository CR owner-repo has been updated with webhook secret in the repo-pipelines namespace ℹ Directory .tekton has been created. ✓ We have detected your repository using the programming language Go. ✓ A basic template has been created in /home/Go/src/github.com/owner/repo/.tekton/pipelinerun.yaml, feel free to customize it.
To configure a webhook and create a
Repository
CR manually, perform the following steps:On your OpenShift cluster, extract the public URL of the Pipelines as Code controller.
$ echo https://$(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')
On your GitHub repository or organization, perform the following steps:
- Go to Settings –> Webhooks and click Add webhook.
- Set the Payload URL to the Pipelines as Code controller public URL.
- Select the content type as application/json.
Add a webhook secret and note it in an alternate location. With
openssl
installed on your local machine, generate a random secret.$ openssl rand -hex 20
- Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
- Click Add webhook.
On your OpenShift cluster, create a
Secret
object with the personal access token and webhook secret.$ oc -n target-namespace create secret generic github-webhook-config \ --from-literal provider.token="<GITHUB_PERSONAL_ACCESS_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>"
Create a
Repository
CR.Example:
Repository
CRapiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://github.com/owner/repo" git_provider: secret: name: "github-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "github-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret
NotePipelines as Code assumes that the OpenShift
Secret
object and theRepository
CR are in the same namespace.
Optional: For an existing
Repository
CR, add multiple GitHub Webhook secrets or provide a substitute for a deleted secret.Add a webhook using the
tkn pac
CLI tool.Example: Additional webhook using the
tkn pac
CLI$ tkn pac webhook add -n repo-pipelines
Sample interactive output
✓ Setting up GitHub Webhook for Repository https://github.com/owner/repo 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH ✓ Webhook has been created on repository owner/repo 🔑 Secret owner-repo has been updated with webhook secret in the repo-pipelines namespace.
-
Update the
webhook.secret
key in the existing OpenShiftSecret
object.
Optional: For an existing
Repository
CR, update the personal access token.Update the personal access token using the
tkn pac
CLI tool.Example: Updating personal access token using the
tkn pac
CLI$ tkn pac webhook update-token -n repo-pipelines
Sample interactive output
? Please enter your personal access token: **************************************** 🔑 Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.
Alternatively, update the personal access token by modifying the
Repository
CR.Find the name of the secret in the
Repository
CR.... spec: git_provider: secret: name: "github-webhook-config" ...
Use the
oc patch
command to update the values of the$NEW_TOKEN
in the$target_namespace
namespace.$ oc -n $target_namespace patch secret github-webhook-config -p "{\"data\": {\"provider.token\": \"$(echo -n $NEW_TOKEN|base64 -w0)\"}}"
4.8.7. Using Pipelines as Code with GitLab
If your organization or project uses GitLab as the preferred platform, you can use Pipelines as Code for your repository with a webhook on GitLab.
Prerequisites
- Ensure that Pipelines as Code is installed on the cluster.
For authorization, generate a personal access token as the manager of the project or organization on GitLab.
Note-
If you want to configure the webhook using the
tkn pac
CLI, add theadmin:repo_hook
scope to the token. - Using a token scoped for a specific project cannot provide API access to a merge request (MR) sent from a forked repository. In such cases, Pipelines as Code displays the result of a pipeline as a comment on the MR.
-
If you want to configure the webhook using the
Procedure
Configure the webhook and create a
Repository
custom resource (CR).To configure a webhook and create a
Repository
CR automatically using thetkn pac
CLI tool, use the following command:$ tkn pac create repo
Sample interactive output
? Enter the Git repository url (default: https://gitlab.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes ✓ Repository repositories-project has been created in repo-pipelines namespace ✓ Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo ? Please enter the project ID for the repository you want to be configured, project ID refers to an unique ID (e.g. 34405323) shown at the top of your GitLab project : 17103 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: lFjHIEcaGFlF): lFjHIEcaGFlF ℹ ️You now need to create a GitLab personal access token with `api` scope ℹ ️Go to this URL to generate one https://gitlab.com/-/profile/personal_access_tokens, see https://is.gd/rOEo9B for documentation ? Please enter the GitLab access token: ************************** ? Please enter your GitLab API URL:: https://gitlab.com ✓ Webhook has been created on your repository 🔑 Webhook Secret repositories-project has been created in the repo-pipelines namespace. 🔑 Repository CR repositories-project has been updated with webhook secret in the repo-pipelines namespace ℹ Directory .tekton has been created. ✓ A basic template has been created in /home/Go/src/gitlab.com/repositories/project/.tekton/pipelinerun.yaml, feel free to customize it.
To configure a webhook and create a
Repository
CR manually, perform the following steps:On your OpenShift cluster, extract the public URL of the Pipelines as Code controller.
$ echo https://$(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')
On your GitLab project, perform the following steps:
- Use the left sidebar to go to Settings –> Webhooks.
- Set the URL to the Pipelines as Code controller public URL.
Add a webhook secret and note it in an alternate location. With
openssl
installed on your local machine, generate a random secret.$ openssl rand -hex 20
- Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
- Click Save changes.
On your OpenShift cluster, create a
Secret
object with the personal access token and webhook secret.$ oc -n target-namespace create secret generic gitlab-webhook-config \ --from-literal provider.token="<GITLAB_PERSONAL_ACCESS_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>"
Create a
Repository
CR.Example:
Repository
CRapiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://gitlab.com/owner/repo" 1 git_provider: secret: name: "gitlab-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "gitlab-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret
- 1
- Currently, Pipelines as Code does not automatically detect private instances of GitLab. In such cases, specify the API URL under the git_provider.url spec. In general, you can use the git_provider.url spec to manually override the API URL.
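For example, for a hypothetical self-hosted GitLab instance at gitlab.example.com, you could set the override on an existing Repository CR; the namespace and CR name below are placeholders:
$ oc -n target-namespace patch repositories.pipelinesascode.tekton.dev my-repo \
    --type=merge -p '{"spec": {"git_provider": {"url": "https://gitlab.example.com"}}}'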
Note-
Pipelines as Code assumes that the OpenShift
Secret
object and theRepository
CR are in the same namespace.
Optional: For an existing
Repository
CR, add multiple GitLab Webhook secrets or provide a substitute for a deleted secret.Add a webhook using the
tkn pac
CLI tool.Example: Adding additional webhook using the
tkn pac
CLI$ tkn pac webhook add -n repo-pipelines
Sample interactive output
✓ Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH ✓ Webhook has been created on repository owner/repo 🔑 Secret owner-repo has been updated with webhook secret in the repo-pipelines namespace.
-
Update the
webhook.secret
key in the existing OpenShiftSecret
object.
Optional: For an existing
Repository
CR, update the personal access token.Update the personal access token using the
tkn pac
CLI tool.Example: Updating personal access token using the
tkn pac
CLI$ tkn pac webhook update-token -n repo-pipelines
Sample interactive output
? Please enter your personal access token: **************************************** 🔑 Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.
Alternatively, update the personal access token by modifying the
Repository
CR.Find the name of the secret in the
Repository
CR.... spec: git_provider: secret: name: "gitlab-webhook-config" ...
Use the
oc patch
command to update the values of the$NEW_TOKEN
in the$target_namespace
namespace.$ oc -n $target_namespace patch secret gitlab-webhook-config -p "{\"data\": {\"provider.token\": \"$(echo -n $NEW_TOKEN|base64 -w0)\"}}"
Additional resources
4.8.8. Using Pipelines as Code with Bitbucket Cloud
If your organization or project uses Bitbucket Cloud as the preferred platform, you can use Pipelines as Code for your repository with a webhook on Bitbucket Cloud.
Prerequisites
- Ensure that Pipelines as Code is installed on the cluster.
Create an app password on Bitbucket Cloud.
Check the following boxes to add appropriate permissions to the token:
-
Account:
Email
,Read
-
Workspace membership:
Read
,Write
-
Projects:
Read
,Write
-
Issues:
Read
,Write
Pull requests:
Read
,Write
Note-
If you want to configure the webhook using the
tkn pac
CLI, add theWebhooks
:Read
andWrite
permission to the token. - Once generated, save a copy of the password or token in an alternate location.
-
If you want to configure the webhook using the
-
Account:
Procedure
Configure the webhook and create a
Repository
CR.To configure a webhook and create a
Repository
CR automatically using thetkn pac
CLI tool, use the following command:$ tkn pac create repo
Sample interactive output
? Enter the Git repository url (default: https://bitbucket.org/workspace/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes ✓ Repository workspace-repo has been created in repo-pipelines namespace ✓ Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> ℹ ️You now need to create a Bitbucket Cloud app password, please checkout the docs at https://is.gd/fqMHiJ for the required permissions ? Please enter the Bitbucket Cloud app password: ************************************ 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ✓ Webhook has been created on repository workspace/repo 🔑 Webhook Secret workspace-repo has been created in the repo-pipelines namespace. 🔑 Repository CR workspace-repo has been updated with webhook secret in the repo-pipelines namespace ℹ Directory .tekton has been created. ✓ A basic template has been created in /home/Go/src/bitbucket/repo/.tekton/pipelinerun.yaml, feel free to customize it.
To configure a webhook and create a
Repository
CR manually, perform the following steps:On your OpenShift cluster, extract the public URL of the Pipelines as Code controller.
$ echo https://$(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')
On Bitbucket Cloud, perform the following steps:
- Use the left navigation pane of your Bitbucket Cloud repository to go to Repository settings → Webhooks and click Add webhook.
- Set a Title. For example, "Pipelines as Code".
- Set the URL to the Pipelines as Code controller public URL.
- Select these events: Repository: Push, Pull Request: Created, Pull Request: Updated, and Pull Request: Comment created.
- Click Save.
On your OpenShift cluster, create a
Secret
object with the app password in the target namespace.$ oc -n target-namespace create secret generic bitbucket-cloud-token \ --from-literal provider.token="<BITBUCKET_APP_PASSWORD>"
Create a
Repository
CR.Example:
Repository
CRapiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://bitbucket.org/workspace/repo" branch: "main" git_provider: user: "<BITBUCKET_USERNAME>" 1 secret: name: "bitbucket-cloud-token" 2 key: "provider.token" # Set this if you have a different key in your secret
Note-
The
tkn pac create
andtkn pac bootstrap
commands are not supported on Bitbucket Cloud. Bitbucket Cloud does not support webhook secrets. To secure the payload and prevent hijacking of the CI, Pipelines as Code fetches the list of Bitbucket Cloud IP addresses and ensures that webhook requests come only from those IP addresses.
-
To disable the default behavior, set the
bitbucket-cloud-check-source-ip key
tofalse
in the Pipelines as Code config map for thepipelines-as-code
namespace. -
To allow additional safe IP addresses or networks, add them as comma-separated values to the
bitbucket-cloud-additional-source-ip
key in the Pipelines as Code config map for thepipelines-as-code
namespace.
-
To disable the default behavior, set the
Optional: For an existing
Repository
CR, add multiple Bitbucket Cloud Webhook secrets or provide a substitute for a deleted secret.Add a webhook using the
tkn pac
CLI tool.Example: Adding additional webhook using the
tkn pac
CLI$ tkn pac webhook add -n repo-pipelines
Sample interactive output
✓ Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> 👀 I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ✓ Webhook has been created on repository workspace/repo 🔑 Secret workspace-repo has been updated with webhook secret in the repo-pipelines namespace.
NoteUse the
[-n <namespace>]
option with thetkn pac webhook add
command only when theRepository
CR exists in a namespace other than the default namespace.-
Update the
webhook.secret
key in the existing OpenShiftSecret
object.
Optional: For an existing
Repository
CR, update the personal access token.Update the personal access token using the
tkn pac
CLI tool.Example: Updating personal access token using the
tkn pac
CLI$ tkn pac webhook update-token -n repo-pipelines
Sample interactive output
? Please enter your personal access token: **************************************** 🔑 Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.
NoteUse the
[-n <namespace>]
option with thetkn pac webhook update-token
command only when theRepository
CR exists in a namespace other than the default namespace.Alternatively, update the personal access token by modifying the
Repository
CR.Find the name of the secret in the
Repository
CR.... spec: git_provider: user: "<BITBUCKET_USERNAME>" secret: name: "bitbucket-cloud-token" key: "provider.token" ...
Use the
oc patch
command to update the value of the $NEW_TOKEN
in the $target_namespace
namespace.$ oc -n $target_namespace patch secret bitbucket-cloud-token -p "{\"data\": {\"provider.token\": \"$(echo -n $NEW_TOKEN|base64 -w0)\"}}"
Additional resources
4.8.9. Using Pipelines as Code with Bitbucket Server
If your organization or project uses Bitbucket Server as the preferred platform, you can use Pipelines as Code for your repository with a webhook on Bitbucket Server.
Prerequisites
- Ensure that Pipelines as Code is installed on the cluster.
Generate a personal access token as the manager of the project on Bitbucket Server, and save a copy of it in an alternate location.
Note-
The token must have the
PROJECT_ADMIN
andREPOSITORY_ADMIN
permissions. - The token must have access to forked repositories in pull requests.
-
The token must have the
Procedure
On your OpenShift cluster, extract the public URL of the Pipelines as Code controller.
$ echo https://$(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')
On Bitbucket Server, perform the following steps:
- Use the left navigation pane of your Bitbucket Data Center repository to go to Repository settings → Webhooks and click Add webhook.
- Set a Title. For example, "Pipelines as Code".
- Set the URL to the Pipelines as Code controller public URL.
Add a webhook secret and save a copy of it in an alternate location. If you have
openssl
installed on your local machine, generate a random secret using the following command:$ openssl rand -hex 20
Select the following events:
- Repository: Push
- Repository: Modified
- Pull Request: Opened
- Pull Request: Source branch updated
- Pull Request: Comment added
- Click Save.
On your OpenShift cluster, create a
Secret
object with the personal access token and webhook secret in the target namespace.$ oc -n target-namespace create secret generic bitbucket-server-webhook-config \ --from-literal provider.token="<PERSONAL_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>"
Create a
Repository
CR.Example:
Repository
CR--- apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://bitbucket.com/workspace/repo" git_provider: url: "https://bitbucket.server.api.url/rest" 1 user: "<BITBUCKET_USERNAME>" 2 secret: 3 name: "bitbucket-server-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "bitbucket-server-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret
- 1
- Ensure that you have the right Bitbucket Server API URL without the
/api/v1.0
suffix. Usually, the default install has a/rest
suffix. - 2
- You can only reference a user by the
ACCOUNT_ID
in an owner file. - 3
- Pipelines as Code assumes that the secret referred to in the
git_provider.secret
spec and the Repository
CR are in the same namespace.
NoteThe
tkn pac create
andtkn pac bootstrap
commands are not supported on Bitbucket Server.
Additional resources
4.8.10. Interfacing Pipelines as Code with custom certificates
To configure Pipelines as Code with a Git repository that is accessible with a privately signed or custom certificate, you can expose the certificate to Pipelines as Code.
Procedure
-
If you have installed Pipelines as Code using the Red Hat OpenShift Pipelines Operator, you can add your custom certificate to the cluster using the
Proxy
object. The Operator exposes the certificate in all Red Hat OpenShift Pipelines components and workloads, including Pipelines as Code.
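For example, a minimal sketch of this approach from the CLI; the config map name custom-ca and the certificate file path are assumptions for illustration. The config map must use the ca-bundle.crt key, and the cluster-wide Proxy object is named cluster.
$ oc create configmap custom-ca \
    --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
    -n openshift-config
$ oc patch proxy/cluster \
    --type=merge \
    --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'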
Additional resources
4.8.11. Using the Repository
CRD with Pipelines as Code
The Repository
custom resource (CR) has the following primary functions:
- Inform Pipelines as Code about processing an event from a URL.
- Inform Pipelines as Code about the namespace for the pipeline runs.
- Reference an API secret, username, or an API URL necessary for Git provider platforms when using webhook methods.
- Provide the last pipeline run status for a repository.
You can use the tkn pac
CLI or other alternative methods to create a Repository
CR inside the target namespace. For example:
cat <<EOF|kubectl create -n my-pipeline-ci -f- 1
apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
name: project-repository
spec:
url: "https://github.com/<repository>/<project>"
EOF
- 1
my-pipeline-ci
is the target namespace.
Whenever there is an event coming from the URL such as https://github.com/<repository>/<project>
, Pipelines as Code matches it and starts checking out the content of the <repository>/<project>
repository for a pipeline run that matches the content in the .tekton/
directory.
-
You must create the
Repository
CRD in the same namespace where pipelines associated with the source code repository will be executed; it cannot target a different namespace. -
If multiple
Repository
CRDs match the same event, Pipelines as Code will process only the oldest one. If you need to match a specific namespace, add thepipelinesascode.tekton.dev/target-namespace: "<mynamespace>"
annotation. Such explicit targeting prevents a malicious actor from executing a pipeline run in a namespace to which they do not have access.
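For example, a minimal sketch of a pipeline run in the .tekton/ directory that explicitly targets the my-pipeline-ci namespace; the pipeline run name is hypothetical and the spec is elided:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-pipelinerun
  annotations:
    pipelinesascode.tekton.dev/target-namespace: "my-pipeline-ci"
spec:
...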
4.8.11.1. Setting concurrency limits in the Repository
CRD
You can use the concurrency_limit
spec in the Repository
CRD to define the maximum number of pipeline runs running simultaneously for a repository.
... spec: concurrency_limit: <number> ...
If multiple pipeline runs match an event, they start in alphabetical order.
For example, if you have three pipeline runs in the .tekton
directory and you create a pull request with a concurrency_limit
of 1
in the repository configuration, then all the pipeline runs are executed in alphabetical order. At any given time, only one pipeline run is in the running state while the rest are queued.
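For example, a sketch of a Repository CR, reusing the my-repo example from earlier sections, that allows only one pipeline run at a time:
apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
  name: my-repo
  namespace: target-namespace
spec:
  url: "https://github.com/owner/repo"
  concurrency_limit: 1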
4.8.12. Using Pipelines as Code resolver
The Pipelines as Code resolver ensures that a running pipeline run does not conflict with others.
To split your pipeline and pipeline run, store the files in the .tekton/
directory or its subdirectories.
If Pipelines as Code observes a pipeline run with a reference to a task or a pipeline in any YAML file located in the .tekton/
directory, Pipelines as Code automatically resolves the referenced task to provide a single pipeline run with an embedded spec in a PipelineRun
object.
If Pipelines as Code cannot resolve the referenced tasks in the Pipeline
or PipelineSpec
definition, the run fails before applying any changes to the cluster. You can see the issue on your Git provider platform and inside the events of the target namespace where the Repository
CR is located.
The resolver skips resolving if it observes the following types of tasks:
- A reference to a cluster task.
- A task or pipeline bundle.
-
A custom task with an API version that does not have a
tekton.dev/
prefix.
The resolver uses such tasks literally, without any transformation.
To test your pipeline run locally before sending it in a pull request, use the tkn pac resolve
command.
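For example, a sketch of testing a pipeline run definition locally; the file name is hypothetical, and the parameter flags are optional overrides whose exact names can vary between tkn pac versions:
$ tkn pac resolve -f .tekton/pull-request.yaml \
    -p revision=main -p repo_url=https://github.com/owner/repo \
    | oc create -f -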
You can also reference remote pipelines and tasks.
4.8.12.1. Using remote task annotations with Pipelines as Code
Pipelines as Code supports fetching remote tasks or pipelines by using annotations in a pipeline run. If you reference a remote task in a pipeline run, or a pipeline in a PipelineRun
or a PipelineSpec
object, the Pipelines as Code resolver automatically includes it. If there is any error while fetching the remote tasks or parsing them, Pipelines as Code stops processing the tasks.
To include remote tasks, refer to the following examples of annotation:
Reference remote tasks in Tekton Hub
Reference a single remote task in Tekton Hub.
... pipelinesascode.tekton.dev/task: "git-clone" 1 ...
- 1
- Pipelines as Code includes the latest version of the task from the Tekton Hub.
Reference multiple remote tasks from Tekton Hub
... pipelinesascode.tekton.dev/task: "[git-clone, golang-test, tkn]" ...
Reference multiple remote tasks from Tekton Hub using the
-<NUMBER>
suffix.... pipelinesascode.tekton.dev/task: "git-clone" pipelinesascode.tekton.dev/task-1: "golang-test" pipelinesascode.tekton.dev/task-2: "tkn" 1 ...
- 1
- By default, Pipelines as Code interprets the string as the latest task to fetch from Tekton Hub.
Reference a specific version of a remote task from Tekton Hub.
... pipelinesascode.tekton.dev/task: "[git-clone:0.1]" 1 ...
- 1
- Refers to the
0.1
version of thegit-clone
remote task from Tekton Hub.
Remote tasks using URLs
...
pipelinesascode.tekton.dev/task: "<https://remote.url/task.yaml>" 1
...
- 1
- The public URL to the remote task.Note
If you use GitHub and the remote task URL uses the same host as the
Repository
CRD, Pipelines as Code uses the GitHub token and fetches the URL using the GitHub API.For example, if you have a repository URL similar to
https://github.com/<organization>/<repository>
and the remote HTTP URL references a GitHub blob similar tohttps://github.com/<organization>/<repository>/blob/<mainbranch>/<path>/<file>
, Pipelines as Code fetches the task definition files from that private repository with the GitHub App token.When you work on a public GitHub repository, Pipelines as Code acts similarly for a GitHub raw URL such as
https://raw.githubusercontent.com/<organization>/<repository>/<mainbranch>/<path>/<file>
.- GitHub App tokens are scoped to the owner or organization where the repository is located. When you use the GitHub webhook method, you can fetch any private or public repository on any organization where the personal token is allowed.
Reference a task from a YAML file inside your repository
...
pipelinesascode.tekton.dev/task: "<share/tasks/git-clone.yaml>" 1
...
- 1
- Relative path to the local file containing the task definition.
4.8.12.2. Using remote pipeline annotations with Pipelines as Code
You can share a pipeline definition across multiple repositories by using the remote pipeline annotation.
...
pipelinesascode.tekton.dev/pipeline: "<https://git.provider/raw/pipeline.yaml>" 1
...
- 1
- URL to the remote pipeline definition. You can also provide locations for files inside the same repository.
You can reference only one pipeline definition using the annotation.
4.8.13. Creating a pipeline run using Pipelines as Code
To run pipelines using Pipelines as Code, you can create pipeline definitions or templates as YAML files in the .tekton/
directory of the repository. You can reference YAML files in other repositories using remote URLs, but pipeline runs are only triggered by events in the repository containing the .tekton/
directory.
The Pipelines as Code resolver bundles the pipeline runs with all tasks as a single pipeline run without external dependencies.
-
For pipelines, use at least one pipeline run with a spec, or a separate
Pipeline
object. - For tasks, embed the task spec inside a pipeline, or define it separately as a Task object.
Parameterizing commits and URLs
You can specify the parameters of your commit and URL by using dynamic, expandable variables with the {{<var>}} format. Currently, you can use the following variables:
-
{{repo_owner}}
: The repository owner. -
{{repo_name}}
: The repository name. -
{{repo_url}}
: The repository full URL. -
{{revision}}
: Full SHA revision of a commit. -
{{sender}}
: The username or account id of the sender of the commit. -
{{source_branch}}
: The branch name where the event originated. -
{{target_branch}}
: The branch name that the event targets. For push events, it’s the same as thesource_branch
. -
{{pull_request_number}}
: The pull or merge request number, defined only for apull_request
event type. -
{{git_auth_secret}}
: The secret name that is generated automatically with Git provider’s token for checking out private repos.
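For example, a minimal sketch of a pipeline run that passes some of these variables as parameters; the run name is hypothetical and the task list is elided:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-on-pull-request
spec:
  params:
    - name: repo_url
      value: "{{ repo_url }}"
    - name: revision
      value: "{{ revision }}"
  pipelineSpec:
    params:
      - name: repo_url
      - name: revision
    tasks:
...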
Matching an event to a pipeline run
You can match different Git provider events with each pipeline by using special annotations on the pipeline run. If there are multiple pipeline runs matching an event, Pipelines as Code runs them in parallel and posts the results to the Git provider as soon as a pipeline run finishes.
Matching a pull event to a pipeline run
You can use the following example to match the pipeline-pr-main
pipeline with a pull_request
event that targets the main
branch:
...
metadata:
name: pipeline-pr-main
annotations:
pipelinesascode.tekton.dev/on-target-branch: "[main]" 1
pipelinesascode.tekton.dev/on-event: "[pull_request]"
...
- 1
- You can specify multiple branches by adding comma-separated entries. For example,
"[main, release-nightly]"
. In addition, you can specify the following:-
Full references to branches such as
"refs/heads/main"
-
Globs with pattern matching such as
"refs/heads/\*"
-
Tags such as
"refs/tags/1.\*"
-
Full references to branches such as
Matching a push event to a pipeline run
You can use the following example to match the pipeline-push-on-main
pipeline with a push
event targeting the refs/heads/main
branch:
...
metadata:
name: pipeline-push-on-main
annotations:
pipelinesascode.tekton.dev/on-target-branch: "[refs/heads/main]" 1
pipelinesascode.tekton.dev/on-event: "[push]"
...
- 1
- You can specify multiple branches by adding comma-separated entries. For example,
"[main, release-nightly]"
. In addition, you can specify the following:-
Full references to branches such as
"refs/heads/main"
-
Globs with pattern matching such as
"refs/heads/\*"
-
Tags such as
"refs/tags/1.\*"
-
Full references to branches such as
Advanced event matching
Pipelines as Code supports using Common Expression Language (CEL) based filtering for advanced event matching. If you have the pipelinesascode.tekton.dev/on-cel-expression
annotation in your pipeline run, Pipelines as Code uses the CEL expression and skips the on-target-branch
annotation. Compared to the simple on-target-branch
annotation matching, the CEL expressions allow complex filtering and negation.
To use CEL-based filtering with Pipelines as Code, consider the following examples of annotations:
To match a
pull_request
event targeting themain
branch and coming from thewip
branch:... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch == "main" && source_branch == "wip" ...
To run a pipeline only if a path has changed, you can use the
.pathChanged
suffix function with a glob pattern:... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && "docs/\*.md".pathChanged() 1 ...
- 1
- Matches all markdown files in the
docs
directory.
To match all pull requests starting with the title
[DOWNSTREAM]
:... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request && event_title.startsWith("[DOWNSTREAM]") ...
To run a pipeline on a
pull_request
event, but skip theexperimental
branch:... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch != experimental" ...
For advanced CEL-based filtering while using Pipelines as Code, you can use the following fields and suffix functions:
-
event
: Apush
orpull_request
event. -
target_branch
: The target branch. -
source_branch
: The branch of origin of apull_request
event. Forpush
events, it is the same as the target_branch
. -
event_title
: Matches the title of the event, such as the commit title for apush
event, and the title of a pull or merge request for apull_request
event. Currently, only GitHub, GitLab, and Bitbucket Cloud are the supported providers.
.pathChanged
: A suffix function to a string. The string can be a glob of a path to check if the path has changed. Currently, only GitHub and GitLab are supported as providers.
Using the temporary GitHub App token for GitHub API operations
You can use the temporary installation token generated by Pipelines as Code from GitHub App to access the GitHub API. The token value is stored in the temporary {{git_auth_secret}}
dynamic variable generated for private repositories in the git-provider-token
key.
For example, to add a comment to a pull request, you can use the github-add-comment
task from Tekton Hub using a Pipelines as Code annotation:
... pipelinesascode.tekton.dev/task: "github-add-comment" ...
You can then add a task to the tasks
section or finally
tasks in the pipeline run definition:
[...]
tasks:
- name:
taskRef:
name: github-add-comment
params:
- name: REQUEST_URL
value: "{{ repo_url }}/pull/{{ pull_request_number }}" 1
- name: COMMENT_OR_FILE
value: "Pipelines as Code IS GREAT!"
- name: GITHUB_TOKEN_SECRET_NAME
value: "{{ git_auth_secret }}"
- name: GITHUB_TOKEN_SECRET_KEY
value: "git-provider-token"
...
- 1
- By using the dynamic variables, you can reuse this snippet template for any pull request from any repository.
On GitHub Apps, the generated installation token is available for 8 hours and scoped to the repository from where the events originate unless configured differently on the cluster.
Additional resources
4.8.14. Running a pipeline run using Pipelines as Code
With the default configuration, Pipelines as Code runs any pipeline run in the .tekton/
directory of the default branch of the repository when specified events, such as a pull request or a push, occur on the repository. For example, if a pipeline run on the default branch has the annotation pipelinesascode.tekton.dev/on-event: "[pull_request]"
, it will run whenever a pull request event occurs.
In the event of a pull request or a merge request, Pipelines as Code also runs pipelines from branches other than the default branch, if the following conditions are met by the author of the pull request:
- The author is the owner of the repository.
- The author is a collaborator on the repository.
- The author is a public member on the organization of the repository.
-
The pull request author is listed in an
OWNER
file located in the repository root of the main
branch as defined in the GitHub configuration for the repository. Also, the pull request author is added to either the approvers
or reviewers
section. For example, if an author is listed in the approvers
section, then a pull request raised by that author starts the pipeline run.
... approvers: - approved ...
If the pull request author does not meet the requirements, another user who meets the requirements can comment /ok-to-test
on the pull request, and start the pipeline run.
Pipeline run execution
A pipeline run always runs in the namespace of the Repository
CRD associated with the repository that generated the event.
You can observe the execution of your pipeline runs using the tkn pac
CLI tool.
To follow the execution of the last pipeline run, use the following example:
$ tkn pac logs -n <my-pipeline-ci> -L 1
- 1
my-pipeline-ci
is the namespace for the Repository
CRD.
To follow the execution of any pipeline run interactively, use the following example:
$ tkn pac logs -n <my-pipeline-ci> 1
- 1
my-pipeline-ci
is the namespace for the Repository
CRD. If you need to view a pipeline run other than the last one, you can use the tkn pac logs
command to select a PipelineRun
attached to the repository.
If you have configured Pipelines as Code with a GitHub App, Pipelines as Code posts a URL in the Checks tab of the GitHub App. You can click the URL and follow the pipeline execution.
Restarting a pipeline run
You can restart a pipeline run without triggering new events, such as sending a new commit to your branch or raising a pull request. On a GitHub App, go to the Checks tab and click Re-run.
If you target a pull or merge request, use the following comments inside your pull request to restart all or specific pipeline runs:
-
The
/retest
comment restarts all pipeline runs. -
The
/retest <pipelinerun-name>
comment restarts a specific pipeline run. -
The
/cancel
comment cancels all pipeline runs. -
The
/cancel <pipelinerun-name>
comment cancels a specific pipeline run.
The results of the comments are visible under the Checks tab of a GitHub App.
4.8.15. Monitoring pipeline run status using Pipelines as Code
Depending on the context and supported tools, you can monitor the status of a pipeline run in different ways.
Status on GitHub Apps
When a pipeline run finishes, the status is added to the Checks tab with limited information on how long each task of your pipeline took, and the output of the tkn pipelinerun describe
command.
Log error snippet
When Pipelines as Code detects an error in one of the tasks of a pipeline, a small snippet consisting of the last 3 lines in the task breakdown of the first failed task is displayed.
Pipelines as Code avoids leaking secrets by looking into the pipeline run and replacing secret values with hidden characters. However, Pipelines as Code cannot hide secrets coming from workspaces and envFrom source.
Annotations for log error snippets
In the Pipelines as Code config map, if you set the error-detection-from-container-logs
parameter to true
, Pipelines as Code detects the errors from the container logs and adds them as annotations on the pull request where the error occurred.
This feature is in Technology Preview.
Currently, Pipelines as Code supports only the simple cases where the error looks like makefile
or grep
output of the following format:
<filename>:<line>:<column>: <error message>
You can customize the regular expression used to detect the errors with the error-detection-simple-regexp
field. The regular expression uses named groups to give flexibility on how to specify the matching. The groups needed to match are filename, line, and error. You can view the Pipelines as Code config map for the default regular expression.
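For example, a sketch of a custom value for this field, similar in shape to the default expression; the named groups filename, line, and error are required, while the column group shown here is optional:
error-detection-simple-regexp: |
  ^(?P<filename>[^:]*):(?P<line>[0-9]+):(?P<column>[0-9]+)?([ ]*)?(?P<error>.*)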
By default, Pipelines as Code scans only the last 50 lines of the container logs. You can increase this value in the error-detection-max-number-of-lines
field or set -1
for an unlimited number of lines. However, such configurations may increase the memory usage of the watcher.
Status for webhook
For webhooks, when the event is a pull request, the status is added as a comment on the pull or merge request.
Failures
If a namespace is matched to a Repository
CRD, Pipelines as Code emits its failure log messages in the Kubernetes events inside the namespace.
Status associated with Repository CRD
The last five status messages for a pipeline run are stored inside the Repository
custom resource.
$ oc get repo -n <pipelines-as-code-ci>
NAME URL NAMESPACE SUCCEEDED REASON STARTTIME COMPLETIONTIME pipelines-as-code-ci https://github.com/openshift-pipelines/pipelines-as-code pipelines-as-code-ci True Succeeded 59m 56m
Using the tkn pac describe
command, you can extract the status of the runs associated with your repository and its metadata.
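For example, a sketch of describing the repository from the previous output; the exact arguments can vary with the tkn pac version:
$ tkn pac describe pipelines-as-code-ci -n pipelines-as-code-ci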
Notifications
Pipelines as Code does not manage notifications. If you need to have notifications, use the finally
feature of pipelines.
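For example, a sketch that reuses the github-add-comment task shown earlier as a finally task inside a pipeline run, so that a comment is posted when the run completes; the task name notify-pull-request is hypothetical. As in the earlier snippet, add the pipelinesascode.tekton.dev/task: "github-add-comment" annotation so that the resolver fetches the task.
...
  pipelineSpec:
    finally:
      - name: notify-pull-request
        taskRef:
          name: github-add-comment
        params:
          - name: REQUEST_URL
            value: "{{ repo_url }}/pull/{{ pull_request_number }}"
          - name: COMMENT_OR_FILE
            value: "The pipeline run has finished."
          - name: GITHUB_TOKEN_SECRET_NAME
            value: "{{ git_auth_secret }}"
          - name: GITHUB_TOKEN_SECRET_KEY
            value: "git-provider-token"
...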
4.8.16. Using private repositories with Pipelines as Code
Pipelines as Code supports private repositories by creating or updating a secret in the target namespace with the user token. The git-clone
task from Tekton Hub uses the user token to clone private repositories.
Whenever Pipelines as Code creates a new pipeline run in the target namespace, it creates or updates a secret with the pac-gitauth-<REPOSITORY_OWNER>-<REPOSITORY_NAME>-<RANDOM_STRING>
format.
You must reference the secret with the basic-auth
workspace in your pipeline run and pipeline definitions, which is then passed on to the git-clone
task.
... workspace: - name: basic-auth secret: secretName: "{{ git_auth_secret }}" ...
In the pipeline, you can reference the basic-auth
workspace for the git-clone
task to reuse:
...
workspaces:
- name: basic-auth
params:
- name: repo_url
- name: revision
...
tasks:
workspaces:
- name: basic-auth
workspace: basic-auth
...
tasks:
- name: git-clone-from-catalog
taskRef:
name: git-clone 1
params:
- name: url
value: $(params.repo_url)
- name: revision
value: $(params.revision)
...
- 1
- The
git-clone
task picks up thebasic-auth
workspace and uses it to clone the private repository.
You can modify this configuration by setting the secret-auto-create
flag to either a false
or true
value, as required in the Pipelines as Code config map.
Additional resources
4.8.17. Cleaning up pipeline run using Pipelines as Code
There can be many pipeline runs in a user namespace. By setting the max-keep-runs
annotation, you can configure Pipelines as Code to retain a limited number of pipeline runs that match an event. For example:
...
pipelinesascode.tekton.dev/max-keep-runs: "<max_number>" 1
...
- 1
- Pipelines as Code starts cleaning up right after it finishes a successful execution, retaining only the maximum number of pipeline runs configured using the annotation.Note
- Pipelines as Code skips cleaning the running pipelines but cleans up the pipeline runs with an unknown status.
- Pipelines as Code skips cleaning a failed pull request.
4.8.18. Using incoming webhook with Pipelines as Code
Using an incoming webhook URL and a shared secret, you can start a pipeline run in a repository.
To use incoming webhooks, specify the following within the spec
section of the Repository
CRD:
- The incoming webhook URL that Pipelines as Code matches.
The Git provider and the user token. Currently, Pipelines as Code supports
github
,gitlab
, andbitbucket-cloud
.NoteWhen using incoming webhook URLs in the context of GitHub app, you must specify the token.
- The target branches and a secret for the incoming webhook URL.
Example: Repository
CRD with incoming webhook
apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: repo namespace: ns spec: url: "https://github.com/owner/repo" git_provider: type: github secret: name: "owner-token" incoming: - targets: - main secret: name: repo-incoming-secret type: webhook-url
Example: The repo-incoming-secret
secret for incoming webhook
apiVersion: v1 kind: Secret metadata: name: repo-incoming-secret namespace: ns type: Opaque stringData: secret: <very-secure-shared-secret>
To trigger a pipeline run located in the .tekton
directory of a Git repository, use the following command:
$ curl -X POST 'https://control.pac.url/incoming?secret=very-secure-shared-secret&repository=repo&branch=main&pipelinerun=target_pipelinerun'
Pipelines as Code matches the incoming URL and treats it as a push
event. However, Pipelines as Code does not report the status of the pipeline runs triggered by this command.
To get a report or a notification, add it directly with a finally
task to your pipeline. Alternatively, you can inspect the Repository
CRD with the tkn pac
CLI tool.
4.8.19. Customizing Pipelines as Code configuration
To customize Pipelines as Code, cluster administrators can configure the following parameters using the pipelines-as-code
config map in the pipelines-as-code
namespace:
Parameter | Description | Default |
---|---|---|
| The name of the application. For example, the name displayed in the GitHub Checks labels. |
|
|
The number of the days for which the executed pipeline runs are kept in the Note that this configmap setting does not affect the cleanups of a user’s pipeline runs, which are controlled by the annotations on the pipeline run definition in the user’s GitHub repository. | |
| Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. |
|
| When enabled, allows remote tasks from pipeline run annotations. |
|
| The base URL for the Tekton Hub API. | |
| The Tekton Hub catalog name. |
|
|
The URL of the Tekton Hub dashboard. Pipelines as Code uses this URL to generate a | NA |
| Indicates whether to secure the service requests by querying IP ranges for a public Bitbucket. Changing the parameter’s default value might result into a security issue. |
|
| Indicates whether to provide an additional set of IP ranges or networks, which are separated by commas. | NA |
|
A maximum limit for the | NA |
|
A default limit for the | NA |
| Configures new GitHub repositories automatically. Pipelines as Code sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. |
|
|
Configures a template to automatically generate the namespace for your new repository, if |
|
| Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. |
|
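For example, a sketch of updating one of these parameters with the oc CLI; the parameter key application-name and its value are assumptions for illustration:
$ oc patch configmap pipelines-as-code -n pipelines-as-code \
    --type merge -p '{"data":{"application-name":"My CI"}}'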
4.8.20. Pipelines as Code command reference
The tkn pac
CLI tool offers the following capabilities:
- Bootstrap Pipelines as Code installation and configuration.
- Create a new Pipelines as Code repository.
- List all Pipelines as Code repositories.
- Describe a Pipelines as Code repository and the associated runs.
- Generate a simple pipeline run to get started.
- Resolve a pipeline run as if it was executed by Pipelines as Code.
You can use the commands corresponding to the capabilities for testing and experimentation, so that you don’t have to make changes to the Git repository containing the application source code.
4.8.20.1. Basic syntax
$ tkn pac [command or options] [arguments]
4.8.20.2. Global options
$ tkn pac --help
4.8.20.3. Utility commands
4.8.20.3.1. bootstrap
Command | Description |
---|---|
| Installs and configures Pipelines as Code for Git repository hosting service providers, such as GitHub and GitHub Enterprise. |
| Installs the nightly build of Pipelines as Code. |
| Overrides the OpenShift route URL.
By default, If you do not have an OpenShift Container Platform cluster, it asks you for the public URL that points to the ingress endpoint. |
|
Create a GitHub application and secrets in the |
4.8.20.3.2. repository
Command | Description |
---|---|
| Creates a new Pipelines as Code repository and a namespace based on the pipeline run template. |
| Lists all the Pipelines as Code repositories and displays the last status of the associated runs. |
| Describes a Pipelines as Code repository and the associated runs. |
4.8.20.3.3. generate
Command | Description |
---|---|
| Generates a simple pipeline run. When executed from the directory containing the source code, it automatically detects current Git information. In addition, it uses basic language detection capability and adds extra tasks depending on the language.
For example, if it detects a |
4.8.20.3.4. resolve
Command | Description |
---|---|
| Executes a pipeline run as if it is owned by the Pipelines as Code service. |
|
Displays the status of a live pipeline run that uses the template in Combined with a Kubernetes installation running on your local machine, you can observe the pipeline run without generating a new commit. If you run the command from a source code repository, it attempts to detect the current Git information and automatically resolve parameters such as current revision or branch. |
| Executes a pipeline run by overriding default parameter values derived from the Git repository.
The
You can override the default information gathered from the Git repository by specifying parameter values using the |
4.8.21. Additional resources
4.9. Working with Red Hat OpenShift Pipelines in the web console
You can use the Administrator or Developer perspective to create and modify Pipeline
, PipelineRun
, and Repository
objects from the Pipelines page in the OpenShift Container Platform web console. You can also use the +Add page in the Developer perspective of the web console to create CI/CD pipelines for your software delivery process.
4.9.1. Working with Red Hat OpenShift Pipelines in the Developer perspective
In the Developer perspective, you can access the following options for creating pipelines from the +Add page:
-
Use the +Add → Pipelines → Pipeline builder option to create customized pipelines for your application.
-
Use the +Add → From Git option to create pipelines using pipeline templates and resources while creating an application.
After you create the pipelines for your application, you can view and visually interact with the deployed pipelines in the Pipelines view. You can also use the Topology view to interact with the pipelines created using the From Git option. You must apply custom labels to pipelines created using the Pipeline builder to see them in the Topology view.
Prerequisites
- You have access to an OpenShift Container Platform cluster, and have switched to the Developer perspective.
- You have the Pipelines Operator installed in your cluster.
- You are a cluster administrator or a user with create and edit permissions.
- You have created a project.
4.9.2. Constructing Pipelines using the Pipeline builder
In the Developer perspective of the console, you can use the +Add → Pipeline → Pipeline builder option to:
- Configure pipelines using either the Pipeline builder or the YAML view.
- Construct a pipeline flow using existing tasks and cluster tasks. When you install the OpenShift Pipelines Operator, it adds reusable pipeline cluster tasks to your cluster.
- Specify the type of resources required for the pipeline run, and if required, add additional parameters to the pipeline.
- Reference these pipeline resources in each of the tasks in the pipeline as input and output resources.
- If required, reference any additional parameters added to the pipeline in the task. The parameters for a task are prepopulated based on the specifications of the task.
- Use the Operator-installed, reusable snippets and samples to create detailed pipelines.
Procedure
- In the +Add view of the Developer perspective, click the Pipeline tile to see the Pipeline builder page.
Configure the pipeline using either the Pipeline builder view or the YAML view.
NoteThe Pipeline builder view supports a limited number of fields whereas the YAML view supports all available fields. Optionally, you can also use the Operator-installed, reusable snippets and samples to create detailed Pipelines.
Figure 4.1. YAML view
Configure your pipeline by using Pipeline builder:
- In the Name field, enter a unique name for the pipeline.
In the Tasks section:
- Click Add task.
- Search for a task using the quick search field and select the required task from the displayed list.
Click Add or Install and add. In this example, use the s2i-nodejs task.
NoteThe search list contains all the Tekton Hub tasks and tasks available in the cluster. Also, if a task is already installed it will show Add to add the task whereas it will show Install and add to install and add the task. It will show Update and add when you add the same task with an updated version.
To add sequential tasks to the pipeline:
-
Click the plus icon to the right or left of the task → click Add task.
Click Add or Install and add.
Figure 4.2. Pipeline builder
-
Click the plus icon to the right or left of the task
To add a final task:
-
Click the Add finally task → click Add task.
- Click Add or Install and add.
-
Click the Add finally task
In the Resources section, click Add Resources to specify the name and type of resources for the pipeline run. These resources are then used by the tasks in the pipeline as inputs and outputs. For this example:
-
Add an input resource. In the Name field, enter
Source
, and then from the Resource Type drop-down list, select Git. Add an output resource. In the Name field, enter
Img
, and then from the Resource Type drop-down list, select Image.NoteA red icon appears next to the task if a resource is missing.
-
Add an input resource. In the Name field, enter
- Optional: The Parameters for a task are pre-populated based on the specifications of the task. If required, use the Add Parameters link in the Parameters section to add additional parameters.
- In the Workspaces section, click Add workspace and enter a unique workspace name in the Name field. You can add multiple workspaces to the pipeline.
In the Tasks section, click the s2i-nodejs task to see the side panel with details for the task. In the task side panel, specify the resources and parameters for the s2i-nodejs task:
- If required, in the Parameters section, add more parameters to the default ones, by using the $(params.<param-name>) syntax.
-
In the Image section, enter
Img
as specified in the Resources section. - Select a workspace from the source drop-down under Workspaces section.
- Add resources, parameters, and workspaces to the openshift-client task.
- Click Create to create and view the pipeline in the Pipeline Details page.
- Click the Actions drop-down menu then click Start, to see the Start Pipeline page.
- The Workspaces section lists the workspaces you created earlier. Use the respective drop-down to specify the volume source for your workspace. You have the following options: Empty Directory, Config Map, Secret, PersistentVolumeClaim, or VolumeClaimTemplate.
4.9.3. Creating OpenShift Pipelines along with applications
To create pipelines along with applications, use the From Git option in the Add+ view of the Developer perspective. You can view all of your available pipelines and select the pipelines you want to use to create applications while importing your code or deploying an image.
The Tekton Hub Integration is enabled by default and you can see tasks from the Tekton Hub that are supported by your cluster. Administrators can opt out of the Tekton Hub Integration and the Tekton Hub tasks will no longer be displayed. You can also check whether a webhook URL exists for a generated pipeline. Default webhooks are added for the pipelines that are created using the +Add flow and the URL is visible in the side panel of the selected resources in the Topology view.
For more information, see Creating applications using the Developer perspective.
4.9.4. Interacting with pipelines using the Developer perspective
The Pipelines view in the Developer perspective lists all the pipelines in a project, along with the following details:
- The namespace in which the pipeline was created
- The last pipeline run
- The status of the tasks in the pipeline run
- The status of the pipeline run
- The creation time of the last pipeline run
Procedure
- In the Pipelines view of the Developer perspective, select a project from the Project drop-down list to see the pipelines in that project.
Click the required pipeline to see the Pipeline details page.
By default, the Details tab displays a visual representation of all the serial tasks, parallel tasks,
finally
tasks, and when expressions in the pipeline. The tasks and thefinally
tasks are listed in the lower right portion of the page. Click the listed Tasks and Finally tasks to view the task details.Figure 4.3. Pipeline details
Optional: On the Pipeline details page, click the Metrics tab to see the following information about pipelines:
- Pipeline Success Ratio
- Number of Pipeline Runs
- Pipeline Run Duration
Task Run Duration
You can use this information to improve the pipeline workflow and eliminate issues early in the pipeline lifecycle.
- Optional: Click the YAML tab to edit the YAML file for the pipeline.
Optional: Click the Pipeline Runs tab to see the completed, running, or failed runs for the pipeline.
The Pipeline Runs tab provides details about the pipeline run, the status of the task, and a link to debug failed pipeline runs. Use the Options menu to stop a running pipeline, to rerun a pipeline using the same parameters and resources as that of the previous pipeline execution, or to delete a pipeline run.
Click the required pipeline run to see the Pipeline Run details page. By default, the Details tab displays a visual representation of all the serial tasks, parallel tasks,
finally
tasks, and when expressions in the pipeline run. The results for successful runs are displayed under the Pipeline Run results pane at the bottom of the page. Additionally, you can see only the tasks from Tekton Hub that are supported by the cluster. While looking at a task, you can click the link beside it to jump to the task documentation.NoteThe Details section of the Pipeline Run Details page displays a Log Snippet of the failed pipeline run. Log Snippet provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed run.
On the Pipeline Run details page, click the Task Runs tab to see the completed, running, and failed runs for the task.
The Task Runs tab provides information about the task run along with the links to its task and pod, and also the status and duration of the task run. Use the Options menu to delete a task run.
Click the required task run to see the Task Run details page. The results for successful runs are displayed under the Task Run results pane at the bottom of the page.
NoteThe Details section of the Task Run details page displays a Log Snippet of the failed task run. Log Snippet provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed task run.
- Click the Parameters tab to see the parameters defined in the pipeline. You can also add or edit additional parameters, as required.
- Click the Resources tab to see the resources defined in the pipeline. You can also add or edit additional resources, as required.
4.9.5. Starting pipelines from Pipelines view
After you create a pipeline, you need to start it to execute the included tasks in the defined sequence. You can start a pipeline from the Pipelines view, the Pipeline Details page, or the Topology view.
Procedure
To start a pipeline using the Pipelines view:
- In the Pipelines view of the Developer perspective, click the Options menu adjoining a pipeline, and select Start.
The Start Pipeline dialog box displays the Git Resources and the Image Resources based on the pipeline definition.
NoteFor pipelines created using the From Git option, the Start Pipeline dialog box also displays an
APP_NAME
field in the Parameters section, and all the fields in the dialog box are prepopulated by the pipeline template.- If you have resources in your namespace, the Git Resources and the Image Resources fields are prepopulated with those resources. If required, use the drop-downs to select or create the required resources and customize the pipeline run instance.
Optional: Modify the Advanced Options to add the credentials that authenticate the specified private Git server or the image registry.
- Under Advanced Options, click Show Credentials Options and select Add Secret.
In the Create Source Secret section, specify the following:
- A unique Secret Name for the secret.
- In the Designated provider to be authenticated section, specify the provider to be authenticated in the Access to field, and the base Server URL.
Select the Authentication Type and provide the credentials:
For the Authentication Type
Image Registry Credentials
, specify the Registry Server Address that you want to authenticate, and provide your credentials in the Username, Password, and Email fields.Select Add Credentials if you want to specify an additional Registry Server Address.
-
For the Authentication Type
Basic Authentication
, specify the values for the UserName and Password or Token fields. For the Authentication Type
SSH Keys
, specify the value of the SSH Private Key field.NoteFor basic authentication and SSH authentication, you can use annotations such as:
-
tekton.dev/git-0: https://github.com
-
tekton.dev/git-1: https://gitlab.com
.
-
- Select the check mark to add the secret.
You can add multiple secrets based upon the number of resources in your pipeline.
- Click Start to start the pipeline.
The Pipeline Run Details page displays the pipeline being executed. After the pipeline starts, the tasks and steps within each task are executed. You can:
- Hover over the tasks to see the time taken to execute each step.
- Click on a task to see the logs for each step in the task.
- Click the Logs tab to see the logs relating to the execution sequence of the tasks. You can also expand the pane and download the logs individually or in bulk, by using the relevant button.
Click the Events tab to see the stream of events generated by a pipeline run.
You can use the Task Runs, Logs, and Events tabs to assist in debugging a failed pipeline run or a failed task run.
Figure 4.4. Pipeline run details
4.9.6. Starting pipelines from Topology view
For pipelines created using the From Git option, you can use the Topology view to interact with pipelines after you start them:
To see the pipelines created using Pipeline builder in the Topology view, customize the pipeline labels to link the pipeline with the application workload.
Procedure
- Click Topology in the left navigation panel.
- Click the application to display Pipeline Runs in the side panel.
In Pipeline Runs, click Start Last Run to start a new pipeline run with the same parameters and resources as the previous one. This option is disabled if a pipeline run has not been initiated. You can also start a pipeline run when you create it.
Figure 4.5. Pipelines in Topology view
In the Topology page, hover to the left of the application to see the status of its pipeline run. After a pipeline is added, a bottom left icon indicates that there is an associated pipeline.
4.9.7. Interacting with pipelines from Topology view
The side panel of the application node in the Topology page displays the status of a pipeline run and you can interact with it.
- If a pipeline run does not start automatically, the side panel displays a message that the pipeline cannot be started automatically, and you must start it manually.
- If a pipeline is created but the user has not started the pipeline, its status is Not started. When the user clicks the Not started status icon, the start dialog box opens in the Topology view.
- If the pipeline has no build or build config, the Builds section is not visible. If there is a pipeline and build config, the Builds section is visible.
- The side panel displays a Log Snippet when a pipeline run fails on a specific task run. You can view the Log Snippet in the Pipeline Runs section, under the Resources tab. It provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed run.
4.9.8. Editing Pipelines
You can edit the Pipelines in your cluster using the Developer perspective of the web console:
Procedure
- In the Pipelines view of the Developer perspective, select the Pipeline you want to edit to see the details of the Pipeline. In the Pipeline Details page, click Actions and select Edit Pipeline.
On the Pipeline builder page, you can perform the following tasks:
- Add additional Tasks, parameters, or resources to the Pipeline.
- Click the Task you want to modify to see the Task details in the side panel and modify the required Task details, such as the display name, parameters, and resources.
- Alternatively, to delete the Task, click the Task, and in the side panel, click Actions and select Remove Task.
- Click Save to save the modified Pipeline.
4.9.9. Deleting Pipelines
You can delete the Pipelines in your cluster using the Developer perspective of the web console.
Procedure
- In the Pipelines view of the Developer perspective, click the Options menu adjoining a Pipeline, and select Delete Pipeline.
- In the Delete Pipeline confirmation prompt, click Delete to confirm the deletion.
4.9.9.1. Additional resources
4.9.10. Creating pipeline templates in the Administrator perspective
As a cluster administrator, you can create pipeline templates that developers can reuse when they create a pipeline on the cluster.
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster administrator permissions, and have switched to the Administrator perspective.
- You have installed the Pipelines Operator in your cluster.
Procedure
- Navigate to the Pipelines page to view existing pipeline templates.
- Click the icon to go to the Import YAML page.
Add the YAML for your pipeline template. The template must include the following information:
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: # ... namespace: openshift 1 labels: pipeline.openshift.io/runtime: <runtime> 2 pipeline.openshift.io/type: <pipeline-type> 3 # ...
- 1
- The template must be created in the
openshift
namespace. - 2
- The template must contain the
pipeline.openshift.io/runtime
label. The accepted runtime values for this label arenodejs
,golang
,dotnet
,java
,php
,ruby
,perl
,python
,nginx
, andhttpd
. - 3
- The template must contain the
pipeline.openshift.io/type:
label. The accepted type values for this label areopenshift
,knative
, andkubernetes
.
- Click Create. After the pipeline has been created, you are taken to the Pipeline details page, where you can view information about or edit your pipeline.
4.10. Customizing configurations in the TektonConfig custom resource
In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig
custom resource (CR):
- Configuring the Red Hat OpenShift Pipelines control plane
- Changing the default service account
- Disabling the service monitor
- Disabling cluster tasks and pipeline templates
- Disabling the integration of Tekton Hub
- Disabling the automatic creation of RBAC resources
- Pruning of task runs and pipeline runs
4.10.1. Prerequisites
- You have installed the Red Hat OpenShift Pipelines Operator.
4.10.2. Configuring the Red Hat OpenShift Pipelines control plane
You can customize the Pipelines control plane by editing the configuration fields in the TektonConfig
custom resource (CR). The Red Hat OpenShift Pipelines Operator automatically adds the configuration fields with their default values so that you can use the Pipelines control plane.
Procedure
-
In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
Use the Search by name box to search for the
tektonconfigs.operator.tekton.dev
custom resource definition (CRD). Click TektonConfig to see the CRD details page. - Click the Instances tab.
-
Click the config instance to see the
TektonConfig
CR details. - Click the YAML tab.
Edit the
TektonConfig
YAML file based on your requirements.Example of
TektonConfig
CR with default valuesapiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: running-in-environment-with-injected-sidecars: true metrics.taskrun.duration-type: histogram metrics.pipelinerun.duration-type: histogram await-sidecar-readiness: true params: - name: enableMetrics value: 'true' default-service-account: pipeline require-git-ssh-secret-known-hosts: false enable-tekton-oci-bundles: false metrics.taskrun.level: task metrics.pipelinerun.level: pipeline embedded-status: both enable-api-fields: stable enable-provenance-in-status: false enable-custom-tasks: true disable-creds-init: false disable-affinity-assistant: true
4.10.2.1. Modifiable fields with default values
The following list includes all modifiable fields with their default values in the TektonConfig
CR:
- running-in-environment-with-injected-sidecars (default: true): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start.

  Note: For clusters that use injected sidecars, setting this field to false can lead to an unexpected behavior.

- await-sidecar-readiness (default: true): Set this field to false to stop Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. This allows tasks to be run in environments that do not support the downwardAPI volume type.
- default-service-account (default: pipeline): This field contains the default service account name to use for the TaskRun and PipelineRun resources, if none is specified.
- require-git-ssh-secret-known-hosts (default: false): Setting this field to true requires that any Git SSH secret must include the known_hosts field.
  - For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
- enable-tekton-oci-bundles (default: false): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle.
- embedded-status (default: both): This field has three acceptable values:
  - full: Enables full embedding of Run and TaskRun statuses in the PipelineRun status.
  - minimal: Populates the ChildReferences field with information, such as name, kind, and API version for each run and task run in the PipelineRun status.
  - both: Applies both full and minimal values.

  Note: The embedded-status field is deprecated and will be removed in a future release. In addition, the pipeline default embedded status will be changed to minimal.

- enable-api-fields (default: stable): Setting this field determines which features are enabled. Acceptable value is stable, beta, or alpha.

  Note: Red Hat OpenShift Pipelines does not support the alpha value.

- enable-provenance-in-status (default: false): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field contains metadata about resources used in the task run and pipeline run, such as the source from where a remote task or pipeline definition was fetched.
- enable-custom-tasks (default: true): Set this field to false to disable the use of custom tasks in pipelines.
- disable-creds-init (default: false): Set this field to true to prevent Pipelines from scanning attached service accounts and injecting any credentials into your steps.
- disable-affinity-assistant (default: true): Set this field to false to enable affinity assistant for each TaskRun resource sharing a persistent volume claim workspace.
Metrics options
You can modify the default values of the following metrics fields in the TektonConfig
CR:
- metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram): Setting these fields determines the duration type for a task or pipeline run. Acceptable value is gauge or histogram.
- metrics.taskrun.level (default: task): This field determines the level of the task run metrics. Acceptable value is taskrun, task, or namespace.
- metrics.pipelinerun.level (default: pipeline): This field determines the level of the pipeline run metrics. Acceptable value is pipelinerun, pipeline, or namespace.
4.10.2.2. Optional configuration fields
The following fields do not have a default value, and are considered only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig
custom resource (CR).
- default-timeout-minutes: This field sets the default timeout for the TaskRun and PipelineRun resources, if none is specified when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the task run or pipeline run is timed out and cancelled. For example, default-timeout-minutes: 60 sets 60 minutes as default.
- default-managed-by-label-value: This field contains the default value given to the app.kubernetes.io/managed-by label that is applied to all TaskRun pods, if none is specified. For example, default-managed-by-label-value: tekton-pipelines.
- default-pod-template: This field sets the default TaskRun and PipelineRun pod templates, if none is specified.
- default-cloud-events-sink: This field sets the default CloudEvents sink that is used for the TaskRun and PipelineRun resources, if none is specified.
- default-task-run-workspace-binding: This field contains the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare.
- default-affinity-assistant-pod-template: This field sets the default PipelineRun pod template that is used for affinity assistant pods, if none is specified.
- default-max-matrix-combinations-count: This field contains the default maximum number of combinations generated from a matrix, if none is specified.
4.10.3. Changing the default service account for Pipelines
You can change the default service account for Pipelines by editing the default-service-account
field in the .spec.pipeline
and .spec.trigger
specifications. The default service account name is pipeline
.
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-service-account: pipeline
  trigger:
    default-service-account: pipeline
    enable-api-fields: stable
4.10.4. Disabling the service monitor
You can disable the service monitor, which is part of Pipelines, to expose the telemetry data. To disable the service monitor, set the enableMetrics
parameter to false
in the .spec.pipeline
specification of the TektonConfig
custom resource (CR):
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    params:
      - name: enableMetrics
        value: 'false'
4.10.5. Disabling cluster tasks and pipeline templates
By default, the TektonAddon
custom resource (CR) installs clusterTasks
and pipelineTemplates
resources along with Pipelines on the cluster.
You can disable installation of the clusterTasks
and pipelineTemplates
resources by setting the parameter value to false
in the .spec.addon
specification. In addition, you can disable the communityClusterTasks
parameter.
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  addon:
    params:
      - name: clusterTasks
        value: 'false'
      - name: pipelineTemplates
        value: 'false'
      - name: communityClusterTasks
        value: 'true'
4.10.6. Disabling the integration of Tekton Hub
You can disable the integration of Tekton Hub in the web console Developer perspective by setting the enable-devconsole-integration
parameter to false
in the TektonConfig
custom resource (CR).
Example of disabling Tekton Hub
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    params:
      - name: enable-devconsole-integration
        value: false
4.10.7. Disabling the automatic creation of RBAC resources
The default installation of the Red Hat OpenShift Pipelines Operator creates multiple role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-*
regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding
security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc
SCC has the RunAsAny
privilege.
To disable the automatic creation of cluster-wide RBAC resources after the Red Hat OpenShift Pipelines Operator is installed, cluster administrators can set the createRbacResource
parameter to false
in the cluster-level TektonConfig
custom resource (CR).
Example TektonConfig
CR
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: createRbacResource
      value: "false"
...
As a cluster administrator or a user with appropriate privileges, when you disable the automatic creation of RBAC resources for all namespaces, the default ClusterTask
resource does not work. For the ClusterTask
resource to function, you must create the RBAC resources manually for each intended namespace.
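As a hedged sketch of such a manual step, the following RoleBinding grants the pipeline service account in one namespace the use of the pipelines-scc SCC through a cluster role. It is modeled on the RoleBinding and ClusterRole examples shown later in this chapter, and the object names are assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipelines-scc-rolebinding        # illustrative name
  namespace: <namespace>                 # the namespace that needs the RBAC resources
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-scc-clusterrole        # cluster role that allows use of the pipelines-scc SCC
subjects:
  - kind: ServiceAccount
    name: pipeline
    namespace: <namespace>

Repeat this for every namespace in which the ClusterTask resource must keep working.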
4.10.8. Automatic pruning of task runs and pipeline runs
Stale TaskRun
and PipelineRun
objects and their executed instances occupy physical resources that can be used for active runs. For optimal utilization of these resources, Red Hat OpenShift Pipelines provides annotations that cluster administrators can use to automatically prune the unused objects and their instances in various namespaces.
Configuring automatic pruning by specifying annotations affects the entire namespace. You cannot selectively auto-prune an individual task run or pipeline run in a namespace.
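The per-namespace annotations described in the next section override the cluster-wide pruner that is typically configured in the TektonConfig CR, which the schedule annotation explicitly compares against. As a rough, hedged sketch of such a cluster-wide configuration, with illustrative values:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
      - pipelinerun
      - taskrun
    keep: 100              # illustrative number of runs to retain
    schedule: "0 8 * * *"  # illustrative daily schedule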
4.10.8.1. Annotations for automatically pruning task runs and pipeline runs
To automatically prune task runs and pipeline runs in a namespace, you can set the following annotations in the namespace:
- operator.tekton.dev/prune.schedule: If the value of this annotation is different from the value specified in the TektonConfig custom resource definition, a new cron job in that namespace is created.
- operator.tekton.dev/prune.skip: When set to true, the namespace for which it is configured is not pruned.
- operator.tekton.dev/prune.resources: This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to "pipelinerun". To prune multiple resources, such as task run and pipeline run, set this annotation to "taskrun, pipelinerun".
- operator.tekton.dev/prune.keep: Use this annotation to retain a resource without pruning.
- operator.tekton.dev/prune.keep-since: Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources which were created not more than five days ago, set keep-since to 7200.

  Note: The keep and keep-since annotations are mutually exclusive. For any resource, you must configure only one of them.

- operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since.
For example, consider the following annotations that retain all task runs and pipeline runs created in the last five days, and deletes the older resources:
Example of auto-pruning annotations
...
annotations:
  operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
  operator.tekton.dev/prune.keep-since: 7200
...
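If you prefer the CLI, the same annotations can be applied to a namespace with the oc annotate command; the namespace name below is a placeholder:

$ oc annotate namespace <namespace> \
    operator.tekton.dev/prune.resources="taskrun, pipelinerun" \
    operator.tekton.dev/prune.keep-since=7200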
4.10.9. Additional resources
4.11. Reducing resource consumption of OpenShift Pipelines
If you use clusters in multi-tenant environments, you must control the consumption of CPU, memory, and storage resources for each project and Kubernetes object. This helps prevent any one application from consuming too many resources and affecting other applications.
To define the final resource limits that are set on the resulting pods, Red Hat OpenShift Pipelines uses the resource quota limits and limit ranges of the project in which they are executed.
To restrict resource consumption in your project, you can:
- Set and manage resource quotas to limit the aggregate resource consumption.
- Use limit ranges to restrict resource consumption for specific objects, such as pods, images, image streams, and persistent volume claims.
4.11.1. Understanding resource consumption in pipelines
Each task consists of a number of required steps to be executed in a particular order defined in the steps
field of the Task
resource. Every task runs as a pod, and each step runs as a container within that pod.
Steps are executed one at a time. The pod that executes the task only requests enough resources to run a single container image (step) in the task at a time, and thus does not reserve resources for all the steps in the task at once.
The Resources
field in the steps
spec specifies the limits for resource consumption. By default, the resource requests for the CPU, memory, and ephemeral storage are set to BestEffort
(zero) values or to the minimums set through limit ranges in that project.
Example configuration of resource requests and limits for a step
spec:
  steps:
    - name: <step_name>
      resources:
        requests:
          memory: 2Gi
          cpu: 600m
        limits:
          memory: 4Gi
          cpu: 900m
When the LimitRange
parameter and the minimum values for container resource requests are specified in the project in which the pipeline and task runs are executed, Red Hat OpenShift Pipelines looks at all the LimitRange
values in the project and uses the minimum values instead of zero.
Example configuration of limit range parameters at a project level
apiVersion: v1
kind: LimitRange
metadata:
  name: <limit_container_resource>
spec:
  limits:
    - max:
        cpu: "600m"
        memory: "2Gi"
      min:
        cpu: "200m"
        memory: "100Mi"
      default:
        cpu: "500m"
        memory: "800Mi"
      defaultRequest:
        cpu: "100m"
        memory: "100Mi"
      type: Container
...
4.11.2. Mitigating extra resource consumption in pipelines
When you have resource limits set on the containers in your pod, OpenShift Container Platform sums up the resource limits requested, because all containers in the pod run simultaneously.
To consume the minimum amount of resources needed to execute one step at a time in the invoked task, Red Hat OpenShift Pipelines requests the maximum CPU, memory, and ephemeral storage as specified in the step that requires the most amount of resources. This ensures that the resource requirements of all the steps are met. Requests other than the maximum values are set to zero.
However, this behavior can lead to higher resource usage than required. If you use resource quotas, this could also lead to unschedulable pods.
For example, consider a task with two steps that uses scripts, and that does not define any resource limits and requests. The resulting pod has two init containers (one for entrypoint copy, the other for writing scripts) and two containers, one for each step.
OpenShift Container Platform uses the limit range set up for the project to compute required resource requests and limits. For this example, set the following limit range in the project:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr
spec:
  limits:
    - max:
        memory: 1Gi
      min:
        memory: 500Mi
      type: Container
In this scenario, each init container uses a request memory of 1Gi (the max limit of the limit range), and each container uses a request memory of 500Mi. Thus, the total memory request for the pod is 2Gi.
If the same limit range is used with a task of ten steps, the final memory request is 5Gi, which is higher than what each step actually needs, that is 500Mi (since each step runs after the other).
Thus, to reduce resource consumption, you can:
- Reduce the number of steps in a given task by grouping different steps into one bigger step, using the script feature, and the same image. This reduces the minimum requested resource (see the sketch after this list).
- Distribute steps that are relatively independent of each other and can run on their own to multiple tasks instead of a single task. This lowers the number of steps in each task, making the request for each task smaller, and the scheduler can then run them when the resources are available.
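The following is a hedged sketch of the first suggestion: two small shell steps merged into a single step by using the script field, so the pod requests resources for only one container for that work. The task name, image, and commands are placeholders:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: combined-steps
spec:
  steps:
    - name: lint-and-test                                 # one step instead of separate lint and test steps
      image: registry.access.redhat.com/ubi8/ubi-minimal  # any image that contains both tools
      script: |
        echo "running lint"
        echo "running unit tests"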
4.11.3. Additional resources
4.12. Setting compute resource quota for OpenShift Pipelines
A ResourceQuota
object in Red Hat OpenShift Pipelines controls the total resource consumption per namespace. You can use it to limit the quantity of objects created in a namespace, based on the type of the object. In addition, you can specify a compute resource quota to restrict the total amount of compute resources consumed in a namespace.
However, you might want to limit the amount of compute resources consumed by pods resulting from a pipeline run, rather than setting quotas for the entire namespace. Currently, Red Hat OpenShift Pipelines does not enable you to directly specify the compute resource quota for a pipeline.
4.12.1. Alternative approaches for limiting compute resource consumption in OpenShift Pipelines
To attain some degree of control over the usage of compute resources by a pipeline, consider the following alternative approaches:
Set resource requests and limits for each step in a task.
Example: Set resource requests and limits for each step in a task.
...
spec:
  steps:
    - name: step-with-limits
      resources:
        requests:
          memory: 1Gi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 800m
...
-
Set resource limits by specifying values for the
LimitRange
object. For more information onLimitRange
, refer to Restrict resource consumption with limit ranges. - Reduce pipeline resource consumption.
- Set and manage resource quotas per project.
- Ideally, the compute resource quota for a pipeline should be the same as the total amount of compute resources consumed by the concurrently running pods in a pipeline run. However, the pods running the tasks consume compute resources based on the use case. For example, a Maven build task might require different compute resources for different applications that it builds. As a result, you cannot predetermine the compute resource quotas for tasks in a generic pipeline. For greater predictability and control over usage of compute resources, use customized pipelines for different applications.
When using Red Hat OpenShift Pipelines in a namespace configured with a ResourceQuota
object, the pods resulting from task runs and pipeline runs might fail with an error, such as: failed quota: <quota name> must specify cpu, memory
.
To avoid this error, do any one of the following:
- (Recommended) Specify a limit range for the namespace.
- Explicitly define requests and limits for all containers.
For more information, refer to the issue and the resolution.
If your use case is not addressed by these approaches, you can implement a workaround by using a resource quota for a priority class.
4.12.2. Specifying pipelines resource quota using priority class
A PriorityClass
object maps priority class names to the integer values that indicate their relative priorities. Higher values increase the priority of a class. After you create a priority class, you can create pods that specify the priority class name in their specifications. In addition, you can control a pod’s consumption of system resources based on the pod’s priority.
Specifying resource quota for a pipeline is similar to setting a resource quota for the subset of pods created by a pipeline run. The following steps provide an example of the workaround by specifying resource quota based on priority class.
Procedure
Create a priority class for a pipeline.
Example: Priority class for a pipeline
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: pipeline1-pc
value: 1000000
description: "Priority class for pipeline1"
Create a resource quota for a pipeline.
Example: Resource quota for a pipeline
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pipeline1-rq
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["pipeline1-pc"]
Verify the resource quota usage for the pipeline.
Example: Verify resource quota usage for the pipeline
$ oc describe quota
Sample output
Name:       pipeline1-rq
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1k
memory      0     200Gi
pods        0     10
Because pods are not running, the quota is unused.
Create the pipelines and tasks.
Example: YAML for the pipeline
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: maven-build
spec:
  workspaces:
    - name: local-maven-repo
  resources:
    - name: app-git
      type: git
  tasks:
    - name: build
      taskRef:
        name: mvn
      resources:
        inputs:
          - name: source
            resource: app-git
      params:
        - name: GOALS
          value: ["package"]
      workspaces:
        - name: maven-repo
          workspace: local-maven-repo
    - name: int-test
      taskRef:
        name: mvn
      runAfter: ["build"]
      resources:
        inputs:
          - name: source
            resource: app-git
      params:
        - name: GOALS
          value: ["verify"]
      workspaces:
        - name: maven-repo
          workspace: local-maven-repo
    - name: gen-report
      taskRef:
        name: mvn
      runAfter: ["build"]
      resources:
        inputs:
          - name: source
            resource: app-git
      params:
        - name: GOALS
          value: ["site"]
      workspaces:
        - name: maven-repo
          workspace: local-maven-repo
Example: YAML for a task in the pipeline
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: mvn
spec:
  workspaces:
    - name: maven-repo
  inputs:
    params:
      - name: GOALS
        description: The Maven goals to run
        type: array
        default: ["package"]
    resources:
      - name: source
        type: git
  steps:
    - name: mvn
      image: gcr.io/cloud-builders/mvn
      workingDir: /workspace/source
      command: ["/usr/bin/mvn"]
      args:
        - -Dmaven.repo.local=$(workspaces.maven-repo.path)
        - "$(inputs.params.GOALS)"
  priorityClassName: pipeline1-pc
Note: Ensure that all tasks in the pipeline belong to the same priority class.
Create and start the pipeline run.
Example: YAML for a pipeline run
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: petclinic-run-
spec:
  pipelineRef:
    name: maven-build
  resources:
    - name: app-git
      resourceSpec:
        type: git
        params:
          - name: url
            value: https://github.com/spring-projects/spring-petclinic
After the pods are created, verify the resource quota usage for the pipeline run.
Example: Verify resource quota usage for the pipeline
$ oc describe quota
Sample output
Name:       pipeline1-rq
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         500m  1k
memory      10Gi  200Gi
pods        1     10
The output indicates that you can manage the combined resource quota for all concurrent running pods belonging to a priority class, by specifying the resource quota per priority class.
4.12.3. Additional resources
4.13. Using pods in a privileged security context
The default configuration of OpenShift Pipelines 1.3.x and later versions does not allow you to run pods with privileged security context, if the pods result from pipeline run or task run. For such pods, the default service account is pipeline
, and the security context constraint (SCC) associated with the pipeline
service account is pipelines-scc
. The pipelines-scc
SCC is similar to the anyuid
SCC, but with minor differences as defined in the YAML file for the SCC of pipelines:
Example pipelines-scc.yaml
snippet
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
...
allowedCapabilities:
  - SETFCAP
...
fsGroup:
  type: MustRunAs
...
In addition, the Buildah
cluster task, shipped as part of the OpenShift Pipelines, uses vfs
as the default storage driver.
4.13.1. Running pipeline run and task run pods with privileged security context
Procedure
To run a pod (resulting from pipeline run or task run) with the privileged
security context, do the following modifications:
Configure the associated user account or service account to have an explicit SCC. You can perform the configuration using any of the following methods:
Run the following command:
$ oc adm policy add-scc-to-user <scc-name> -z <service-account-name>
Alternatively, modify the YAML files for
RoleBinding
, andRole
orClusterRole
:Example
RoleBinding
objectapiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: service-account-name 1 namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-clusterrole 2 subjects: - kind: ServiceAccount name: pipeline namespace: default
Example
ClusterRole
objectapiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-clusterrole 1 rules: - apiGroups: - security.openshift.io resourceNames: - nonroot resources: - securitycontextconstraints verbs: - use
- 1
- Substitute with an appropriate cluster role based on the role binding you use.
NoteAs a best practice, create a copy of the default YAML files and make changes in the duplicate file.
-
If you do not use the
vfs
storage driver, configure the service account associated with the task run or the pipeline run to have a privileged SCC, and set the security context asprivileged: true
.
4.13.2. Running pipeline run and task run by using a custom SCC and a custom service account
When using the pipelines-scc
security context constraint (SCC) associated with the default pipeline
service account, the pipeline run and task run pods may face timeouts. This happens because in the default pipelines-scc
SCC, the fsGroup.type
parameter is set to MustRunAs
.
For more information about pod timeouts, see BZ#1995779.
To avoid pod timeouts, you can create a custom SCC with the fsGroup.type
parameter set to RunAsAny
, and associate it with a custom service account.
As a best practice, use a custom SCC and a custom service account for pipeline runs and task runs. This approach allows greater flexibility and does not break the runs when the defaults are modified during an upgrade.
Procedure
Define a custom SCC with the
fsGroup.type
parameter set toRunAsAny
:Example: Custom SCC
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: my-scc is a close replica of anyuid scc. pipelines-scc has fsGroup - RunAsAny.
  name: my-scc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
  - system:cluster-admins
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
Create the custom SCC:
Example: Create the
my-scc
SCC$ oc create -f my-scc.yaml
Create a custom service account:
Example: Create a
fsgroup-runasany
service account$ oc create serviceaccount fsgroup-runasany
Associate the custom SCC with the custom service account:
Example: Associate the
my-scc
SCC with thefsgroup-runasany
service account$ oc adm policy add-scc-to-user my-scc -z fsgroup-runasany
If you want to use the custom service account for privileged tasks, you can associate the
privileged
SCC with the custom service account by running the following command:Example: Associate the
privileged
SCC with thefsgroup-runasany
service account$ oc adm policy add-scc-to-user privileged -z fsgroup-runasany
Use the custom service account in the pipeline run and task run:
Example: Pipeline run YAML with
fsgroup-runasany
custom service accountapiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: <pipeline-run-name> spec: pipelineRef: name: <pipeline-cluster-task-name> serviceAccountName: 'fsgroup-runasany'
Example: Task run YAML with
fsgroup-runasany
custom service accountapiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: <task-run-name> spec: taskRef: name: <cluster-task-name> serviceAccountName: 'fsgroup-runasany'
4.13.3. Additional resources
- For information on managing SCCs, refer to Managing security context constraints.
4.14. Securing webhooks with event listeners
As an administrator, you can secure webhooks with event listeners. After creating a namespace, you enable HTTPS for the Eventlistener
resource by adding the operator.tekton.dev/enable-annotation=enabled
label to the namespace. Then, you create a Trigger
resource and a secured route using the re-encrypted TLS termination.
Triggers in Red Hat OpenShift Pipelines support insecure HTTP and secure HTTPS connections to the Eventlistener
resource. HTTPS secures connections within and outside the cluster.
Red Hat OpenShift Pipelines runs a tekton-operator-proxy-webhook
pod that watches for the labels in the namespace. When you add the label to the namespace, the webhook sets the service.beta.openshift.io/serving-cert-secret-name=<secret_name>
annotation on the EventListener
object. This, in turn, creates secrets and the required certificates.
service.beta.openshift.io/serving-cert-secret-name=<secret_name>
In addition, you can mount the created secret into the Eventlistener
pod to secure the request.
4.14.1. Providing secure connection with OpenShift routes
To create a route with the re-encrypted TLS termination, run:
$ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
Alternatively, you can create a re-encrypted TLS termination YAML file to create a secure route.
Example re-encrypt TLS termination YAML to create a secure route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-passthrough-secured 1
spec:
  host: <hostname>
  to:
    kind: Service
    name: frontend 2
  tls:
    termination: reencrypt 3
    key: [as in edge termination]
    certificate: [as in edge termination]
    caCertificate: [as in edge termination]
    destinationCACertificate: |- 4
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
- 1 2
- The name of the object, which is limited to only 63 characters.
- 3
- The termination field is set to
reencrypt
. This is the only required TLS field. - 4
- This is required for re-encryption. The
destinationCACertificate
field specifies a CA certificate to validate the endpoint certificate, thus securing the connection from the router to the destination pods. You can omit this field in either of the following scenarios:- The service uses a service signing certificate.
- The administrator specifies a default CA certificate for the router, and the service has a certificate signed by that CA.
You can run the oc create route reencrypt --help
command to display more options.
4.14.2. Creating a sample EventListener resource using a secure HTTPS connection
This section uses the pipelines-tutorial example to demonstrate creation of a sample EventListener resource using a secure HTTPS connection.
Procedure
Create the
TriggerBinding
resource from the YAML file available in the pipelines-tutorial repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml
Create the
TriggerTemplate
resource from the YAML file available in the pipelines-tutorial repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml
Create the
Trigger
resource directly from the pipelines-tutorial repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml
Create an
EventListener
resource using a secure HTTPS connection:Add a label to enable the secure HTTPS connection to the
Eventlistener
resource:$ oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled
Create the
EventListener
resource from the YAML file available in the pipelines-tutorial repository:$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml
Create a route with the re-encrypted TLS termination:
$ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
4.15. Authenticating pipelines using git secret
A Git secret consists of credentials to securely interact with a Git repository, and is often used to automate authentication. In Red Hat OpenShift Pipelines, you can use Git secrets to authenticate pipeline runs and task runs that interact with a Git repository during execution.
A pipeline run or a task run gains access to the secrets through the associated service account. Pipelines support the use of Git secrets as annotations (key-value pairs) for basic authentication and SSH-based authentication.
4.15.1. Credential selection
A pipeline run or task run might require multiple authentications to access different Git repositories. Annotate each secret with the domains where Pipelines can use its credentials.
A credential annotation key for Git secrets must begin with tekton.dev/git-
, and its value is the URL of the host for which you want Pipelines to use that credential.
In the following example, Pipelines uses a basic-auth
secret, which relies on a username and password, to access repositories at github.com
and gitlab.com
.
Example: Multiple credentials for basic authentication
apiVersion: v1
kind: Secret
metadata:
  annotations:
    tekton.dev/git-0: github.com
    tekton.dev/git-1: gitlab.com
type: kubernetes.io/basic-auth
stringData:
  username: <username> 1
  password: <password> 2
You can also use an ssh-auth
secret (private key) to access a Git repository.
Example: Private key for SSH based authentication
apiVersion: v1
kind: Secret
metadata:
annotations:
tekton.dev/git-0: https://github.com
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: 1
- 1
- The content of the SSH private key file.
4.15.2. Configuring basic authentication for Git
For a pipeline to retrieve resources from password-protected repositories, you must configure the basic authentication for that pipeline.
To configure basic authentication for a pipeline, update the secret.yaml
, serviceaccount.yaml
, and run.yaml
files with the credentials from the Git secret for the specified repository. When you complete this process, Pipelines can use that information to retrieve the specified pipeline resources.
For GitHub, authentication using a plain password is deprecated. Instead, use a personal access token.
Procedure
In the
secret.yaml
file, specify the username and password or GitHub personal access token to access the target Git repository.apiVersion: v1 kind: Secret metadata: name: basic-user-pass 1 annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/basic-auth stringData: username: <username> 2 password: <password> 3
In the
serviceaccount.yaml
file, associate the secret with the appropriate service account.apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: basic-user-pass 2
In the
run.yaml
file, associate the service account with a task run or a pipeline run.Associate the service account with a task run:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-push-task-run-2 1
spec:
  serviceAccountName: build-bot 2
  taskRef:
    name: build-push 3
Associate the service account with a
PipelineRun
resource:apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3
Apply the changes.
$ oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml
4.15.3. Configuring SSH authentication for Git
For a pipeline to retrieve resources from repositories configured with SSH keys, you must configure the SSH-based authentication for that pipeline.
To configure SSH-based authentication for a pipeline, update the secret.yaml
, serviceaccount.yaml
, and run.yaml
files with the credentials from the SSH private key for the specified repository. When you complete this process, Pipelines can use that information to retrieve the specified pipeline resources.
Consider using SSH-based authentication rather than basic authentication.
Procedure
-
Generate an SSH private key, or copy an existing private key, which is usually available in the
~/.ssh/id_rsa
file. In the
secret.yaml
file, set the value ofssh-privatekey
to the content of the SSH private key file, and set the value ofknown_hosts
to the content of the known hosts file.apiVersion: v1 kind: Secret metadata: name: ssh-key 1 annotations: tekton.dev/git-0: github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 2 known_hosts: 3
Caution: If you omit the known_hosts field, Pipelines accepts the public key of any server.
-
Optional: To specify a custom SSH port, add
:<port number>
to the end of theannotation
value. For example,tekton.dev/git-0: github.com:2222
. In the
serviceaccount.yaml
file, associate thessh-key
secret with thebuild-bot
service account.apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: ssh-key 2
In the
run.yaml
file, associate the service account with a task run or a pipeline run.Associate the service account with a task run:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-push-task-run-2 1
spec:
  serviceAccountName: build-bot 2
  taskRef:
    name: build-push 3
Associate the service account with a pipeline run:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-pipeline 1
  namespace: default
spec:
  serviceAccountName: build-bot 2
  pipelineRef:
    name: demo-pipeline 3
Apply the changes.
$ oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml
4.15.4. Using SSH authentication in git type tasks
When invoking Git commands, you can use SSH authentication directly in the steps of a task. SSH authentication ignores the $HOME
variable and only uses the user’s home directory specified in the /etc/passwd
file. So each step in a task must symlink the /tekton/home/.ssh
directory to the home directory of the associated user.
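As a minimal, hedged sketch of that symlink requirement, a step might start by linking the /tekton/home/.ssh directory into the home directory that /etc/passwd defines for the step's user (assumed here to be /root); the image and repository are placeholders:

steps:
  - name: git-clone-over-ssh
    image: alpine/git   # illustrative image
    script: |
      # Link the SSH credentials that Pipelines places under /tekton/home/.ssh
      # into the home directory listed in /etc/passwd for this step's user.
      ln -s /tekton/home/.ssh /root/.ssh
      git clone git@github.com:<org>/<repo>.git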
However, explicit symlinks are not necessary when you use a pipeline resource of the git
type, or the git-clone
task available in the Tekton catalog.
As an example of using SSH authentication in git
type tasks, refer to authenticating-git-commands.yaml.
4.15.5. Using secrets as a non-root user
You might need to use secrets as a non-root user in certain scenarios, such as:
- The users and groups that the containers use to execute runs are randomized by the platform.
- The steps in a task define a non-root security context.
- A task specifies a global non-root security context, which applies to all steps in a task.
In such scenarios, consider the following aspects of executing task runs and pipeline runs as a non-root user:
-
SSH authentication for Git requires the user to have a valid home directory configured in the
/etc/passwd
file. Specifying a UID that has no valid home directory results in authentication failure. -
SSH authentication ignores the
$HOME
environment variable. So you must symlink the appropriate secret files from the
directory defined by Pipelines (/tekton/home
), to the non-root user’s valid home directory.
In addition, to configure SSH authentication in a non-root security context, refer to the example for authenticating git commands.
4.15.6. Limiting secret access to specific steps
By default, the secrets for Pipelines are stored in the $HOME/tekton/home
directory, and are available for all the steps in a task.
To limit a secret to specific steps, use the secret definition to specify a volume, and mount the volume in specific steps.
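As a hedged sketch of this approach, the following task surfaces a secret as a volume and mounts it only into the step that needs it; the task name, secret name, image, and paths are assumptions for illustration:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: limited-secret-access
spec:
  volumes:
    - name: deploy-credentials
      secret:
        secretName: my-deploy-secret      # illustrative secret name
  steps:
    - name: deploy                        # only this step mounts the secret
      image: registry.access.redhat.com/ubi8/ubi-minimal
      volumeMounts:
        - name: deploy-credentials
          mountPath: /var/secret
      script: |
        ls /var/secret                    # the credential files are visible here
    - name: report                        # this step does not mount the volume and cannot see the secret
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "done"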
4.16. Using Tekton Chains for OpenShift Pipelines supply chain security
Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines.
By default, Tekton Chains observes all task run executions in your OpenShift Container Platform cluster. When the task runs complete, Tekton Chains takes a snapshot of the task runs. It then converts the snapshot to one or more standard payload formats, and finally signs and stores all artifacts.
To capture information about task runs, Tekton Chains uses the Result
and PipelineResource
objects. When the objects are unavailable, Tekton Chains uses the URLs and qualified digests of the OCI images.
The PipelineResource
object is deprecated and will be removed in a future release; for manual use, the Results
object is recommended.
4.16.1. Key features
-
You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as
cosign
. -
You can use attestation formats such as
in-toto
. - You can securely store signatures and signed artifacts using OCI repository as a storage backend.
4.16.2. Installing Tekton Chains using the Red Hat OpenShift Pipelines Operator
Cluster administrators can use the TektonChain
custom resource (CR) to install and manage Tekton Chains.
Tekton Chains is an optional component of Red Hat OpenShift Pipelines. Currently, you cannot install it using the TektonConfig
CR.
Prerequisites
-
Ensure that the Red Hat OpenShift Pipelines Operator is installed in the
openshift-pipelines
namespace on your cluster.
Procedure
Create the
TektonChain
CR for your OpenShift Container Platform cluster.apiVersion: operator.tekton.dev/v1alpha1 kind: TektonChain metadata: name: chain spec: targetNamespace: openshift-pipelines
Apply the
TektonChain
CR.$ oc apply -f TektonChain.yaml 1
- 1
- Substitute with the file name of the
TektonChain
CR.
Check the status of the installation.
$ oc get tektonchains.operator.tekton.dev
4.16.3. Configuring Tekton Chains
Tekton Chains uses a ConfigMap
object named chains-config
in the openshift-pipelines
namespace for configuration.
To configure Tekton Chains, use the following example:
Example: Configuring Tekton Chains
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.oci.storage": "", "artifacts.taskrun.format":"tekton", "artifacts.taskrun.storage": "tekton"}}' 1
- 1
- Use a combination of supported key-value pairs in the JSON payload.
4.16.3.1. Supported keys for Tekton Chains configuration
Cluster administrators can use various supported keys and values to configure specifications about task runs, OCI images, and storage.
4.16.3.1.1. Supported keys for task run
Supported keys | Description | Supported values | Default values |
---|---|---|---|
| The format to store task run payloads. |
|
|
|
The storage backend for task run signatures. You can specify multiple backends as a comma-separated list, such as |
|
|
| The signature backend to sign task run payloads. |
|
|
4.16.3.1.2. Supported keys for OCI
Supported keys | Description | Supported values | Default values |
---|---|---|---|
| The format to store OCI payloads. |
|
|
|
The storage backend for OCI signatures. You can specify multiple backends as a comma-separated list, such as |
|
|
| The signature backend to sign OCI payloads. |
|
|
4.16.3.1.3. Supported keys for storage
Supported keys | Description | Supported values | Default values |
---|---|---|---|
| The OCI repository to store OCI signatures. | Currently, Tekton Chains supports only the internal OpenShift OCI registry; other popular options, such as Quay, are not supported. |
4.16.4. Signing secrets in Tekton Chains
Cluster administrators can generate a key pair and use Tekton Chains to sign artifacts using a Kubernetes secret. For Tekton Chains to work, a private key and a password for encrypted keys must exist as part of the signing-secrets
Kubernetes secret, in the openshift-pipelines
namespace.
Currently, Tekton Chains supports the x509
and cosign
signature schemes.
Use only one of the supported signature schemes.
4.16.4.1. Signing using x509
To use the x509
signing scheme with Tekton Chains, store the x509.pem
private key of the ed25519
or ecdsa
type in the signing-secrets
Kubernetes secret. Ensure that the key is stored as an unencrypted PKCS8 PEM file (BEGIN PRIVATE KEY
).
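For example, assuming the key already exists in a local file named x509.pem (the file name and path are placeholders), one hedged way to create or update the signing-secrets secret is:

$ oc create secret generic signing-secrets -n openshift-pipelines \
    --from-file=x509.pem=./x509.pem \
    --dry-run=client -o yaml | oc apply -f -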
4.16.4.2. Signing using cosign
To use the cosign
signing scheme with Tekton Chains:
- Install cosign.
Generate the
cosign.key
andcosign.pub
key pairs.$ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Cosign prompts you for a password, and creates a Kubernetes secret.
-
Store the encrypted
cosign.key
private key and thecosign.password
decryption password in thesigning-secrets
Kubernetes secret. Ensure that the private key is stored as an encrypted PEM file of theENCRYPTED COSIGN PRIVATE KEY
type.
4.16.4.3. Troubleshooting signing
If the signing secrets are already populated, you might get the following error:
Error from server (AlreadyExists): secrets "signing-secrets" already exists
To resolve the error:
Delete the secrets:
$ oc delete secret signing-secrets -n openshift-pipelines
- Recreate the key pairs and store them in the secrets using your preferred signing scheme.
4.16.5. Authenticating to an OCI registry
Before pushing signatures to an OCI registry, cluster administrators must configure Tekton Chains to authenticate with the registry. The Tekton Chains controller uses the same service account under which the task runs execute. To set up a service account with the necessary credentials for pushing signatures to an OCI registry, perform the following steps:
Procedure
Set the namespace and name of the Kubernetes service account.
$ export NAMESPACE=<namespace> 1
$ export SERVICE_ACCOUNT_NAME=<service_account> 2
Create a Kubernetes secret.
$ oc create secret generic registry-credentials \
    --from-file=.dockerconfigjson \ 1
    --type=kubernetes.io/dockerconfigjson \
    -n $NAMESPACE
- 1
- Substitute with the path to your Docker config file. Default path is
~/.docker/config.json
.
Give the service account access to the secret.
$ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
    -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n $NAMESPACE
If you patch the default pipeline service account that Red Hat OpenShift Pipelines assigns to all task runs, the Red Hat OpenShift Pipelines Operator overrides your changes to that service account. As a best practice, you can perform the following steps:
$ oc create serviceaccount <service_account_name>
Associate the service account with the task runs by setting the value of the serviceAccountName
field in the task run template.

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot 1
  taskRef:
    name: build-push
...
- 1
- Substitute with the name of the newly created service account.
4.16.5.1. Creating and verifying task run signatures without any additional authentication
To verify signatures of task runs using Tekton Chains without any additional authentication, perform the following tasks:
- Create an encrypted x509 key pair and save it as a Kubernetes secret.
- Configure the Tekton Chains backend storage.
- Create a task run, sign it, and store the signature and the payload as annotations on the task run itself.
- Retrieve the signature and payload from the signed task run.
- Verify the signature of the task run.
Prerequisites
Ensure that the following are installed on the cluster:
- Red Hat OpenShift Pipelines Operator
- Tekton Chains
- Cosign
Procedure
Create an encrypted x509 key pair and save it as a Kubernetes secret:
$ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Provide a password when prompted. Cosign stores the resulting private key as part of the
signing-secrets
Kubernetes secret in theopenshift-pipelines
namespace.In the Tekton Chains configuration, disable the OCI storage, and set the task run storage and format to
tekton
.$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.oci.storage": "", "artifacts.taskrun.format":"tekton", "artifacts.taskrun.storage": "tekton"}}'
Restart the Tekton Chains controller to ensure that the modified configuration is applied.
$ oc delete po -n openshift-pipelines -l app=tekton-chains-controller
Create a task run.
$ oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1

taskrun.tekton.dev/build-push-run-output-image-qbjvh created
- 1
- Substitute with the URI or file path pointing to your task run.
Check the status of the steps, and wait till the process finishes.
$ tkn tr describe --last

[...truncated output...]
NAME                            STATUS
∙ create-dir-builtimage-9467f   Completed
∙ git-source-sourcerepo-p2sk8   Completed
∙ build-and-push                Completed
∙ echo                          Completed
∙ image-digest-exporter-xlkn7   Completed
Retrieve the signature and payload from the object stored as
base64
encoded annotations:$ export TASKRUN_UID=$(tkn tr describe --last -o jsonpath='{.metadata.uid}') $ tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-$TASKRUN_UID}" > signature $ tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/payload-taskrun-$TASKRUN_UID}" | base64 -d > payload
Verify the signature.
$ cosign verify-blob --key k8s://openshift-pipelines/signing-secrets --signature ./signature ./payload

Verified OK
4.16.6. Using Tekton Chains to sign and verify image and provenance
Cluster administrators can use Tekton Chains to sign and verify images and provenances, by performing the following tasks:
- Create an encrypted x509 key pair and save it as a Kubernetes secret.
- Set up authentication for the OCI registry to store images, image signatures, and signed image attestations.
- Configure Tekton Chains to generate and sign provenance.
- Create an image with Kaniko in a task run.
- Verify the signed image and the signed provenance.
Prerequisites
Ensure that the following are installed on the cluster:
Procedure
Create an encrypted x509 key pair and save it as a Kubernetes secret:
$ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Provide a password when prompted. Cosign stores the resulting private key as part of the
signing-secrets
Kubernetes secret in theopenshift-pipelines
namespace, and writes the public key to thecosign.pub
local file.Configure authentication for the image registry.
- To configure the Tekton Chains controller for pushing signature to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.
To configure authentication for a Kaniko task that builds and pushes image to the registry, create a Kubernetes secret of the docker
config.json
file containing the required credentials.$ oc create secret generic <docker_config_secret_name> \ 1 --from-file <path_to_config.json> 2
Configure Tekton Chains by setting the
artifacts.taskrun.format
,artifacts.taskrun.storage
, andtransparency.enabled
parameters in thechains-config
object:$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}' $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}' $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
Start the Kaniko task.
Apply the Kaniko task to the cluster.
$ oc apply -f examples/kaniko/kaniko.yaml 1
- 1
- Substitute with the URI or file path to your Kaniko task.
Set the appropriate environment variables.
$ export REGISTRY=<url_of_registry> 1
$ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2
Start the Kaniko task.
$ tkn task start --param IMAGE=$REGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=$DOCKERCONFIG_SECRET_NAME kaniko-chains
Observe the logs of this task until all steps are complete. On successful authentication, the final image will be pushed to
$REGISTRY/kaniko-chains
.
Wait for a minute to allow Tekton Chains to generate the provenance and sign it, and then check the availability of the
chains.tekton.dev/signed=true
annotation on the task run.$ oc get tr <task_run_name> \ 1 -o json | jq -r .metadata.annotations { "chains.tekton.dev/signed": "true", ... }
- 1
- Substitute with the name of the task run.
Verify the image and the attestation.
$ cosign verify --key cosign.pub $REGISTRY/kaniko-chains
$ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
Find the provenance for the image in Rekor.
- Get the digest of the $REGISTRY/kaniko-chains image. You can find it by inspecting the task run, or pull the image to extract the digest.
Search Rekor to find all entries that match the sha256 digest of the image.

$ rekor-cli search --sha <image_digest> 1

<uuid_1> 2
<uuid_2> 3
...
The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.
Check the attestation.
$ rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq
4.16.7. Additional resources
4.17. Viewing pipeline logs using the OpenShift Logging Operator
The logs generated by pipeline runs, task runs, and event listeners are stored in their respective pods. It is useful to review and analyze logs for troubleshooting and audits.
However, retaining the pods indefinitely leads to unnecessary resource consumption and cluttered namespaces.
To eliminate any dependency on the pods for viewing pipeline logs, you can use the OpenShift Elasticsearch Operator and the OpenShift Logging Operator. These Operators help you to view pipeline logs by using the Elasticsearch Kibana stack, even after you have deleted the pods that contained the logs.
4.17.1. Prerequisites
Before trying to view pipeline logs in a Kibana dashboard, ensure the following:
- The steps are performed by a cluster administrator.
- Logs for pipeline runs and task runs are available.
- The OpenShift Elasticsearch Operator and the OpenShift Logging Operator are installed.
4.17.2. Viewing pipeline logs in Kibana
To view pipeline logs in the Kibana web console:
Procedure
- Log in to OpenShift Container Platform web console as a cluster administrator.
-
In the top right of the menu bar, click the grid icon → Observability → Logging. The Kibana web console is displayed.
Create an index pattern:
- On the left navigation panel of the Kibana web console, click Management.
- Click Create index pattern.
-
Under Step 1 of 2: Define index pattern → Index pattern, enter a * pattern and click Next Step.
- Under Step 2 of 2: Configure settings → Time filter field name, select @timestamp from the drop-down menu, and click Create index pattern.
Add a filter:
- On the left navigation panel of the Kibana web console, click Discover.
Click Add a filter + → Edit Query DSL.
Note:
- For each of the example filters that follow, edit the query and click Save.
- The filters are applied one after another.
Filter the containers related to pipelines:
Example query to filter pipelines containers
{ "query": { "match": { "kubernetes.flat_labels": { "query": "app_kubernetes_io/managed-by=tekton-pipelines", "type": "phrase" } } } }
Filter all containers that are not
place-tools
container. As an illustration of using the graphical drop-down menus instead of editing the query DSL, consider the following approach:Figure 4.6. Example of filtering using the drop-down fields
Filter
pipelinerun
in labels for highlighting:Example query to filter
pipelinerun
in labels for highlighting{ "query": { "match": { "kubernetes.flat_labels": { "query": "tekton_dev/pipelineRun=", "type": "phrase" } } } }
Filter
pipeline
in labels for highlighting:Example query to filter
pipeline
in labels for highlighting{ "query": { "match": { "kubernetes.flat_labels": { "query": "tekton_dev/pipeline=", "type": "phrase" } } } }
From the Available fields list, select the following fields:
-
kubernetes.flat_labels
message
Ensure that the selected fields are displayed under the Selected fields list.
-
The logs are displayed under the message field.
Figure 4.7. Filtered messages
4.17.3. Additional resources
4.18. Building of container images using Buildah as a non-root user
Running Pipelines as the root user on a container can expose the container processes and the host to other potentially malicious resources. You can reduce this type of exposure by running the workload as a specific non-root user in the container. To run builds of container images using Buildah as a non-root user, you can perform the following steps:
- Define custom service account (SA) and security context constraint (SCC).
-
Configure Buildah to use the
build
user with id1000
. - Start a task run with a custom config map, or integrate it with a pipeline run.
4.18.1. Configuring custom service account and security context constraint
The default pipeline
SA allows using a user id outside of the namespace range. To reduce dependency on the default SA, you can define a custom SA and SCC with necessary cluster role and role bindings for the build
user with user id 1000
.
At this time, enabling the allowPrivilegeEscalation
setting is required for Buildah to run successfully in the container. With this setting, Buildah can leverage SETUID
and SETGID
capabilities when running as a non-root user.
Procedure
Create a custom SA and SCC with necessary cluster role and role bindings.
Example: Custom SA and SCC for user id
1000
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipelines-sa-userid-1000 1
---
kind: SecurityContextConstraints
metadata:
  annotations:
  name: pipelines-scc-userid-1000 2
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true 3
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups:
  - system:cluster-admins
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - MKNOD
  - KILL
runAsUser: 4
  type: MustRunAs
  uid: 1000
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-scc-userid-1000-clusterrole 5
rules:
  - apiGroups:
      - security.openshift.io
    resourceNames:
      - pipelines-scc-userid-1000
    resources:
      - securitycontextconstraints
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipelines-scc-userid-1000-rolebinding 6
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-scc-userid-1000-clusterrole
subjects:
  - kind: ServiceAccount
    name: pipelines-sa-userid-1000
- 1
- Define a custom SA.
- 2
- Define a custom SCC based on restricted privileges, with a modified runAsUser field.
- At this time, enabling the
allowPrivilegeEscalation
setting is required for Buildah to run successfully in the container. With this setting, Buildah can leverageSETUID
andSETGID
capabilities when running as a non-root user. - 4
- Restrict any pod that is attached to the custom SCC through the custom SA to run as user id 1000.
- Define a cluster role that uses the custom SCC.
- 6
- Bind the cluster role that uses the custom SCC to the custom SA.
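After creating the manifest, apply it and confirm that the resources exist. The following is a minimal sketch; the file name pipelines-sa-scc.yaml is a placeholder for wherever you saved the example above:
$ oc apply -f pipelines-sa-scc.yaml
$ oc get sa pipelines-sa-userid-1000
$ oc get scc pipelines-scc-userid-1000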
4.18.2. Configuring Buildah to use build user
You can define a Buildah task to use the build user with user id 1000.
Procedure
Create a copy of the buildah cluster task as an ordinary task.
$ oc get clustertask buildah -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == "name" )))' | yq '.kind="Task"' | yq '.metadata.name="buildah-as-user"' | oc create -f -
Edit the copied buildah task.
$ oc edit task buildah-as-user
Example: Modified Buildah task with build user
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: buildah-as-user
spec:
  description: >-
    Buildah task builds source into a container image and then pushes it to a
    container registry.
    Buildah Task builds source into a container image using Project Atomic's
    Buildah build tool. It uses Buildah's support for building from Dockerfiles,
    using its buildah bud command. This command executes the directives in the
    Dockerfile to assemble a container image, then pushes that image to a
    container registry.
  params:
  - name: IMAGE
    description: Reference of the image buildah will produce.
  - name: BUILDER_IMAGE
    description: The location of the buildah builder image.
    default: registry.redhat.io/rhel8/buildah@sha256:99cae35f40c7ec050fed3765b2b27e0b8bbea2aa2da7c16408e2ca13c60ff8ee
  - name: STORAGE_DRIVER
    description: Set buildah storage driver
    default: vfs
  - name: DOCKERFILE
    description: Path to the Dockerfile to build.
    default: ./Dockerfile
  - name: CONTEXT
    description: Path to the directory to use as context.
    default: .
  - name: TLSVERIFY
    description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
    default: "true"
  - name: FORMAT
    description: The format of the built container, oci or docker
    default: "oci"
  - name: BUILD_EXTRA_ARGS
    description: Extra parameters passed for the build command when building images.
    default: ""
  - description: Extra parameters passed for the push command when pushing images.
    name: PUSH_EXTRA_ARGS
    type: string
    default: ""
  - description: Skip pushing the built image
    name: SKIP_PUSH
    type: string
    default: "false"
  results:
  - description: Digest of the image just built.
    name: IMAGE_DIGEST
    type: string
  workspaces:
  - name: source
  steps:
  - name: build
    securityContext:
      runAsUser: 1000 1
    image: $(params.BUILDER_IMAGE)
    workingDir: $(workspaces.source.path)
    script: |
      echo "Running as USER ID `id`" 2
      buildah --storage-driver=$(params.STORAGE_DRIVER) bud \
        $(params.BUILD_EXTRA_ARGS) --format=$(params.FORMAT) \
        --tls-verify=$(params.TLSVERIFY) --no-cache \
        -f $(params.DOCKERFILE) -t $(params.IMAGE) $(params.CONTEXT)
      [[ "$(params.SKIP_PUSH)" == "true" ]] && echo "Push skipped" && exit 0
      buildah --storage-driver=$(params.STORAGE_DRIVER) push \
        $(params.PUSH_EXTRA_ARGS) --tls-verify=$(params.TLSVERIFY) \
        --digestfile $(workspaces.source.path)/image-digest $(params.IMAGE) \
        docker://$(params.IMAGE)
      cat $(workspaces.source.path)/image-digest | tee /tekton/results/IMAGE_DIGEST
    volumeMounts:
    - name: varlibcontainers
      mountPath: /home/build/.local/share/containers 3
  volumes:
  - name: varlibcontainers
    emptyDir: {}
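To confirm that the edited task carries the non-root security context, an optional quick check, sketched here and assuming the task was created in the current namespace, is:
$ oc get task buildah-as-user -o yaml | grep -A1 securityContext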
4.18.3. Starting a task run with custom config map, or a pipeline run
After defining the custom Buildah task, you can create a TaskRun object that builds an image as the build user with user id 1000. In addition, you can integrate the TaskRun object as part of a PipelineRun object.
Procedure
Create a TaskRun object with a custom ConfigMap object that contains the Dockerfile.
Example: A task run that runs Buildah as user id 1000
apiVersion: v1
data:
  Dockerfile: |
    ARG BASE_IMG=registry.access.redhat.com/ubi8/ubi
    FROM $BASE_IMG AS buildah-runner
    RUN dnf -y update && \
        dnf -y install git && \
        dnf clean all
    CMD git
kind: ConfigMap
metadata:
  name: dockerfile 1
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: buildah-as-user-1000
spec:
  serviceAccountName: pipelines-sa-userid-1000 2
  params:
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/test/buildahuser
  taskRef:
    kind: Task
    name: buildah-as-user
  workspaces:
  - configMap:
      name: dockerfile 3
    name: source
(Optional) Create a pipeline and a corresponding pipeline run.
Example: A pipeline and corresponding pipeline run
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-buildah-as-user-1000
spec:
  params:
  - name: IMAGE
  - name: URL
  workspaces:
  - name: shared-workspace
  - name: sslcertdir
    optional: true
  tasks:
  - name: fetch-repository 1
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.URL)
    - name: subdirectory
      value: ""
    - name: deleteExisting
      value: "true"
  - name: buildah
    taskRef:
      name: buildah-as-user 2
    runAfter:
    - fetch-repository
    workspaces:
    - name: source
      workspace: shared-workspace
    - name: sslcertdir
      workspace: sslcertdir
    params:
    - name: IMAGE
      value: $(params.IMAGE)
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-buildah-as-user-1000
spec:
  taskRunSpecs:
  - pipelineTaskName: buildah
    taskServiceAccountName: pipelines-sa-userid-1000 3
  params:
  - name: URL
    value: https://github.com/openshift/pipelines-vote-api
  - name: IMAGE
    value: image-registry.openshift-image-registry.svc:5000/test/buildahuser
  pipelineRef:
    name: pipeline-buildah-as-user-1000
  workspaces:
  - name: shared-workspace 4
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
- 1
- Use the
git-clone
cluster task to fetch the source containing a Dockerfile and build it using the modified Buildah task. - 2
- Refer to the modified Buildah task.
- 3
- Use the service account that you created for the Buildah task.
- 4
- Share data between the
git-clone
task and the modified Buildah task using a persistent volume claim (PVC) created automatically by the controller.
- Start the task run or the pipeline run.
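For example, a minimal sketch for starting and watching the runs, assuming the example resources above are saved in files named taskrun-buildah-as-user.yaml and pipelinerun-buildah-as-user.yaml (placeholder file names) and that the tkn CLI is installed:
$ oc apply -f taskrun-buildah-as-user.yaml
$ tkn taskrun logs buildah-as-user-1000 -f
$ oc apply -f pipelinerun-buildah-as-user.yaml
$ tkn pipelinerun logs pipelinerun-buildah-as-user-1000 -f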
4.18.4. Limitations of unprivileged builds
The process for unprivileged builds works with most Dockerfile objects. However, there are some known limitations that might cause a build to fail:
- Using the --mount=type=cache option might fail due to missing permissions. For more information, see this article.
- Using the --mount=type=secret option fails because mounting resources requires additional capabilities that are not provided by the custom SCC.
Additional resources