Chapter 1. Red Hat OpenShift Pipelines release notes
For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.
Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
| TP | Technology Preview |
| GA | General Availability |
| Operator | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.16 | 0.62.x | 0.29.x | 0.38.x | 0.22.x (GA) | 1.18.x (TP) | 0.28.x (GA) | 0.12.x (TP) | 0.3.x (TP) | 4.15, 4.16, 4.17, 4.18 | GA |
| 1.15 | 0.59.x | 0.27.x | 0.37.x | 0.20.x (GA) | 1.17.x (TP) | 0.27.x (GA) | 0.10.x (TP) | 0.2.x (TP) | 4.14, 4.15, 4.16 | GA |
| 1.14 | 0.56.x | 0.26.x | 0.35.x | 0.20.x (GA) | 1.16.x (TP) | 0.24.x (GA) | 0.9.x (TP) | NA | 4.12, 4.13, 4.14, 4.15, 4.16 | GA |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
1.2. Release notes for Red Hat OpenShift Pipelines 1.16
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16 is available on OpenShift Container Platform 4.15 and later versions.
1.2.1. New features
In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.16:
1.2.1.1. Pipelines
With this update, you can configure the resync period for the pipelines controller. For every resync period, the controller reconciles all pipeline runs and task runs, regardless of events. The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
Example of configuring a resync period of 24 hours
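A minimal sketch of this configuration, assuming the resync period is passed to the pipelines controller as a command-line argument through the deployment override in the `TektonConfig` CR; the exact flag name and override mechanism may differ in your release:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      deployments:
        tekton-pipelines-controller:
          spec:
            template:
              spec:
                containers:
                  # Assumption: the controller accepts a resync-period
                  # argument; verify the flag for your installed version
                  - name: tekton-pipelines-controller
                    args: ["-resync-period", "24h"]
```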
- With this update, when defining a pipeline, you can set the `onError` parameter for a task to `continue`. With this setting, if the task fails when the pipeline is executed, the pipeline logs the error and continues to the next task. By default, a pipeline fails if a task in it fails.

Example of setting the `onError` parameter. After the `task-that-fails` task fails, the `next-task` task executes
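The following sketch shows a pipeline with this setting; the task and step definitions are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: demo-on-error
spec:
  tasks:
    - name: task-that-fails
      # Log the error and continue to the next task instead of
      # failing the whole pipeline
      onError: continue
      taskSpec:
        steps:
          - name: exit-with-error
            image: alpine
            script: exit 1
    - name: next-task
      taskSpec:
        steps:
          - name: echo
            image: alpine
            script: echo "runs even though task-that-fails failed"
```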
- With this update, if a task fails, a `finally` task can access the `reason` parameter in addition to the `status` parameter to distinguish whether the failure was allowed. You can access the `reason` parameter through `$(tasks.<task_name>.reason)`. If the failure is allowed, the `reason` is set to `FailureIgnored`. If the failure is not allowed, the `reason` is set to `Failed`. You can use this additional information to identify that checks failed but that the failure can be ignored.
- With this update, larger results are supported through sidecar logs as an alternative to the default configuration, which limits results to 4 KB per step and 12 KB per task run. To enable larger results using sidecar logs, set the `pipeline.options.configMaps.feature-flags.data.results-from` spec to `sidecar-logs` in the `TektonConfig` CR.

Example of enabling larger results using sidecar logs
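Based on the spec path named above, the setting might look as follows:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      configMaps:
        feature-flags:
          data:
            # Use sidecar logs instead of the default size-limited results
            results-from: sidecar-logs
```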
- Before this update, parameter propagation was allowed in `PipelineRun` and `TaskRun` resources, but not in the `Pipeline` resource. With this update, you can propagate `params` in the `Pipeline` resource down to the inlined pipeline `tasks` and their inlined `steps`. Wherever a resource, such as a `PipelineTask` or `StepAction`, is referenced, you must pass the parameters explicitly.

Example of using `params` within a pipeline
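A minimal sketch of parameter propagation in a `Pipeline`; the names are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: propagated-params
spec:
  params:
    - name: message
      type: string
      default: "hello"
  tasks:
    - name: print-message
      # The inlined taskSpec can use $(params.message) without
      # redeclaring the parameter
      taskSpec:
        steps:
          - name: echo
            image: alpine
            script: |
              echo "$(params.message)"
```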
- With this update, you can use the task run or pipeline run definition to configure the compute resources for steps and sidecars in a task.
Example task for configuring resources
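A hypothetical task with one step and one sidecar, used as the target of the `TaskRun` example that follows; all names are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: compile-task
spec:
  steps:
    - name: compile
      image: alpine
      script: echo "compiling"
  sidecars:
    - name: logger
      image: alpine
      script: echo "logging"
```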
Example `TaskRun` definition that configures the resources
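A sketch using the `stepSpecs` and `sidecarSpecs` fields of the `v1` `TaskRun` API; the referenced task, step, and sidecar names are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: compile-taskrun
spec:
  taskRef:
    name: compile-task
  # Override compute resources for a specific step
  stepSpecs:
    - name: compile
      computeResources:
        requests:
          memory: 1Gi
          cpu: 500m
  # Override compute resources for a specific sidecar
  sidecarSpecs:
    - name: logger
      computeResources:
        requests:
          cpu: 100m
        limits:
          cpu: 500m
```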
1.2.1.2. Operator
- With this update, OpenShift Pipelines includes the `git-clone` `StepAction` definition for a step that clones a Git repository. Use the HTTP resolver to reference this definition. The URL for the definition is https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml. The `StepAction` definition is also installed in the `openshift-pipelines` namespace. However, the cluster resolver does not support `StepAction` definitions.

Example usage of the `git-clone` step action in a task
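A sketch of a task that references the step action through the HTTP resolver; the `URL` and `OUTPUT_PATH` parameter names are assumptions, so check the `StepAction` definition at the URL above for its actual interface:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: clone-repo
spec:
  workspaces:
    - name: output
  steps:
    - name: clone
      ref:
        resolver: http
        params:
          - name: url
            value: https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml
      params:
        # Parameter names are assumptions; verify them against the
        # referenced StepAction definition
        - name: URL
          value: https://github.com/example/repository.git
        - name: OUTPUT_PATH
          value: $(workspaces.output.path)
```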
- With this update, the `openshift-pipelines` namespace includes versioned tasks alongside standard tasks. For example, there is a `buildah` standard task and a `buildah-1-16-0` versioned task. While the standard task might be updated in subsequent releases, the versioned task remains exactly as it was in the specified version, except for the correction of errors.
- With this update, you can configure the `FailurePolicy`, `TimeoutSeconds`, and `SideEffects` options for webhooks for several components of OpenShift Pipelines by using the `TektonConfig` CR. The following example shows the configuration for the `pipeline` component. You can use a similar configuration for webhooks in the `triggers`, `pipelinesAsCode`, and `hub` components.

Example configuration of webhook options for the `pipeline` component
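A sketch of this configuration; the webhook name key is an assumption, so list the webhook configurations in your cluster to find the exact name:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      webhookConfigurationOptions:
        # Assumed webhook configuration name; verify in your cluster
        validation.webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 10
          sideEffects: None
```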
1.2.1.3. Triggers
- With this update, the `readOnlyRootFilesystem` parameter for the triggers controller, webhook, Core Interceptor, and event listener is set to `true` by default to improve security and avoid being flagged by the security scanner.
- With this update, you can configure OpenShift Pipelines triggers to run event listeners as a non-root user within the container. To set this option, set the parameters in the `TektonConfig` CR as shown in the following example:

Example of configuring trigger event listeners to run as non-root
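A sketch of this configuration, assuming the parameters are set through a triggers ConfigMap override in the `TektonConfig` CR; the ConfigMap name is an assumption, while the parameter keys are described below:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    options:
      configMaps:
        # Assumed ConfigMap name; verify it in your cluster
        config-defaults-triggers:
          data:
            default-run-as-user: "65532"
            default-run-as-group: "65532"
            default-fs-group: "65532"
```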
Optionally, you can set the values of the `default-run-as-user` and `default-run-as-group` parameters to configure the numeric user ID and group ID for running the event listeners in containers. OpenShift Pipelines sets these values in the pod security context and container security context for event listeners. If you use empty values, the default user ID and group ID of `65532` are used.

You can also set the `default-fs-group` parameter to define the `fsGroup` value for the pod security context, which is the group ID that the container processes use for the file system. If you use an empty value, the default group ID of `65532` is used.
- With this update, in triggers, the `EventListener` pod template includes `securityContext` settings. Under these settings, you can configure the `seccompProfile`, `runAsUser`, `runAsGroup`, and `fsGroup` parameters when the `el-security-context` flag is set to `true`.
1.2.1.4. Web console
- Before this release, when using the web console, you could not see the timestamp for the logs that OpenShift Pipelines created. With this update, the web console includes timestamps for all OpenShift Pipelines logs.
- With this update, the pipeline run and task run list pages in the web console have a filter for the data source, such as `k8s` and `Tekton Results API`.
- Before this update, when using the web console in the Developer perspective, you could not specify a timeout for pipeline runs. With this update, you can set a timeout while starting the pipeline run in the Developer perspective of the web console.
- Before this update, the Overview pipeline dashboard only appeared when Tekton Results was enabled. All the statistics came from only the Results API. With this update, the Overview pipeline dashboard is available regardless of whether Tekton Results is enabled or not. When Tekton Results is disabled, you can use the dashboard to see the statistics for objects in the cluster.
- With this update, the sample pipelines displayed in the web console use the `v1` version of the OpenShift Pipelines API.
1.2.1.5. CLI
- With this update, you can use the `tkn customrun delete <custom_run_names>` command to delete one or more custom runs.
- With this update, when you run a `tkn <resource> list` command with the `-o yaml` flag, the listed resources are separated with `---` separators to enhance the readability of the output.
1.2.1.6. Pipelines as Code
- With this update, if you create two `PipelineRun` definitions with the same name, Pipelines as Code logs an error and does not run either of these pipeline runs.
- With this update, the Pipelines as Code `pipelines_as_code_pipelinerun_count` metric allows filtering of the `PipelineRun` count by repository or namespace.
- With this update, the `readOnlyRootFilesystem` security context for the Pipelines as Code controller, webhook, and watcher is set to `true` by default to increase security and avoid being flagged by the security scanner.
1.2.1.7. Tekton Chains
- With this update, when using `docdb` storage in Tekton Chains, you can configure the `MONGO_SERVER_URL` value directly in the `TektonConfig` CR as the `storage.docdb.mongo-server-url` setting. Alternatively, you can provide this value using a secret and set the `storage.docdb.mongo-server-url-dir` setting to the directory where the `MONGO_SERVER_URL` file is located.

Example of creating a secret with the `MONGO_SERVER_URL` value

```shell
$ oc create secret generic mongo-url -n tekton-chains \
    --from-file=MONGO_SERVER_URL=/home/user/MONGO_SERVER_URL
```

Example of configuring the `MONGO_SERVER_URL` value using a secret
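A sketch of the corresponding `TektonConfig` setting; mounting the secret into the Tekton Chains controller at the chosen directory is assumed to be configured separately:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    # Directory where the MONGO_SERVER_URL file from the secret is
    # mounted in the Tekton Chains controller
    storage.docdb.mongo-server-url-dir: /tmp/mongo-url
```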
- With this update, when using KMS signing in Tekton Chains, instead of providing the KMS authentication token value directly in the configuration, you can provide the token value as a secret by using the `signers.kms.auth.token-path` setting.

To create a KMS token secret, enter the following command:
```shell
$ oc create secret generic <secret_name> -n tekton-chains \
    --from-file=KMS_AUTH_TOKEN=/home/user/KMS_AUTH_TOKEN
```

Replace `<secret_name>` with any name. The following example uses a KMS secret called `kms-secrets`.
Example configuration of the KMS token value using a secret called `kms-secrets`

- With this update, you can configure a list of namespaces as an argument to the Tekton Chains controller. If you provide this list, Tekton Chains watches pipeline runs and task runs only in the specified namespaces. If you do not provide this list, Tekton Chains watches pipeline runs and task runs in all namespaces.
Example configuration for watching only the `dev` and `test` namespaces
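A sketch of this configuration, assuming the namespace list is passed as a command-line argument to the Tekton Chains controller through the deployment override in the `TektonConfig` CR; the flag name and override mechanism are assumptions to verify for your release:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                  # Assumption: the controller accepts a namespace
                  # argument; verify the flag for your installed version
                  - name: tekton-chains-controller
                    args: ["--namespace=dev,test"]
```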
1.2.1.8. Tekton Results
- Before this update, Tekton Results used the `v1beta1` API format to store `TaskRun` and `PipelineRun` object records. With this update, Tekton Results uses the `v1` API format to store `TaskRun` and `PipelineRun` object records.
- With this update, Tekton Results can automatically convert existing records to the `v1` API format. To enable the conversion, set parameters in the `TektonResult` CR as shown in the following example:

Example of configuring Tekton Results to convert existing records to the `v1` API format
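A sketch, assuming the conversion is controlled through environment variables on the Tekton Results API deployment; the deployment, container, and variable names other than `CONVERTER_DB_LIMIT` are assumptions:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  options:
    deployments:
      # Assumed deployment and container names; verify in your cluster
      tekton-results-api:
        spec:
          template:
            spec:
              containers:
                - name: api
                  env:
                    # Assumed variable; enables the record conversion
                    - name: CONVERTER_ENABLE
                      value: "true"
                    # Number of records converted in a single transaction
                    - name: CONVERTER_DB_LIMIT
                      value: "256"
```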
In the `CONVERTER_DB_LIMIT` variable, set the number of records to convert at the same time in a single transaction.
- With this update, Tekton Results supports fetching forwarded logs from third-party logging APIs. You can enable the logging API through the `TektonResult` CR by setting `logs_api` to `true` and `logs_type` to `Loki`.
- With this update, you can configure automatic pruning of the Tekton Results database. You can specify the number of days for which records must be stored. You can also specify the schedule for running the pruner job that removes older records. To set these parameters, edit the `TektonResult` CR, as shown in the following example:

Example of configuring automatic pruning of the Tekton Results database
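A sketch of such a configuration; the field names follow the upstream Tekton Results retention policy and are assumptions to verify against your installed version:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  # Assumed field names; verify against your installed version
  retention-policy:
    runAt: "0 5 * * *"    # cron schedule for the pruner job
    maxRetention: 30      # number of days for which records are stored
```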
- With this update, you can configure Tekton Results to store event logs for pipelines and tasks. To enable storage of event logs, edit the `TektonResult` CR, as shown in the following example:

Example of configuring Tekton Results to store event logs
- With this update, you can configure Tekton Results to use the OpenShift Container Platform Cluster Log Forwarder to store all log data in a LokiStack instance, instead of placing it directly on a storage volume. This option enables scaling to a higher rate of pipeline runs and task runs.

To configure this option, you must deploy LokiStack in your cluster by using the Loki Operator and also install the OpenShift Logging Operator. Then you must create a `ClusterLogForwarder` CR in the `openshift-logging` namespace by using one of the following YAML manifests:

YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 6

YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 5

Finally, in the `TektonResult` CR in the `openshift-pipelines` namespace, set the following additional parameters:
- `loki_stack_name`: The name of the `LokiStack` CR, typically `logging-loki`.
- `loki_stack_namespace`: The name of the namespace where LokiStack is deployed, typically `openshift-logging`.
Example of configuring LokiStack log forwarding in the `TektonResult` CR
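Using the two parameters described above, the configuration looks like the following sketch:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  # Name of the LokiStack CR and the namespace where it is deployed
  loki_stack_name: logging-loki
  loki_stack_namespace: openshift-logging
```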
1.2.2. Breaking changes
- With this update, the metric name for the `EventListener` object for pipelines triggers that counts received events is changed from `eventlistener_event_count` to `eventlistener_event_received_count`.
- Before this update, when using Pipelines as Code, the pipeline run executed correctly even if you specified the `podTemplate` parameters for a pipeline run in one of the following incorrect ways for the API version:
  - For the `v1beta1` API, in the `taskRunTemplate.podTemplate` spec
  - For the `v1` API, in the `podTemplate` spec

With this update, when a pipeline run includes either of the incorrect specifications, the `podTemplate` parameters are disregarded.

To avoid this problem, define the pod template correctly for the API version that you are using, as in one of the following examples:

Example of specifying a pod template in the `v1` API

Example of specifying a pod template in the `v1beta1` API
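Both variants can be sketched together as follows; the pipeline reference and security context values are illustrative:

```yaml
# v1 API: the pod template goes under taskRunTemplate
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: demo-v1
spec:
  pipelineRef:
    name: demo
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
---
# v1beta1 API: the pod template is a top-level spec field
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-v1beta1
spec:
  pipelineRef:
    name: demo
  podTemplate:
    securityContext:
      fsGroup: 65532
```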
1.2.3. Known issues
- The `jib-maven` `ClusterTask` does not work if you are using OpenShift Container Platform version 4.16 and later.
1.2.4. Fixed issues
- Before this update, when you uninstalled Tekton Hub by deleting the `TektonHub` CR, the pod of the `hub-db-migration` job was not deleted. With this update, uninstalling Tekton Hub deletes the pod.
- Before this update, when you used Tekton Results to store pod logs from pipelines and tasks, the operation to store the logs sometimes failed. The logs would include the `UpdateLog` action failing with the `canceled context` error. With this update, the operation completes correctly.
- Before this update, when you passed a parameter value to a pipeline or task and the value included more than one variable with both full and short reference formats, for example, `$(tasks.task-name.results.variable1) + $(variable2)`, OpenShift Pipelines did not interpret the value correctly. The pipeline run or task run could stop execution and the Pipelines controller could crash. With this update, OpenShift Pipelines interprets the value correctly and the pipeline run or task run completes.
- Before this update, Tekton Chains failed to generate correct attestations when a task run included multiple tasks with the same name. For instance, when using a matrix of tasks, the attestation was generated only for the first image. With this update, Tekton Chains generates attestations for all tasks within the task run, ensuring complete coverage.
- Before this update, when you used the `skopeo-copy` task defined in the OpenShift Pipelines installation namespace and set its `VERBOSE` parameter to `false`, the task failed. With this update, the task completes normally.
- Before this update, when using Pipelines as Code, if you set the `concurrency_limit` spec in the global `Repository` CR named `pipelines-as-code` in the `openshift-pipelines` or `pipelines-as-code` namespace, which provides default settings for all `Repository` CRs, the Pipelines as Code watcher crashed. With this update, the Pipelines as Code watcher operates correctly with this setting.
- Before this update, all tasks in OpenShift Pipelines included an extra step compared to the cluster tasks of the same name that were available in previous versions of OpenShift Pipelines. This extra step increased the load on the cluster. With this update, the tasks no longer include the extra step, because it is integrated into the first step.
- Before this update, when you used one of the `s2i-*` tasks defined in the OpenShift Pipelines installation namespace and set its `CONTEXT` parameter, the task did not interpret the parameter correctly and failed. With this update, the task interprets the `CONTEXT` parameter correctly and completes successfully.
- Before this update, in Tekton Chains, the in-toto provenance metadata, the `URI` and `Digest` values, was incomplete. The values contained only the information of remote `Pipeline` and `Task` resources, but were missing the information of the remote `StepAction` resource. With this update, the provenance of the remote `StepAction` resource is recorded in the task run status and inserted into the in-toto provenance, which results in complete in-toto provenance metadata.
- Before this update, you could modify some of the parameters in the `spec` field of the `PipelineRun` and `TaskRun` resources that should not be modifiable after the resources were created. With this update, after the pipeline run and task run are created, you can modify only the allowed fields, such as the `status` and `statusMessage` fields.
- Before this update, if a step action parameter was an `array` type but a `string` value was passed in a task, there was no error indicating the inconsistent parameter types and the default parameter value was used instead. With this update, an error is added to indicate the inconsistent values: `invalid parameter substitution: %s. Please check the types of the default value and the passed value`.
- Before this update, task runs and pipeline runs were deleted by an external pruner when logs were streamed through the watcher. With this update, a finalizer is added to Tekton Results for `TaskRun` and `PipelineRun` objects to ensure that the runs are stored and not deleted. The runs are stored either as records or until the deadline has passed, which is calculated as the completion time plus the `store_deadline` time. The finalizer does not prevent deletion if legacy log streaming from the watcher or the pruner is enabled.
- Before this update, the web console supported the `v1beta1` API format to display the `TaskRun` and `PipelineRun` object records that are stored in Tekton Results. With this update, the console supports the `v1` API format to display `TaskRun` and `PipelineRun` object records stored in Tekton Results.
- Before this update, when using Pipelines as Code, if different `PipelineRun` definitions used the same task name but different versions, for example when fetching tasks from Tekton Hub, the wrong version was sometimes triggered, because Pipelines as Code used the same task version for all pipeline runs. With this update, Pipelines as Code triggers the correct version of the referenced task.
- Before this update, when you used a resolver to reference remote pipelines or tasks, transient communication errors caused an immediate failure to retrieve those remote references. With this update, the resolver requeues the retrieval and eventually retries it.
- Before this update, Tekton Results could use an increasing amount of memory when storing log information for pipeline runs and task runs. This update fixes the memory leak and Tekton Results uses a normal amount of memory.
- Before this update, when using Pipelines as Code, if your `.tekton` directory contained a pipeline that was not referenced by any `PipelineRun` definition triggered in the event, Pipelines as Code attempted to fetch all the required tasks for that pipeline even though it was not run. With this update, Pipelines as Code does not try to resolve pipelines that are not referenced in any pipeline run triggered by the current event.
1.2.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.1 is available on OpenShift Container Platform 4.15 and later versions.
1.2.5.1. Fixed issues
- Before this update, in the Pipelines overview page of the web console, users who did not have access to all namespaces could select All in the Projects list. The console displayed incorrect information for that selection, because the statistics for some of the namespaces were not available to the user. With this update, users who do not have access to all namespaces cannot select All in the Projects list.
- Before this update, when you tried to use the web console to start a pipeline or task that defined a parameter of the `array` type, entering a value for this parameter resulted in an error and you could not start the pipeline or task. With this update, you can use the web console to start a pipeline or task that defines a parameter of the `array` type, and entering a value for this parameter works normally.
- Before this update, when using Pipelines as Code with a Bitbucket Git repository, the Pipelines as Code controller sometimes crashed and a `panic: runtime error: invalid memory address or nil pointer dereference` message was logged. With this update, the Pipelines as Code controller does not crash.
- Before this update, when using Tekton Results, the `tekton-results-watcher` pod sometimes crashed and a `panic: runtime error: invalid memory address or nil pointer dereference` message was logged. With this update, the `tekton-results-watcher` pod does not crash.
- Before this update, if you enabled authentication in Tekton Results, you could not view information from Tekton Results in the web console, because the web console failed to pass an authentication token to the Tekton Results API. With this update, you can view information from Tekton Results in the web console when authentication is enabled.
- Before this update, when viewing information from Tekton Results in the web console, if you scrolled down to the end of a page, the console failed to fetch the next set of records and some of the information was not displayed. With this update, if you scroll to the end of the page, records from Tekton Results load correctly and all information displays correctly in the web console.
1.2.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.2 is available on OpenShift Container Platform 4.15 and later versions.
1.2.6.1. Fixed issues
- Before this update, in OpenShift Pipelines 1.16, you could not cancel a pipeline run by patching the `PipelineRun` object and setting the `spec.status` parameter to `Cancelled` if the first task run in the pipeline run was completed. Instead, an error message was logged: `PipelineRun was cancelled but had errors trying to cancel the TaskRuns and/or Runs`. With this update, the pipeline run is cancelled successfully.
1.2.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.3 is available on OpenShift Container Platform 4.15 and later versions.
1.2.7.1. Fixed issues
- Before this update, in some cases the Tekton Chains controller repeatedly crashed, making the Tekton Chains component unusable. With this update, the controller no longer crashes.
- Before this update, if you defined a matrix task that included both regular parameters and `matrix` parameters, the `tekton-pipelines-controller` component crashed and logged a segmentation fault message. If the task was not removed, the component continued to crash and did not run any pipelines. With this update, the controller no longer crashes in such cases.
1.2.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.4
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.4 is available on OpenShift Container Platform versions 4.12, 4.14, 4.15, 4.16, 4.17, and 4.18.
1.2.9. Known issues
- The Pipelines controller no longer adds the `taskrun` and `pipelinerun` UID labels to pods. As a result, fetching logs through Tekton Results fails. Additionally, any tasks or jobs that rely on these labels might fail or behave unexpectedly.
1.2.9.1. Fixed issues
- Before this update, when you used the `buildah` task that is supplied in the `openshift-pipelines` namespace and provided a credentials secret through the `dockerconfig` workspace, the task failed with a `permission denied` error. This failure occurred because directories passed as volumes were assigned default `ReadWrite` permissions, while the entitlement secret was mounted as read-only. This mismatch caused the task to fail. With this update, the `buildah` task supports the entitlement use case without encountering permission issues.
- Before this update, the `git-clone` task supplied in the `openshift-pipelines` namespace used an outdated image that failed on disconnected clusters. With this update, a newer image for the `git-clone` task supports operation in disconnected environments.