Chapter 1. Red Hat OpenShift Pipelines release notes
For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.
Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
| Status | Description |
|---|---|
| TP | Technology Preview |
| GA | General Availability |
| Operator (OpenShift Pipelines Version) | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.18 | 0.68.x | 0.31.x | 0.40.x | 0.24.x (GA) | 1.20.x (TP) | 0.33.x (GA) | 0.14.x (GA) | 0.5.x (TP) | 4.15, 4.16, 4.17, 4.18 | GA |
| 1.17 | 0.65.x | 0.30.x | 0.39.x | 0.23.x (GA) | 1.19.x (TP) | 0.29.x (GA) | 0.13.x (TP) | 0.4.x (TP) | 4.15, 4.16, 4.17, 4.18 | GA |
| 1.16 | 0.62.x | 0.29.x | 0.38.x | 0.22.x (GA) | 1.18.x (TP) | 0.28.x (GA) | 0.12.x (TP) | 0.3.x (TP) | 4.15, 4.16, 4.17, 4.18 | GA |
| 1.15 | 0.59.x | 0.27.x | 0.37.x | 0.20.x (GA) | 1.17.x (TP) | 0.27.x (GA) | 0.10.x (TP) | 0.2.x (TP) | 4.14, 4.15, 4.16 | GA |
| 1.14 | 0.56.x | 0.26.x | 0.35.x | 0.20.x (GA) | 1.16.x (TP) | 0.24.x (GA) | 0.9.x (TP) | NA | 4.12, 4.13, 4.14, 4.15, 4.16 | GA |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
1.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
1.3. Release notes for Red Hat OpenShift Pipelines 1.18
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.18 is available on OpenShift Container Platform 4.15 and later versions.
1.3.1. New features
In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.18:
1.3.1.1. Pipelines
- With this release, log messages for the OpenShift Pipelines controller are improved to enhance readability, especially when empty variables are present.
- With this release, you can configure resolution timeout settings for pipeline resolvers to gain greater flexibility and control when running a pipeline.

  Example of configuring the timeout settings for pipeline resolvers

  ```yaml
  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  spec:
    # ...
    pipeline:
      options:
        configMaps:
          config-defaults:
            data:
              default-maximum-resolution-timeout: "5m" # Maximum resolution timeout for resolvers
          bundleresolver-config:
            data:
              fetch-timeout: "1m" # Fetch timeout for the bundle resolver
  # ...
  ```
1.3.1.2. Operator
- With this release, OpenShift Pipelines supports community tasks. The following community tasks are installed by default in the `openshift-pipelines` namespace:
  - `argocd-task-sync-and-wait`
  - `git-cli`
  - `helm-upgrade-from-repo`
  - `helm-upgrade-from-source`
  - `jib-maven`
  - `kubeconfig-creator`
  - `pull-request`
  - `trigger-jenkins-job`
- With this release, OpenShift Pipelines supports the deployment of new containers through the `TektonConfig` CR.

  Example of deploying a new `kube-rbac-proxy` container by using the `TektonConfig` CR

  ```yaml
  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  # ...
  spec:
    result:
      options:
        deployments:
          tekton-results-watcher:
            spec:
              template:
                spec:
                  containers:
                    - name: kube-rbac-proxy
                      args:
                        - --secure-listen-address=0.0.0.0:8443
                        - --upstream=http://127.0.0.1:9090/
                        - --logtostderr=true
                      image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12
  # ...
  ```
- With this release, the high availability options of the OpenShift Pipelines controller are enhanced with `StatefulSet` ordinals as an alternative to the existing leader election mechanism. This enables the Operator to use either quick recovery with leader election or consistent workload distribution with `StatefulSet` ordinals.

  Leader election, which is the default option, offers failover capabilities but might cause hot-spotting. In contrast, the new `StatefulSet` ordinals approach ensures keys are evenly spread across replicas for a more balanced workload distribution.

  You can switch to the `StatefulSet` ordinals method by setting the `statefulset-ordinals` parameter to `true` in the `TektonConfig` custom resource:

  Example of enabling the `StatefulSet` ordinals feature

  ```yaml
  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  spec:
    # ...
    pipeline:
      performance:
        disable-ha: false
        buckets: 4
        replicas: 4
        statefulset-ordinals: true
  # ...
  ```

  Using `StatefulSet` ordinals for high availability is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

  For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.3.1.3. Triggers
- With this release, the `EventListener` object for OpenShift Pipelines triggers now includes the `imagePullSecrets` field to specify a secret that the listener uses to pull images from private registries.

  Example of using the `imagePullSecrets` field

  ```yaml
  apiVersion: triggers.tekton.dev/v1beta1
  kind: EventListener
  metadata:
    name: imagepullsecrets-example
  # ...
  spec:
    serviceAccountName: triggers-example
    resources:
      kubernetesResource:
        spec:
          template:
            spec:
              imagePullSecrets:
                - name: docker-login
  # ...
  ```
1.3.1.4. CLI
- With this release, the `opc` command line utility is shipped with the following components:
  - Pipelines as Code version 0.33.0
  - CLI version 0.40.0
  - Results version 0.14.0
  - Manual Approval Gate version 0.5.0
1.3.1.5. Pipelines as Code
- With this release, a `PipelineRun` resource can be triggered based on which files have changed by using the `on-path-change` and `on-path-change-ignore` annotations, which simplifies the process by not requiring complex CEL expressions.

  Example of using the `on-path-change` and `on-path-change-ignore` annotations

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "on-path-change"
    annotations:
      pipelinesascode.tekton.dev/on-path-change: "[docs/**]" # Trigger when files under docs/ change
      pipelinesascode.tekton.dev/on-path-change-ignore: "[.github/**]" # Ignore changes under .github/
  spec:
    # ...
  ```
- With this release, a `PipelineRun` resource can now be triggered by comments on pushed commits on GitHub by using the `on-comment` annotation. This feature gives you more control over when a `PipelineRun` is triggered, enabling you to match it based on specific comments.

  You must configure a pipeline run to enable triggering with the `on-comment` annotation:

  Example of enabling triggering of a pipeline run through comments on pushed commits

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "on-comment-pr"
    annotations:
      pipelinesascode.tekton.dev/on-comment: "^/hello-world"
  spec:
    # ...
  ```

  The example configures the `/hello-world` command. When you use this command in a comment on a pushed commit, for example, on the `main` branch, the `on-comment-pr` pipeline run is triggered.
- With this release, you can use the `tkn pac info globbing [-s | -d] "<pattern>"` command to test patterns with sample input data or in the working directory where the command is run. The command tests whether the patterns work properly in the `PipelineRun` definition.

  For example, you can test the following `on-target-branch` annotation to ensure that it matches the `main` branch in the Git event payload:

  Example `on-target-branch` annotation

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "on-target-branch"
    annotations:
      pipelinesascode.tekton.dev/on-target-branch: "[refs/heads/*]"
  spec:
    # ...
  ```

  Example command

  ```shell
  $ tkn pac info globbing -s "refs/heads/main" "refs/heads/*"
  ```
- With this release, Pipelines as Code supports commas within annotation values by using the `&#44;` HTML entity for a comma. Now, you can use comma-separated values together with commas used directly in the values.

  For example, to match two branches called `main` and `branchWith,comma`, specify the annotation as follows:

  Example annotation with commas

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "comma-annotation"
    annotations:
      pipelinesascode.tekton.dev/on-target-branch: "[main, branchWith&#44;comma]"
  spec:
    # ...
  ```
- With this release, after a pull request is closed or merged, all associated `PipelineRun` resources that have the `cancel-in-progress: true` annotation are automatically canceled. To enable this feature, specify the `cancel-in-progress` annotation in your `PipelineRun` definition:

  Example `cancel-in-progress` annotation

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "cancel-in-progress"
    annotations:
      pipelinesascode.tekton.dev/cancel-in-progress: "true"
  spec:
    # ...
  ```
- With this release, an active `PipelineRun` resource that has the `cancel-in-progress: true` annotation is automatically canceled when a new `PipelineRun` resource with the same name is triggered by Pipelines as Code. Now, after a pull request is raised and triggers a `PipelineRun`, and subsequent commits to the same pull request trigger a new `PipelineRun`, the older obsolete `PipelineRun` is canceled to save resources.

  To enable the feature, set the `pipelinesascode.tekton.dev/cancel-in-progress` annotation to `true` in the `PipelineRun` resource:

  Example of enabling the cancellation of older pipeline runs

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    name: "cancel-in-progress"
    annotations:
      pipelinesascode.tekton.dev/cancel-in-progress: "true"
  spec:
    # ...
  ```
- Before this update, the `.pathChanged()` function used in CEL expressions with Bitbucket Data Center did not work. With this release, the functionality is implemented.
- With this release, you can match pipeline runs to pull requests by using labels. To match a pipeline run to a label, set the `pipelinesascode.tekton.dev/on-label` annotation in `PipelineRun` resources. Adding the label to a pull request immediately triggers the pipeline run. If a pull request still has the label and is updated with a commit, the pipeline run is triggered again.

  This functionality is supported on GitHub, GitLab, and Gitea repository providers. It is not supported on Bitbucket Cloud and Bitbucket Data Center.
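The label matching described above might be sketched as follows. This is a minimal illustration, not a documented example: it assumes the `on-label` annotation takes the same list-in-string syntax as the other Pipelines as Code annotations such as `on-event` and `on-target-branch`, and the label name `bug` is a placeholder:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: "match-on-label" # hypothetical name for illustration
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    # Assumption: on-label uses the same "[value1, value2]" string
    # syntax as the other annotations; "bug" is an example label name.
    pipelinesascode.tekton.dev/on-label: "[bug]"
spec:
  # ...
```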
1.3.1.6. Tekton Results
- With this release, the Operator installs and adds configuration for Tekton Results by default through the `TektonConfig` CR.
- With this release, the Tekton Results API server uses proxy environment variables in the OpenShift Pipelines console plugin to send authorization headers to the Tekton Results API.
- With this release, the Tekton Results logging information fetched from LokiStack includes the container name for each log entry.
1.3.1.7. Tekton Cache
With this release, OpenShift Pipelines includes the new `tekton-caches` tool functionality.
Tekton Cache is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The `tekton-caches` tool includes the following features:

- Use the `cache-upload` and `cache-fetch` step actions to preserve the cache directory where a build process keeps its dependencies, storing it in an S3 bucket, Google Cloud Storage (GCS) bucket, or an OCI repository.
- The `cache-fetch` and `cache-upload` step actions are installed by default in the `openshift-pipelines` namespace.
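As an illustration, a task might reference the installed step actions roughly as follows. This is a hedged sketch: referencing a step action through `ref` in a step is the standard Tekton mechanism, but the parameter names (`patterns`, `source`, `cacheDir`) and the bucket URI are assumptions for illustration, not an interface confirmed by this document:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-with-cache # hypothetical task name
spec:
  workspaces:
    - name: source
  steps:
    - name: fetch-cache
      # Reference the installed cache-fetch step action; the parameter
      # names below are assumptions, not a documented contract.
      ref:
        name: cache-fetch
      params:
        - name: patterns # files whose hash keys the cache entry
          value: ["go.mod", "go.sum"]
        - name: source # assumed S3/GCS/OCI cache location
          value: s3://example-bucket/cache
        - name: cacheDir # directory to restore the cache into
          value: $(workspaces.source.path)/cache
```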
1.3.2. Breaking changes
- With this release, Tekton Results no longer supports forwarding of logs by the Tekton Results watcher to storage in a persistent volume (PV), an S3 bucket, or a GCS bucket.
- Before this update, older versioned tasks and step actions were installed in the `openshift-pipelines` namespace. With this update, these older versions are removed, and only the latest two minor versions of tasks and step actions are installed. For example, in OpenShift Pipelines 1.18, the installed versioned tasks and step actions are `*-1-17-0` and `*-1-18-0`.
1.3.3. Known issues
- If you add or change any parameters in the `result:` section of the `TektonConfig` CR, the changes do not apply automatically. To apply the changes, restart the deployment or pod of the Results API server in the `openshift-pipelines` namespace.
1.3.4. Fixed issues
- Before this update, if you defined a task that included both regular parameters and `matrix` parameters, the `tekton-pipelines-controller` component crashed and logged a segmentation fault message. If the task was not removed, the component continued to crash and did not run any pipelines. With this update, the controller no longer crashes in such cases.
- Before this update, when a user tried to rerun a resolver-based `PipelineRun` resource by using the OpenShift Container Platform web console, the attempt resulted in an error with the `Invalid PipelineRun configuration, unable to start Pipeline.` message. With this update, the rerun no longer causes an error and works as expected.
- Before this update, when the `PipelineRun` resource failed, the Output tab of the `PipelineRun` resource in the OpenShift Container Platform web console displayed an error message instead of available results. With this update, the web console correctly shows the results.
- Before this update, the `buildah` task from the `openshift-pipelines` namespace failed when values for the `CONTEXT` and `DOCKERFILE` parameters pointed to different directories. With this update, the issue is fixed.
- Before this update, referencing parameters as default values in step actions caused the `TaskRun` resource referencing that step action to fail. With this release, the issue is fixed.

  Example of using a parameter as a default value

  ```yaml
  apiVersion: tekton.dev/v1alpha1
  kind: StepAction
  metadata:
    name: simple-step-action
  # ...
  spec:
    params:
      - name: param1 # This parameter is required
        type: string
      - name: param2 # This parameter uses the value of param1 as its default value
        type: string
        default: $(params.param1)
  # ...
  ```
- Before this update, when you created a `PipelineRun` resource that referenced a remote Git repository that contained a symlink pointing outside of that repository, the `PipelineRun` would fail. With this release, the `PipelineRun` no longer fails even if the symlink is invalid.
- Before this update, the resource requests for memory and ephemeral storage could sometimes exceed the defined limits. With this release, the issue is fixed.
- Before this update, in a cluster that uses injected sidecars, running pipelines sometimes adversely affected the OpenShift Pipelines controller. This was caused by faulty minor Kubernetes version checking. With this update, the issue is fixed.
- Before this update, when the list of results contained duplicate keys with an invalid result, the duplicates were removed and replaced with the last result for each key, which could change the order of the result list. With this release, when duplicate keys are removed, the order of the results is preserved, ensuring that the last valid result for each key is kept as the final result.
- Before this update, when using Pipelines as Code, GitLab instances hosted under a relative path, for example, `https://example.servehttp.com/gitlab`, failed to correctly update the status for merge requests. Although the initial event updates worked correctly, subsequent updates, for example, marking the pipeline run as `Finished`, did not appear in GitLab, leaving the status in the `Running` state. With this update, the status updates work correctly.
- Before this update, when using Pipelines as Code, if you passed the `onEvent` or `onTargetBranch` annotations in a `PipelineRun` resource with an empty `[]` value, it prevented matching of any `PipelineRun` resources. With this update, passing the annotations with an empty value returns an error.
- Before this update, when using Pipelines as Code, you could create the `Repository` custom resource (CR) without a URL or with an invalid URL. With this update, you can only create the `Repository` CR if you provide a valid URL. You can still create the `Repository` CR in the `openshift-pipelines` namespace without a URL to provide default settings for all repositories.
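For reference, a minimal `Repository` CR with a valid URL might look like the following sketch. The resource names and the repository URL are placeholders, not values from this document:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repository # placeholder name
  namespace: example-namespace
spec:
  # A valid repository URL is now required for Repository CRs
  # outside the openshift-pipelines namespace.
  url: https://github.com/example-org/example-repo
```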
- Before this update, when using Pipelines as Code, if a pull request was raised through a forked repository in GitLab, the `PipelineRun` resource was created successfully, but its final status was not reported in the GitLab UI. With this update, the issue is fixed.
- Before this update, when using Pipelines as Code, GitLab displayed check names for pipeline runs as a single entry for a pipeline. This caused issues when multiple pipeline runs were running simultaneously and one of them failed. If the other pipeline run succeeded, it would override the failed status and the pipeline would appear as successful. With this update, every check name shows separately for each pipeline run. Therefore, the status of pipelines shows correctly as failed if any of the pipeline runs fail.
- Before this update, when using Pipelines as Code, if you used the `{{ trigger_comment }}` variable in the `PipelineRun` resource definition and then submitted a multiline comment in GitHub, Pipelines as Code could report YAML validation errors for the pipeline run. With this update, the issue is fixed.
- Before this update, when using Pipelines as Code, a `/test branch:<branch>` command where the branch name contained parts that could be interpreted as other commands did not work properly. For example, in the command `/test branch:a/testbranch`, the branch name contains `/test`, and this command would fail. With this update, only the first `/test` substring in a comment is considered a command to prevent any malfunction.
- Before this update, when using Pipelines as Code, if an unauthorized user sent `/test`, `/retest`, or other GitOps commands on pushed commits in GitHub, `PipelineRun` resources were triggered, even though the user was not authorized to trigger them. With this update, an additional user authorization check is added, preventing the `PipelineRun` resources from being triggered without proper verification.
- Before this update, when using Pipelines as Code, if an unauthorized user raised a pull request and the repository administrator sent the `/ok-to-test` command, a pending check was created, even if there was no matching `PipelineRun` resource for the pull request event. With this update, no pending checks are created if there is no matching `PipelineRun` resource. Instead, a neutral check is created describing that there is no matching `PipelineRun`.
- Before this update, when using Pipelines as Code with Bitbucket Data Center, in a push event users could not reference all fields of the event payload in the `PipelineRun` resource, for example, the `body.changes` object. This update fixes the issue.

  Example of using the `{{ body.changes }}` dynamic variable

  ```yaml
  apiVersion: tekton.dev/v1
  kind: PipelineRun
  # ...
  spec:
    params:
      - name: ref_id
        value: "{{ body.changes[0].ref.id }}"
  # ...
  ```
- Before this update, when using Pipelines as Code, if you used the `generateName` field in a `PipelineRun` resource to match the resource to the `incoming` webhook, the match would not work correctly. With this update, the issue is fixed.
- Before this update, when using Pipelines as Code, the `on-cel-expression` annotation was not working on push events in Bitbucket Data Center. With this update, the issue is fixed.
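For context, a push-event CEL match of the kind this fix restores might be written as follows. This is an illustrative sketch: the expression shown is an assumption about typical `on-cel-expression` usage, not an example taken from this document:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: "push-main-cel" # hypothetical name
  annotations:
    # Assumption for illustration: match push events that target
    # the main branch.
    pipelinesascode.tekton.dev/on-cel-expression: >-
      event == "push" && target_branch == "main"
spec:
  # ...
```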