Chapter 1. Red Hat OpenShift Pipelines release notes
For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.
Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the OpenShift Container Platform web console, up to OpenShift Container Platform version 4.20.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
| TP | Technology Preview |
| GA | General Availability |
In the following table, the Operator column lists the Red Hat OpenShift Pipelines version; the remaining component columns list component versions.

| Operator | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | Pruner | Cache | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.21 | 1.6.x | 0.34.x | 0.43.x | 0.26.x (GA) | 1.23.x (TP) | 0.39.x (GA) | 0.17.x (GA) | 0.7.x (TP) | 0.3.x (GA) | 0.3.x (GA) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |
| 1.20 | 1.3.x | 0.33.x | 0.42.x | 0.25.x (GA) | 1.22.x (TP) | 0.37.x (GA) | 0.16.x (GA) | 0.6.x (TP) | 0.2.x (TP) | 0.2.x (TP) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |
| 1.19 | 1.0.x | 0.32.x | 0.41.x | 0.25.x (GA) | 1.21.x (TP) | 0.35.x (GA) | 0.15.x (GA) | 0.6.x (TP) | 0.2.x (TP) |  | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |
The OpenShift console plugin for OpenShift Pipelines follows the same version as the OpenShift Pipelines Operator.
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
1.2. Release notes for Red Hat OpenShift Pipelines 1.21
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.21 is available on OpenShift Container Platform 4.14 and later supported versions.
For more information about the supported versions of OpenShift Container Platform, see Life Cycle Dates.
1.2.1. New features and enhancements
In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.21:
Pipelines
- Read-only root filesystems enabled for OpenShift Pipelines containers
-
With this update, all OpenShift Pipelines containers, including controllers and webhooks, are configured with the `readOnlyRootFilesystem` parameter set to `true`. This change follows security best practices for Kubernetes-based workloads. By enforcing a read-only root filesystem, OpenShift Pipelines improves its security posture by helping to prevent unauthorized modifications to the container runtime environment.
- Override individual TaskRun timeouts in a PipelineRun
With this update, you can override the timeout for individual `TaskRun` objects within a `PipelineRun` by using the `spec.taskRunSpecs[].timeout` field. This allows finer-grained control over task execution duration without affecting the overall `PipelineRun` timeout.
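For illustration, a minimal `PipelineRun` sketch using this field; the pipeline and task names are placeholders:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-and-test
spec:
  pipelineRef:
    name: example-pipeline      # placeholder pipeline
  timeouts:
    pipeline: 1h                # overall PipelineRun timeout
  taskRunSpecs:
    - pipelineTaskName: build   # name of a task in the pipeline; placeholder
      timeout: 20m              # overrides the timeout for this TaskRun only
```

The `build` task fails if it runs longer than 20 minutes, while the other tasks remain governed by the one-hour pipeline timeout.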
- Resolver caching for bundle, Git, and cluster resolvers
With this update, resolver caching is supported for bundle, Git, and cluster resolvers. This helps reduce redundant fetches, minimize external API calls, and improve pipeline execution reliability, especially when external services impose rate limits or are temporarily unavailable.
- Global settings: You can configure caching by using your `TektonConfig` custom resource (CR), where you can set the cache size and adjust the time to live (TTL) value without restarting controllers:
  - `max-size`: defines the maximum number of cached entries. The default value is `"1000"`.
  - `ttl`: defines the time to live of a cache entry. The default value is `"5m"`.
- Per-resolver defaults: You can set the default caching mode for specific resolvers by using the `bundleresolver-config`, `git-resolver-config`, or `cluster-resolver-config` config maps:

  ```yaml
  #...
  data:
    cache: "auto"
  #...
  ```

  The available modes are `auto` (cache only immutable references), `always` (cache everything), and `never` (disable caching). The default value is `auto`.
- `TaskRun` or `PipelineRun` overrides: You can override the default caching mode for individual runs by adding the `cache` parameter to the `TaskRun` or `PipelineRun` specification:

  ```yaml
  #...
  params:
    - name: cache
      value: "always"
  #...
  ```
Resolver caching helps improve reliability, reduce latency for frequently accessed resources, and decrease load on external services such as GitHub and OCI registries. Cache hits, misses, and timestamps are added to resource annotations for resolved resources.
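As a sketch of the global configuration: only the `max-size` and `ttl` keys and their defaults come from this release note; the nesting of the cache settings inside the `TektonConfig` CR, and the `resolver-cache-config` config map name, are assumptions here, so verify the exact location for your release:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      configMaps:
        resolver-cache-config:   # hypothetical config map name for resolver caching
          data:
            max-size: "1000"     # maximum number of cached entries (default)
            ttl: "5m"            # time to live for each cache entry (default)
```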
- Array values can be resolved in when expressions
With this update, array value resolution is enabled in the `input` attribute of `when` expressions. A `when` expression can now consume a parameter array configured in a `PipelineRun` custom resource, or an array result produced by one task and consumed by a subsequent task.
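A minimal sketch of a `when` expression resolving a parameter array in its `input` attribute; the resource names are placeholders and the matching semantics shown are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: when-array-example
spec:
  params:
    - name: environments
      value:
        - dev
        - staging
  pipelineSpec:
    params:
      - name: environments
        type: array
    tasks:
      - name: deploy
        when:
          - input: "$(params.environments[*])"   # array resolved in the input attribute
            operator: in
            values: ["dev", "staging", "prod"]
        taskSpec:
          steps:
            - name: run
              image: registry.access.redhat.com/ubi9/ubi-minimal
              script: echo "deploying"
```

A task result of type array, referenced as `$(tasks.<task-name>.results.<result-name>[*])`, can be consumed the same way by a subsequent task.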
- Support for display names added to steps
With this update, a `displayName` field is added to `Step` objects, so you can give each step of a `Task` resource a human-readable name.
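For illustration, a `Task` sketch with `displayName` values set on its steps; the task and image names are placeholders:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-app
spec:
  steps:
    - name: clone
      displayName: "Clone the repository"    # human-readable name shown in status and UIs
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: echo "cloning"
    - name: build
      displayName: "Build the application"
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: echo "building"
```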
Operator
- A new parameter for controlling pipeline service account permissions
Before this update, the pipeline service account automatically received the `edit` ClusterRole within its namespace, following legacy RBAC behavior. With this update, you can use the new `legacyPipelineRbac` parameter to control permissions. Set it to `false` to prevent the pipeline service account from receiving the `edit` ClusterRole, enforcing more restricted permissions by default. The default value is `true`.

Important: Existing role bindings are not automatically removed from namespaces with the pipeline service account. You must remove them manually when changing this parameter on existing deployments.
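A sketch of how the parameter might be set; the placement under `spec.params` is an assumption here (it follows the style of other Operator-level parameters), so verify the exact location for your release:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: legacyPipelineRbac   # parameter name from the release note; placement assumed
      value: "false"             # do not grant the edit ClusterRole to the pipeline service account
```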
- Route is automatically created for Tekton Results API endpoint
-
With this update, the Tekton Results component automatically creates an OpenShift `Route` resource for its API endpoint. You can optionally configure a custom host and path for the `Route`. This helps ensure the Tekton Results API is accessible externally without requiring additional user configuration.
User interface
- Group support for Approval Tasks
-
With this update, Approval Tasks support group approvers. You can specify a group by using the `group:<groupName>` syntax in the `params` list. Any member of the group can approve or reject the task. Approvals by any group member count as a single approval, while a rejection by any member immediately counts as a single rejection and fails the task. Group members also receive notifications about the task, just like individual approvers.
- Pipeline Overview page retains filter selections
- With this update, the Pipeline Overview page persists user selections for the Namespace, Time Range, and Refresh Interval filters. These selections are stored in the application state and URL query parameters. This helps ensure a more consistent experience when navigating away, returning to the page, or refreshing the browser. The filters reset only when the user switches namespaces.
- Time-range filter label updated for clarity
- With this update, the time-range filter previously labeled "Last weeks" is updated to "Last week". This change resolves customer confusion regarding the intended single-week time range and helps ensure consistency between the UI and the Tekton Results API.
Pipelines as Code
- Improved performance for GitLab project access control checks
-
With this update, Pipelines as Code caches the results of GitLab Project Access Control List (ACL) membership queries. This optimization reduces repeated calls to the GitLab API, improving performance and efficiency during permission checks.
- Configure the number of lines in error log snippets
With this update, you can configure how many lines appear in error log snippets by using the new `error-log-snippet-number-of-lines` setting. Log snippets are automatically truncated to 65,000 characters, preventing failures when posting check-run updates to the GitHub API.
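For illustration, the setting would go in the Pipelines as Code settings config map; the `pipelines-as-code` config map name and `openshift-pipelines` namespace reflect a typical product installation but are assumptions here, and the line count shown is an example value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code        # Pipelines as Code settings config map (assumed name)
  namespace: openshift-pipelines # assumed installation namespace
data:
  error-log-snippet-number-of-lines: "5"  # number of log lines shown in error snippets
```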
- GitLab commit status set on the source and target project by default
- With this update, Pipelines as Code attempts to set the commit status on both the source and target upstream projects in GitLab. If permission issues cause both attempts to fail, the system falls back to posting a comment instead. This ensures that the commit status is communicated in the most relevant location for merge requests.
- Improved error message for GitLab private repository access failures
With this update, Pipelines as Code proactively checks whether the configured GitLab token has the required read access to the source repository when processing merge requests from private forks. If the token lacks the necessary `read_repository` scope, Pipelines as Code fails early with the following error:

```
an error occurred: failed to access GitLab source repository ID REPOSITORY_ID: please ensure token has 'read_repository' scope on that repository
```

This helps ensure that permission issues are easier to identify and prevents pipelines from failing unexpectedly later in the process.
- Trigger and cancel PipelineRuns for Git tags
With this update, you can trigger `PipelineRuns` by using comments on commits associated with a Git tag. This provides flexible, version-specific control of CI/CD workflows.

To trigger a `PipelineRun`, add a comment on the tagged commit by using the format `/test <pipeline-name> tag:<tag-name>`, for example:

```
/test xyz-pipeline-run tag:v1.0.0
```

To cancel a `PipelineRun`, use the format `/cancel <pipeline-name> tag:<tag-name>`, for example:

```
/cancel xyz-pipeline-run tag:v1.0.0
```

This feature is supported for GitHub and GitLab, enabling teams to trigger or cancel `PipelineRuns` tied to specific tagged versions.
- SHA validation added to /ok-to-test commands to prevent race conditions
-
With this update, a new setting, `require-ok-to-test-sha`, is introduced in Pipelines as Code to enforce commit SHA validation when using the `/ok-to-test` comment command on GitHub pull requests. This feature mitigates a critical time-of-check to time-of-use (TOCTOU) race condition vulnerability, specific to the GitHub provider, where an attacker could execute a pipeline on an unapproved SHA. When the setting is enabled, users must specify the exact commit SHA, for example, `/ok-to-test <sha>`, for approval. This ties the approval directly to a specific commit, preventing pipeline execution on any subsequent maliciously force-pushed SHA.
- Incoming webhook targets support glob patterns
With this update, the `targets` field in the Pipelines as Code `Repository` custom resource (CR) is enhanced to support glob patterns for incoming webhook events. In addition to exact string matching, you can use patterns to match multiple branch names with a single rule, helping simplify configuration and management for complex repository structures.

The following shell-style glob patterns are supported:

- `*`: Matches any sequence of characters, for example, `feature/*` matches `feature/login` or `feature/api`.
- `?`: Matches exactly one character, for example, `v?` matches `v1` or `v2`.
- `[abc]`: Matches a single character from the defined set, for example, `[A-Z]*` matches any branch starting with an uppercase letter.
- `[0-9]`: Matches a single digit, for example, `v[0-9]*.[0-9]*` matches `v1.2` or `v10.5`.
- `{a,b,c}`: Matches any of the comma-separated alternatives, for example, `{dev,staging}/*` matches `dev/test` or `staging/test`.

If multiple incoming webhooks match the same branch, the first matching webhook defined in the YAML order is used. To ensure expected behavior, place more specific webhooks before general catch-all webhooks in your configuration.
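A sketch of a `Repository` CR using glob targets; the repository URL and secret name are placeholders, and the incoming webhook fields follow the usual Pipelines as Code layout, so verify the exact schema for your version:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repo
spec:
  url: https://github.com/example/repo
  incoming:
    - type: webhook-url
      secret:
        name: repo-incoming-secret   # Secret holding the incoming webhook token
      targets:
        - main                       # exact match
        - "release-[0-9]*"           # glob: matches release-1, release-2, and so on
        - "feature/*"                # glob: matches any feature branch
```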
- Incoming webhook requests support a new namespace parameter
-
With this update, an optional `namespace` parameter is added to Pipelines as Code incoming webhook requests. When specified along with the existing `repository` parameter, it uniquely identifies the targeted `Repository` custom resource (CR). This ensures correct routing even when multiple repositories share the same name across the cluster. If multiple `Repository` CRs exist with the same name and the `namespace` parameter is omitted, Pipelines as Code returns a `400` status code, requiring the user to provide the namespace for unambiguous identification.
- Default secret key added for incoming webhooks
-
With this update, when no `secret` key is explicitly specified in the `Repository` custom resource incoming webhook configuration, the system defaults to using `"secret"` as the key name when retrieving the secret value from the `Secret` resource.
Tekton Results
- Fine-grained retention policies
With this update, Tekton Results supports fine-grained retention policies. You can set different retention periods for `PipelineRun` and `TaskRun` results based on namespace, labels, annotations, and status. The first matching policy is applied; if none match, the `defaultRetention` period is used.

Configure policies in the `tekton-results-config-results-retention-policy` config map by using the `policies` key. Each policy includes a `selector` and a `retention` period. For example, a config map can be set up that:
- Runs the pruning job daily at 2:00 AM, as specified by the `runAt` cron schedule.
- Keeps failed Results in `production` or `prod-east` with `criticality: high` for 180 days.
- Keeps Results with the annotation `debug/retain: "true"` for 14 days.
- Keeps other Results in `production` or `prod-east` for 60 days.
- Keeps Results in the `ci` namespace for 10 hours.
- Keeps all other Results for the default retention period (30 days).
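A sketch of such a config map; the config map name and the `runAt`, `defaultRetention`, `policies`, `selector`, and `retention` keys come from the text above, but the field names inside each selector are assumptions here, so check the product documentation for the authoritative policy schema:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-results-config-results-retention-policy
  namespace: openshift-pipelines   # assumed installation namespace
data:
  runAt: "0 2 * * *"               # prune daily at 2:00 AM
  defaultRetention: "720h"         # 30 days
  policies: |
    # Selector field names below are illustrative.
    - name: keep-critical-failures
      selector:
        namespaces: ["production", "prod-east"]
        labels:
          criticality: high
        status: failed
      retention: 180d
    - name: keep-debug
      selector:
        annotations:
          debug/retain: "true"
      retention: 14d
    - name: prod-default
      selector:
        namespaces: ["production", "prod-east"]
      retention: 60d
    - name: ci-short
      selector:
        namespaces: ["ci"]
      retention: 10h
```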
- PostgreSQL support updated to version 17.5
- With this update, Tekton Results supports PostgreSQL version 17.5.
- New metrics for runs not stored in the database
With this update, Tekton Results adds the `runs_not_stored_count` metric, emitted by the default `watcher` container. This metric tracks the number of `PipelineRun` and `TaskRun` instances that are deleted before they can be persisted in the database. Supported tags include:

- `kind`: the type of run (`PipelineRun` or `TaskRun`)
- `namespace`: the namespace where the run was created

For example:

```
watcher_runs_not_stored_count{kind="PipelineRun",namespace="default"} 5
```
- Metrics for run storage latency
With this update, Tekton Results adds the `run_storage_latency_seconds` metric, emitted by the default `watcher` container. This metric measures the time between run completion and its successful storage in the database. Supported tags include:

- `kind`: the type of run (`PipelineRun` or `TaskRun`)
- `namespace`: the namespace where the run was created

For example:

```
watcher_run_storage_latency_seconds{kind="PipelineRun",namespace="default"} 0.5
```

This metric is emitted only when a run transitions from completed to stored, helping ensure accurate measurement of storage latency without being skewed by multiple reconciliations.
- CLI configuration persists across namespaces
-
Before this update, the API configuration for the Tekton Results CLI was fetched from the current namespace context. As a consequence, users had to re-authenticate or reconfigure after switching namespaces. With this update, the configuration persists across namespace switches, removing the need to run the `opc results config set` command repeatedly.
- Default database migration to PostgreSQL version 15
The PostgreSQL image used by the default Tekton Results deployment is upgraded from version 13 to version 15, addressing the upcoming end of life (EOL) for PostgreSQL 13. This process implements an automated migration, which uses the slower yet more reliable data copy mechanism from the original PostgreSQL image.
Important: If you are using the default PostgreSQL deployment, ensure that you have backed up your data and that the underlying persistent volume claim (PVC) has more than 50% free space before you start the OpenShift Pipelines upgrade. If you are using an external database with Tekton Results, you are not affected by this change.
Tekton Cache
- Tekton Cache is generally available
- With this update, Tekton Cache is generally available (GA) and is fully supported for production use. Tekton Cache was previously available as a Technology Preview (TP) feature.
- Tekton Cache binaries available for public download
-
With this update, the Tekton Cache product binaries are available for download without authentication. This enables customers to use the Red Hat binaries in their custom `StepAction` configurations.
- Improved support for Docker credentials
-
Before this update, Tekton Cache required Docker secrets to include a `config.json` key. With this update, Docker secrets without a `config.json` key are supported. The `DOCKER_CONFIG` parameter can point to any location containing either a `config.json` file or a `.dockerconfigjson` file, improving flexibility for private registry authentication.
Tekton Triggers
- GitHub interceptor enforces SHA-256 signature validation
-
Before this update, the GitHub interceptor supported both SHA-1 (`X-Hub-Signature`) and SHA-256 (`X-Hub-Signature-256`) signatures for webhook validation. With this update, the GitHub interceptor enforces a stricter security posture and only accepts SHA-256 signatures via the `X-Hub-Signature-256` header, dropping support for SHA-1. Standard GitHub webhooks remain unaffected, but any custom webhook implementations must update their HMAC signature generation from SHA-1 to SHA-256 to avoid validation errors.
Tekton Hub
- Default database migration to PostgreSQL version 15
- The PostgreSQL database version used by the Tekton Hub is migrated from version 13 to version 15 to address the upcoming end of life (EOL) for PostgreSQL 13. This upgrade ensures continued stability and support for the Tekton Hub. Additionally, the process implements an automated transition from version 13 to version 15 for existing deployments.
Tekton Chains
- Flexible provenance and signing configuration
- With this update, you can choose to disable image signing while still enabling provenance generation and attestation signing. This enhancement helps provide more flexibility in managing security artifacts within your CI/CD pipelines.
- New option to disable OCI image signing
-
With this update, a new configuration option, `artifacts.oci.disable-signing`, is added to the Tekton Chains config map. This option enables you to skip the OCI image signing performed by Tekton Chains while still maintaining provenance generation and attestation signing. This feature is intended for users who prefer to sign images by using an external workflow, such as `cosign sign`, but still require Tekton Chains to maintain supply-chain integrity for metadata. By default, this option is set to `false`, ensuring no change in behavior for existing configurations unless explicitly enabled.
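The option is a key in the Tekton Chains config map; a sketch, where the `chains-config` name and namespace are assumptions for a typical product installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config              # Tekton Chains settings config map (assumed name)
  namespace: openshift-pipelines   # assumed installation namespace
data:
  artifacts.oci.disable-signing: "true"  # skip OCI image signing; provenance and attestation signing still run
```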
CLI
- Support for rerunning resolver-based PipelineRuns
-
With this update, the `tkn` CLI introduces a new `--resolvertype` flag for the `tkn p start` command. This flag allows you to specify the resolver type, such as `git`, `http`, `hub`, `cluster`, `bundle`, or `remote`, when re-running a resolver-based `PipelineRun`. You can reference an existing `PipelineRun` name to re-run it with the specified resolver type.
- New --resolvertype to support rerunning resolver-based PipelineRuns
-
With this update, the `tkn p start --last` command supports the `--resolvertype` flag. This flag enables users to specify the resolver type, such as `git`, `hub`, or `bundle`, when re-running a previous resolver-based `PipelineRun`. Additionally, the help text for the command has been updated to use the correct pronoun.
Pruner
- Event-driven pruner is generally available
With this update, the event-driven pruner `tektonpruner` is generally available (GA) and is fully supported as a pruning mechanism for OpenShift Pipelines, with centralized and hierarchical configuration.

While existing pruning mechanisms, such as the default job-based pruner, Pipelines as Code `keep-max-run`, and Tekton Results-based `retention`, continue to function, users currently relying on legacy pruning approaches are encouraged to adopt the event-driven pruner `tektonpruner` to help ensure smoother performance, more predictable cleanup, and reduced operational overhead.

The following enhancements are present in this release:
- Namespace-level pruner configuration: With this update, the event-driven pruner supports custom pruning policies at the namespace level. You can define custom pruning policies, such as time to live and history limits, that override global defaults by creating a `tekton-pruner-namespace-spec` config map in your namespace.
- Selector-based pruning configuration: With this update, the event-driven pruner supports selector-based resource matching with the `matchLabels` and `matchAnnotations` selectors. When you specify both `matchLabels` and `matchAnnotations`, the matching logic is AND, and name matching has absolute precedence regardless of selector presence. Selector-based resource matching is supported only in the namespace-level `tekton-pruner-namespace-spec` config maps, not in the global `TektonConfig` CR configuration.
- Cluster-wide maximum limits added for the event-driven pruner configuration: With this update, the event-driven pruner enforces cluster-wide maximum limits for configuration fields when global limits are not specified. If global limits are set, they take precedence. This validation helps ensure that namespace-specific pruner settings do not exceed the defined maximum values, helping prevent potential resource overuse. The maximum TTL is `2592000` seconds and the maximum history limit is `100`.
- Pruner config map validation: With this update, the event-driven pruner validates config maps at apply time by using a Kubernetes admission webhook. Invalid configurations, such as unsupported formats, negative values, or namespace settings that exceed global limits, are rejected with clear error messages instead of failing silently. For validation to apply, pruner config maps must include the following labels:

  ```yaml
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
  ```
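A sketch of a namespace-level override config map; the config map name and required labels come from the text above, but the data field names are assumptions here, so verify the schema for your version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-namespace-spec   # per-namespace pruner overrides
  namespace: my-app                    # placeholder namespace
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
data:
  # Field names below are illustrative.
  ttlSecondsAfterFinished: "3600"   # delete runs one hour after completion
  successfulHistoryLimit: "5"       # keep the last five successful runs
  failedHistoryLimit: "10"          # keep the last ten failed runs
```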
1.2.2. Technology preview features
Pipelines as Code
- A new command for evaluating CEL expressions (Technology Preview)
A new `tkn` CLI command, `tkn pac cel`, allows administrators to interactively evaluate Common Expression Language (CEL) expressions against webhook payloads and headers. You can use the following syntax:

```
$ tkn pac cel -b <body.json> -H <headers.txt>
```

- `-b` or `--body`: Specify a path to a JSON body file. This is a webhook payload.
- `-H` or `--headers`: Specify a path to a headers file. This can be plain text, a JSON file, or a gosmee script.
- `-p` or `--provider`: Specify the provider. This can be `github`, `gitlab`, `bitbucket-cloud`, `bitbucket-datacenter`, `gitea`, or `auto` for automatic detection from the payload.
Key capabilities include:
- Interactive mode: Provides a prompt in the terminal to type CEL expressions, with tab completion for variables and payload fields.
- Variable access using:
  - direct variables, such as `event` or `target_branch`
  - webhook payload fields, for example, `body.action`
  - HTTP headers, for example, `headers['X-GitHub-Event']`
  - Pipelines as Code parameters, for example, `pac.revision`
- Debugging: Quickly test and debug CEL expressions used in `PipelineRun` resource configurations against real webhook data.
Manual Approval Gate
- Group support for Approval Task
-
With this update, the Approval Task supports group approvers. Specify group approvers by using the `group:<groupName>` syntax in the `params` list. Any member of the group can approve or reject the task. An approval by any group member counts as a single approval, while a rejection by any member immediately counts as a single rejection and fails the task. This enhancement provides more flexible approval workflows by allowing teams to delegate approvals to groups as well as individuals.
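For illustration, a pipeline task sketch using a group approver; the `group:<groupName>` syntax in `params` comes from this release note, while the `taskRef` apiVersion/kind and the other parameter names are assumptions here, so verify them against the Manual Approval Gate documentation:

```yaml
# Fragment of a Pipeline spec; the approval gate runs as a custom task.
- name: wait-for-approval
  taskRef:
    apiVersion: openshift-pipelines.org/v1alpha1   # assumed custom task API
    kind: ApprovalTask
  params:
    - name: approvers
      value:
        - alice                 # individual approver (placeholder)
        - group:release-leads   # any member of this group can approve or reject
    - name: numberOfApprovalsRequired   # assumed parameter name
      value: "1"
```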
- ApprovalTask preserves messages from all group members
-
With this update, messages added by any member of a group when approving or rejecting an `ApprovalTask` are preserved in both the group input and the `status.approverResponse` array. This helps ensure that all context and comments provided by group members remain visible for audit and review purposes.
1.2.3. Breaking changes
User interface
- Pipelines console navigation requires explicit plugin enablement
- With this update, the legacy static console plugin is fully deprecated. After installing the Red Hat OpenShift Pipelines Operator, you must explicitly enable the console plugin to access the Pipelines section in the OpenShift Container Platform console. The previous fallback behavior, which displayed a limited Pipelines entry when the plugin was disabled, is removed, and the Pipelines navigation menu is only visible when the console plugin is active.
Pipelines as Code
- pipelinerun_status field in Repository custom resource is deprecated
-
With this update, the `pipelinerun_status` field of the `Repository` custom resource (CR) is deprecated and will be removed in a future release. Update any integrations or automation that reference this field to ensure compatibility with upcoming versions.
Tekton Chains
- Cosign v2.6.0 update affects keyless signing
- With this update, Tekton Chains uses Cosign version 2.6.0, which no longer accepts HS256 JWT tokens for keyless signing. If your private OIDC provider uses HS256 tokens for authentication, you must switch to RS256 before upgrading to this release. If you perform key-based signing, or use a private OIDC provider already configured with RS256, you are not affected by this change.
1.2.4. Known issues
User interface
- Duplicate Pipelines navigation entry in the OpenShift Console
During the transition from static to dynamic console plugins, the OpenShift Console might temporarily display two Pipelines entries in the navigation menu. This is a UI-only issue and does not affect pipeline execution or data.
To work around this problem, apply the corresponding updates to the OpenShift Console and the Red Hat OpenShift Pipelines Operator. Users running older Red Hat OpenShift Pipelines versions should coordinate upgrades with OpenShift Container Platform to prevent a temporary disappearance of the Pipelines menu.
1.2.5. Fixed issues
Pipelines
- PipelineRuns fail clearly on invalid apiVersion
Before this update, setting the `spec.tasks[].taskRef.apiVersion` field to an invalid value caused `PipelineRun` execution to fail silently. With this update, the `PipelineRun` displays a clear error when `taskRef.apiVersion` is invalid.
- PipelineRuns no longer fail on temporary TaskRef reconciliation errors
Before this update, `PipelineRuns` failed when `TaskRef` reconciliation encountered retryable errors. This led to unnecessary pipeline failures on transient issues. With this update, the controller logic ensures that `PipelineRuns` fail only on explicit validation errors, not retryable errors. As a result, the reliability of pipelines resolving external tasks is improved.
- Kubernetes-native sidecars no longer cause repeated init container restarts
Before this update, Kubernetes-native sidecars had issues with repeated init container restarts. With this update, signal handling is added to `SidecarLog` results. As a result, the sidecar gracefully handles signals, stabilizing the lifecycle and preventing unnecessary restarts of the init containers.
- Pods for timed-out TaskRuns are retained
Before this update, pods for timed-out `TaskRuns` were not retained when the `keep-pod-on-cancel` feature flag was enabled. With this update, the system ensures that pods are retained when the flag is enabled. As a result, debugging and analysis of timed-out tasks are consistently supported when the feature is active.
- StepAction status steps no longer display in incorrect order
Before this update, status steps displayed in an incorrect order when using `StepAction`. This made it difficult to interpret the chronological flow of actions. With this update, the system ensures that status steps display in the correct sequential order. As a result, the timeline and history of `StepAction` executions are accurately presented.
- TaskRuns no longer fail on arm64 clusters due to platform mismatch
Before this update, `arm64` Kubernetes clusters experienced `TaskRun` failures due to a platform variant mismatch in entrypoint lookup. This prevented successful execution on this architecture. With this update, the entrypoint logic correctly handles Linux platform variants. As a result, `TaskRuns` execute reliably on `arm64` clusters.
Operator
- Improved proxy webhook performance by replacing synchronous checks
Before this update, the proxy webhook could time out under high-concurrency workloads because it performed synchronous API calls to verify config map existence during pod admission. With this update, the webhook uses optional config map volumes that gracefully handle missing CA bundles without blocking pod creation. As a result, the defaulting webhook is less affected by etcd performance issues, the CA bundle config maps are always mounted as optional volumes, and the `SSL_CERT_DIR` environment variable is always set on `TaskRun` step containers.
- Corrected prioritySemaphore locking to prevent deadlocks and race conditions
Before this update, the `prioritySemaphore` implementation could cause deadlocks, race conditions, and panics due to unsynchronized data access. With this update, the locking logic is corrected and all shared data is properly synchronized, preventing these concurrency issues.
- Retained pods on TaskRun timeout when the keep-pod-on-cancel flag is true
Before this update, when the keep-pod-on-cancel setting was set to true, TaskRun pods were retained only if the TaskRun was canceled; when a TaskRun timed out, its pods were deleted. With this update, TaskRun pods are not deleted when their TaskRun times out while the keep-pod-on-cancel setting is set to true.
- Normalized TektonConfig container args to the -key=value format
Before this update, TektonConfig container args could contain duplicates and ["-key","value"] pairs. With this update, flags are normalized to the "-key=value" format and duplicates are removed, simplifying configuration.
- Default catalog name updates correctly during upgrade
Before this update, upgrading from versions 1.19.x to 1.20.0 incorrectly set the hub-catalog-name field in the Pipelines-as-Code config map to the deprecated Tekton Hub catalog name, tekton. As a consequence, this led to unexpected behavior when resolving catalog tasks. With this update, the default value points to the Artifact Hub catalog name. As a result, the upgrade process ensures consistent and expected behavior.
- nodeSelector and tolerations propagate correctly to Results pods
Before this update, the nodeSelector and tolerations settings configured in the Results section of the TektonConfig custom resource were not applied to Tekton Results pods. As a consequence, pod scheduling behavior did not reflect the user-configured preferences. With this update, the nodeSelector and tolerations configurations propagate correctly to all Tekton Results pods.
- Webhook validation no longer targets control-plane namespaces
Before this update, the logic for the tekton-operator-proxy-webhook parameter attempted to validate resources in control-plane namespaces, such as kube-* and openshift-*. This behavior caused unintended webhook certificate issues that affected unrelated system components. With this update, the webhook logic excludes all control-plane namespaces from admission validation. This improvement ensures better isolation between Tekton components and other cluster operators.
- Custom hub catalog configuration is preserved during conversion
Before this update, the OpenShift Pipelines Operator removed the catalog-{INDEX}-type field during conversion, which caused the loss of custom hub catalog types. With this update, the Operator preserves the catalog-{INDEX}-type field in its config map.
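As an illustrative sketch of such a custom catalog entry in the Pipelines-as-Code config map, following the catalog-{INDEX}-* key pattern (the catalog name, URL, and type value below are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code
  namespace: openshift-pipelines
data:
  catalog-1-id: custom
  catalog-1-name: my-catalog               # hypothetical catalog name
  catalog-1-url: https://hub.example.com   # hypothetical URL
  # The type key below is the one that was previously dropped during
  # conversion and is now preserved by the Operator.
  catalog-1-type: artifact
```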
User interface
- Fixed PipelineRun cancelling status in OpenShift Console after TaskRuns complete
Before this update, the OpenShift Container Platform console showed PipelineRuns in a cancelling state even after all associated TaskRuns completed, due to an internal UI inconsistency. With this update, the console's PipelineRun status mechanism is corrected. As a result, the PipelineRun status accurately reflects the state of completed TaskRuns.
- Fixed validation error preventing saving of Buildah tasks in Pipeline builder UI
Before this update, the Pipeline builder UI failed to save Buildah tasks due to a validation error with the default BUILD_ARGS parameter. With this update, the validation error is resolved. As a result, the Pipeline builder UI correctly saves Buildah tasks, even when using the default BUILD_ARGS parameter.
- Fixed incorrect sorting of PipelineRuns by duration
Before this update, sorting PipelineRuns by duration used a string sort instead of the actual duration values, causing misleading results. With this update, sorting correctly uses the duration in seconds. As a result, PipelineRuns are accurately sorted by actual duration.
- Fixed TaskRun sorting by duration in the OpenShift Container Platform console
Before this update, sorting TaskRun resources by duration in the OpenShift Container Platform console incorrectly used completion time instead of elapsed time. As a result, the list displayed durations in an incorrect order, such as a six-minute task appearing before a fifty-second task. With this update, the console correctly calculates the duration before sorting.
- Added strict task name matching for navigation URLs
Before this update, when two tasks had similar names, such as tkn and kn, the application returned the first partial string match. This issue caused incorrect navigation via the URL. With this update, task names are matched using strict equality checks to avoid partial matches and guarantee correct URL navigation.
- Fixed the Overview page displaying an error message
Previously, when the tekton-results-postgres or tekton-results-api pods were restarted, the Overview page displayed an "Oh No! Something Went Wrong" error. With this update, when pipeline results data is unavailable, an empty state is displayed instead of an error message. This provides a smoother and more consistent user experience.
- Fixed immediate YAML editor updates using the useEffect hook
Before this update, YAML editor changes were not applied until the component was remounted. With this update, changes are immediately reflected using the useEffect hook, improving the editing experience.
- Pagination fix for archived PipelineRun results
Before this update, switching the data_source filter to archived prevented additional PipelineRun results from loading when scrolling. The UI expected a nextPageToken field, while the API returned next_page_token, causing the on-scroll callback to never request the next page. As a result, pagination stopped after the initial page of results. With this update, the client correctly handles the next_page_token field, ensuring that pagination proceeds as expected and all archived PipelineRun data loads properly.
- Pipeline Builder no longer displays stale task parameter data across namespaces
Before this update, the Pipeline Builder displayed stale or incorrect task parameter data when multiple tasks with the same name existed in different namespaces. As a consequence, users configured pipelines with invalid parameters. With this update, the system performs stronger validation to detect task name conflicts, generates unique task names when duplicates appear, and cleans up task data during removal. As a result, the side panel shows accurate task information from the selected namespace.
- Time range and refresh interval selections now persist on the Pipeline Overview page
Before this update, the Time Range and Refresh Interval selections on the Pipeline Overview page did not persist across navigation or page refreshes, leading to an inconsistent user experience. With this update, these selections persist across navigation and page refreshes, and are reset to default values when switching namespaces.
- Pipeline Overview page reliability is improved for slow backend responses
Before this update, slow or intermittent backend responses caused the Pipeline Overview page to display stale or partially loaded data, or fail silently in slower clusters. With this update, additional safeguards prevent stale or incomplete data from being displayed, and API timeouts have been increased to help improve reliability.
- Ambiguous time-range filter label is corrected
Before this update, the time-range filter label Last weeks was ambiguous. With this update, the label is changed to Last week, and the associated API payload has been aligned to ensure more consistent behavior.
- Loading indicators added to Pipeline Overview cards
Before this update, the Pipeline Overview page did not display loading indicators on individual cards while data was being fetched, making it unclear whether data was still loading. With this update, loading spinners are displayed on each card during data retrieval, helping provide clearer visual feedback.
Pipelines as Code
- GitOps commands in GitLab MR discussion replies are recognized
Before this update, GitOps commands, such as /ok-to-test, posted as replies within GitLab merge request discussion threads were ignored; only commands in the top-level comment of a discussion were recognized. With this update, the GitLab provider honors commands posted in replies, improving command recognition and workflow reliability.
- CI status correctly shows Pending for unauthorized Bitbucket PRs
Before this update, when a pull request was opened by an unauthorized user on Bitbucket Data Center, the CI status was incorrectly shown as Running instead of Pending. With this update, the status correctly shows Pending while awaiting administrator approval.
- install info and namespace binding corrected
Before this update, the opc pac CLI incorrectly bound the --namespace/-n flag to the kubeconfig, and the install info command did not show repositories for a single custom resource. With this update, the CLI binds the flag correctly and install info displays repositories properly.
- Tag push events no longer affected by the skip-push-event-for-pr-commits setting
Before this update, push events triggered by Git tag pushes were incorrectly skipped when the skip-push-event-for-pr-commits setting was enabled. With this update, tag push events are no longer affected by this setting and proceed as expected.
- Unauthorized /ok-to-test approvals are correctly invalidated on new commits
Before this update, when remember-ok-to-test was set to false, a single /ok-to-test approval on a merge request (MR) from an unauthorized user was incorrectly remembered for all subsequent commits pushed to that MR in GitLab. With this update, permissions are re-evaluated on every new commit, correctly halting continuous integration (CI) and requiring a new /ok-to-test approval for each change, as intended.
- GitLab API compatibility fix for canceled PipelineRuns
Before this update, the GitLab provider did not properly recognize canceled PipelineRuns due to a spelling mismatch between "cancelled" and "canceled", the spelling expected by the GitLab API. This issue caused GitLab merge requests to merge automatically even though the associated PipelineRun was canceled. With this update, an explicit mapping from "cancelled" to "canceled" ensures GitLab API compatibility, so canceled PipelineRuns are correctly reported and merge requests remain open as expected.
- Placeholder variable evaluation no longer fails when data sources are missing
Before this update, placeholder variable evaluation failed when either the event payload or the headers source was missing. This failure occurred because the evaluation logic attempted to evaluate body.*, headers.*, and files.* placeholders together. With this update, the evaluation logic processes these placeholders independently. As a result, each placeholder works if its corresponding data is present.
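For example, a Pipelines-as-Code PipelineRun template can reference these sources independently; with this fix, a body.* placeholder resolves even when no files.* data is present for the event. A minimal sketch (the payload field path is hypothetical and depends on the provider's webhook payload):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: pr-check-
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
spec:
  params:
    # Resolved from the event payload; evaluated independently of
    # headers.* and files.* placeholders.
    - name: pr_title
      value: "{{ body.pull_request.title }}"
```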
- GitHub App installation ID retrieval is optimized
Before this update, retrieving GitHub App installation IDs involved unnecessary API listings, impacting performance and increasing API calls. With this update, the system optimizes the retrieval process by removing these listings and directly fetching the installation using the repository URL, with fallback to organization installation.
- Incorrect commit IDs for Bitbucket merge commits are fixed
Before this update, a change caused the revision variable to fetch incorrect commit IDs for Bitbucket merge commits. With this update, the problematic change is reverted. As a result, the expected behavior is restored, and the correct commit IDs are fetched for Bitbucket merge commits.
- Cancellation of running PipelineRuns is correctly scoped
Before this update, cancellations triggered by PR close events could accidentally cancel push-triggered PipelineRuns. With this update, the cancellation logic correctly targets only PipelineRuns that were triggered by the pull request (PR).
- GitLab merge request comments post correctly from forks
Before this update, GitLab merge request comments were not posted correctly from forks because an incorrect project ID was used. With this update, the system uses the correct TargetProjectID for merge request comments. As a result, comments are posted successfully even when they originate from a fork.
- Duplicate secret creation is handled gracefully
Before this update, the system would fail when encountering an existing secret during creation (duplicate secret error). With this update, the secret creation logic is modified to gracefully reuse the existing secret instead of failing.
- GitHub check runs and patching logic is corrected
Before this update, Pipelines as Code created check runs incorrectly by always using the hardcoded state in_progress, omitting key output fields, and attempting to patch PipelineRun resources even after validation failed. With this update, the system uses the proper status and conclusion from statusOpts, adds the Title, Summary, and Text output fields to check runs, and prevents patch attempts when the PipelineRun name is invalid.
- Failed commit status is correctly updated after validation fixes in GitLab
Before this update, when a PipelineRun execution failed due to validation errors in GitLab, the failed commit status persisted even after the user fixed the validation error. This caused the merge request to appear failed and blocked auto-merge. With this update, the system ensures that the commit status is correctly updated after validation fixes.
- Commit status updates correctly after validation fixes in GitLab
Before this update, PipelineRun objects that failed validation in GitLab displayed incorrect names in the pipeline status, causing the failed commit status to persist even after the errors were resolved. With this update, the commit status updates correctly after validation issues are fixed. As a result, merge requests reflect the correct status and the auto-merge functionality is enabled.
Tekton Ecosystem
- Multiple image copy enabled in skopeo-copy task using url.txt
Before this update, the skopeo-copy task failed to copy multiple images when source and destination image URLs were not provided, because it required non-empty image URLs and bypassed the url.txt file method. With this update, the skopeo-copy task parameters are optional, allowing the use of the url.txt file for multiple image copies regardless of the source and destination URLs. As a result, the task supports copying multiple images using url.txt.
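With the parameters left unset, a url.txt file supplied through the task's workspace can drive the copy instead. A hypothetical sketch with one source and destination pair per line (the registries are examples; confirm the exact file format against the skopeo-copy task documentation):

```text
docker://registry.example.com/team/app:v1 docker://quay.example.com/team/app:v1
docker://registry.example.com/team/db:v2 docker://quay.example.com/team/db:v2
```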
Tekton Results
- Description for PipelineRun deletion metric is accurate
Before this update, the metrics exposed by Tekton Results had an inaccurate description for the metric tracking the duration of PipelineRun deletion. This caused confusion and reduced the reliability of metrics reporting. With this update, the description for the prDeleteDuration metric is corrected to accurately reflect the time between PipelineRun completion and final deletion.
- Race condition causing database constraint violations is fixed
Before this update, a race condition existed in the Tekton Results watcher where PipelineRun and TaskRun handlers could concurrently attempt to create the same Result record. This led to PostgreSQL unique-constraint violations and ambiguous Unknown gRPC errors. With this update, proper handling of SQLSTATE duplicate key errors is introduced by refetching already-created records, and a PostgreSQL error-to-gRPC translator is added.
- Annotations management updated for reliability
Before this update, Tekton Results used multi-step logic and merge patches for managing annotation updates, which could lead to unreliable and conflict-prone updates to Kubernetes objects. With this update, Tekton Results refactors its annotation handling to use a single Patch operation and switches from merge patches to Server-Side Apply (SSA). These changes are internal and do not introduce user-facing changes, but they provide more reliable and conflict-aware updates.
- The defaultRetention field takes precedence over the deprecated maxRetention field
Before this update, the configuration behavior was inconsistent if the deprecated maxRetention field was carried over during an upgrade. With this update, the defaultRetention field correctly takes precedence over the deprecated field. As a result, the retention policy remains consistent with the TektonConfig CR specification, maintaining backward compatibility during the deprecation period.
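As an illustrative sketch of the precedence rule only (the field placement under the Results section and the duration values are assumptions; check the TektonConfig CR schema for the exact layout):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  result:
    # Preferred field: wins when both are present.
    defaultRetention: 30d
    # Deprecated field: ignored when defaultRetention is set, kept
    # only for backward compatibility during the deprecation period.
    maxRetention: 14d
```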
Tekton Hub
- The system no longer downloads an outdated version of the git-clone task
Before this update, the system downloaded an outdated version of the git-clone task (0.9.0) instead of the latest version (0.10). As a result, end users installed the older 0.9 version, causing inconsistencies and missing improvements available in 0.10. With this update, the git-clone task is updated to version 0.10.
Tekton Chains
- Anti-affinity rule added to tekton-chains-controller spec
Before this update, the tekton-chains-controller spec did not include an anti-affinity rule, causing pods to be unevenly distributed across nodes and potentially leading to resource contention. With this update, an anti-affinity rule is added to the tekton-chains-controller spec, improving pod scheduling and ensuring better resource distribution.
- TaskRun finalizer no longer remains on resources
Before this update, an old TaskRun finalizer could remain on resources, which prevented the proper cleanup of completed tasks. With this update, the unnecessary finalizer is removed. As a result, TaskRun resources are cleaned up as expected.
- End-to-End testing suite stability is restored
Before this update, a build error caused failures in the End-to-End (E2E) testing suite. With this update, the build error is resolved.
Pruner
- Namespace-level pruner configuration updates take effect immediately after upgrade
Before this update, if the event-based pruner was enabled before an upgrade, the Operator reverted the pruner configuration values to their defaults after the upgrade. With this update, the pruner configuration values are retained after an upgrade.
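A minimal sketch of a pruner configuration in the TektonConfig custom resource; with this fix, values like these survive an Operator upgrade (the keep count and schedule below are examples):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    # Keep the five most recent runs of each resource type and
    # prune once per day.
    keep: 5
    schedule: "0 3 * * *"
    resources:
      - pipelinerun
      - taskrun
```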
1.2.6. Deprecated features
- pipelinerun_status field in Repository custom resource is deprecated
With this update, the pipelinerun_status field available in the Repository custom resource is deprecated and will be removed in a future release.
- OpenCensus is deprecated
OpenCensus is supported in this release but is deprecated and might be removed in a future release. It is anticipated that a future version of Red Hat OpenShift Pipelines will migrate from OpenCensus to OpenTelemetry for observability and tracing. This migration might require updates to existing PromQL queries.