
Chapter 1. Red Hat OpenShift Pipelines release notes


Note

For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.

Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.

Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:

  • Standard Kubernetes-native pipeline definitions (CRDs).
  • Serverless pipelines with no CI server management overhead.
  • Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
  • Portability across any Kubernetes distribution.
  • Powerful CLI for interacting with pipelines.
  • Integrated user experience with the OpenShift Container Platform web console, up to OpenShift Container Platform version 4.20.

For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.

1.1. Compatibility and support matrix

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.

In the table, features are marked with the following statuses:

TP

Technology Preview

GA

General Availability

Table 1.1. Compatibility and support matrix

| Red Hat OpenShift Pipelines Version (Operator) | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | Pruner | Cache | OpenShift Version | Support Status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.21 | 1.6.x | 0.34.x | 0.43.x | 0.26.x (GA) | 1.23.x (TP) | 0.39.x (GA) | 0.17.x (GA) | 0.7.x (TP) | 0.3.x (GA) | 0.3.x (GA) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |
| 1.20 | 1.3.x | 0.33.x | 0.42.x | 0.25.x (GA) | 1.22.x (TP) | 0.37.x (GA) | 0.16.x (GA) | 0.6.x (TP) | 0.2.x (TP) | 0.2.x (TP) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |
| 1.19 | 1.0.x | 0.32.x | 0.41.x | 0.25.x (GA) | 1.21.x (TP) | 0.35.x (GA) | 0.15.x (GA) | 0.6.x (TP) | — | 0.2.x (TP) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20 | GA |

Note

The OpenShift console plugin for OpenShift Pipelines follows the same version as the OpenShift Pipelines Operator.

For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.

1.2. Release notes for Red Hat OpenShift Pipelines 1.21

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.21 is available on OpenShift Container Platform 4.14 and later supported versions.

For more information about the supported versions of OpenShift Container Platform, see Life Cycle Dates.

1.2.1. New features and enhancements

In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.21:

Pipelines

Read-only root filesystems enabled for OpenShift Pipelines containers
With this update, all OpenShift Pipelines containers, including controllers and webhooks, are configured with the readOnlyRootFilesystem parameter set to true. This change follows security best practices for Kubernetes-based workloads. By enforcing a read-only root filesystem, OpenShift Pipelines improves its security posture by helping to prevent unauthorized modifications to the container runtime environment.
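For example, you can check the setting on a running controller. This is a minimal verification sketch; it assumes the default openshift-pipelines target namespace and the tekton-pipelines-controller deployment name, which might differ in your installation:

$ oc -n openshift-pipelines get deployment tekton-pipelines-controller \
    -o jsonpath='{.spec.template.spec.containers[*].securityContext.readOnlyRootFilesystem}'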
Override individual TaskRun timeouts in a PipelineRun

With this update, you can override the timeout for individual TaskRun objects within a PipelineRun by using the spec.taskRunSpecs[].timeout field.

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-with-timeout-override
spec:
  pipelineRef:
    name: my-pipeline
  timeouts:
    pipeline: "2h"
  taskRunSpecs:
    - pipelineTaskName: build-task
      timeout: "30m"
    - pipelineTaskName: test-task
      timeout: "1h"

This allows finer-grained control over task execution duration without affecting the overall PipelineRun timeout.

Resolver caching for bundle, Git, and cluster resolvers

With this update, resolver caching is supported for bundle, Git, and cluster resolvers. This helps reduce redundant fetches, minimize external API calls, and improve pipeline execution reliability, especially when external services impose rate limits or are temporarily unavailable.

  • Global settings: You can configure caching using your TektonConfig custom resource (CR), where you can set the cache size and adjust the time to live (TTL) value without restarting controllers:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        options:
          configMaps:
            resolver-cache-config:
              data:
                max-size: "1000"
                ttl: "5m"
    #...
    • max-size: defines the maximum number of cached entries. The default value is "1000".
    • ttl: defines the time to live of the cache entry. The default value is "5m".
  • Per-resolver defaults: You can set the default caching mode for specific resolvers using the bundleresolver-config, git-resolver-config, or cluster-resolver-config config maps:

    #...
    data:
      cache: "auto"
      #...
    • The available modes are auto (cache only immutable references), always (cache everything), and never (disable caching). The default value is auto.
  • TaskRun or PipelineRun overrides: You can override the default caching mode for individual runs by adding the cache parameter to the TaskRun or PipelineRun specification:

    #...
    params:
      - name: cache
        value: "always"
    #...

Resolver caching helps improve reliability, reduce latency for frequently accessed resources, and decrease load on external services such as GitHub and OCI registries. Cache hits, misses, and timestamps are added to resource annotations for resolved resources.

Array values can be resolved in when expressions

With this update, array value resolution is enabled in the input attribute of when expressions.

The following Tekton PipelineRun custom resource configures a parameter array.

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipelinerun-with-array-when
spec:
  params:
    - name: branches
      type: array
      value:
        - main
        - develop
        - release
  pipelineSpec:
    params:
      - name: branches
        type: array
    tasks:
      - name: deploy-if-valid-branch
        when:
          - input: "main"
            operator: in
            values: ["$(params.branches[*])"]
        taskSpec:
          steps:
            - name: deploy
              image: alpine
              script: |-
                echo "Deploying..."

The following example demonstrates how a task can produce an array result that is consumed by a subsequent task.

# ...
    tasks:
      - name: get-environments
        taskSpec:
          results:
            - name: envs
              type: array
          steps:
            - name: produce-array
              image: bash
              script: |-
                echo -n '["dev", "staging", "prod"]' | tee $(results.envs.path)

      - name: deploy-to-staging
        when:
          - input: "staging"
            operator: in
            values: ["$(tasks.get-environments.results.envs[*])"]
        taskRef:
          name: deploy
Support for display names added to steps

With this update, a displayName field is added to Step objects. The following example configures a Task resource with displayName values set:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-and-test
spec:
  params:
    - name: app-name
      type: string
  steps:
    - name: build
      displayName: "Build the application"
      image: golang:1.21
      script: |
        go build ./...

    - name: test
      displayName: "Run unit tests for $(params.app-name)" # Supports param substitution
      image: golang:1.21
      script: |
        go test ./...

    - name: lint
      displayName: "Lint source code"
      image: golangci/golangci-lint
      script: |
        golangci-lint run
  # ...

Operator

A new parameter for controlling pipeline service account permissions

Before this update, the pipeline service account automatically received the edit ClusterRole within its namespace, following legacy RBAC behavior. With this update, you can use the new legacyPipelineRbac parameter to control permissions. Set it to false to prevent the pipeline service account from receiving the edit ClusterRole, enforcing more restricted permissions by default. The default value is true.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespace: openshift-pipelines
  params:
  - name: createRbacResource
    value: "true"
  - name: legacyPipelineRbac
    value: "true"
Important

Existing role bindings are not automatically removed from namespaces with the pipeline service account. You must remove them manually when changing this parameter on existing deployments.

Route is automatically created for Tekton Results API endpoint
With this update, the Tekton Results component automatically creates an OpenShift Route CRD for its API endpoint. You can optionally configure a custom host and path for the Route. This helps ensure the Tekton Results API is accessible externally without requiring additional user configuration.
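For example, after the Operator reconciles Tekton Results, you can list the generated Route. This assumes the default openshift-pipelines target namespace; the Route name might vary between installations:

$ oc get route -n openshift-pipelines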

User interface

Group support for Approval Tasks
With this update, Approval Tasks support group approvers. You can specify a group using the group:<groupName> syntax in the params list. Any member of the group can approve or reject the task. Approvals by any group member count as a single approval, while a rejection by any member immediately counts as a single rejection and fails the task. Group members also receive notifications about the task, just like individual approvers.
Pipeline Overview page retains filter selections
With this update, the Pipeline Overview page persists user selections for the Namespace, Time Range, and Refresh Interval filters. These selections are stored in the application state and URL query parameters. This helps ensure a more consistent experience when navigating away, returning to the page, or refreshing the browser. The filters reset only when the user switches namespaces.
Time-range filter label updated for clarity
With this update, the time-range filter previously labeled "Last weeks" is updated to "Last week". This change resolves customer confusion regarding the intended single-week time range and helps ensure consistency between the UI and the Tekton Results API.

Pipelines as Code

Improved performance for GitLab project access control checks
With this update, Pipelines as Code caches the results of GitLab Project Access Control List (ACL) membership queries. This optimization reduces repeated API calls to the GitLab API, improving performance and efficiency during permission checks.
Configure the number of lines in error log snippets

With this update, you can configure how many lines appear in error log snippets using the new error-log-snippet-number-of-lines setting:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          error-log-snippet-number-of-lines: "3"
          #...

Log snippets are automatically truncated to 65,000 characters, preventing failures when posting check-run updates to the GitHub API.

GitLab commit status set on the source and target project by default
With this update, Pipelines as Code attempts to set the commit status on both the source and target upstream projects in GitLab. If permission issues cause both attempts to fail, the system falls back to posting a comment instead. This ensures that the commit status is communicated in the most relevant location for merge requests.
Improved error message for GitLab private repository access failures

With this update, Pipelines as Code proactively checks whether the configured GitLab token has the required read access to the source repository when processing merge requests from private forks. If the token lacks the necessary read_repository scope, Pipelines as Code fails early with the following error:

an error occurred: failed to access GitLab source repository ID REPOSITORY_ID: please ensure token has 'read_repository' scope on that repository

This helps ensure that permission issues are easier to identify and prevents pipelines from failing unexpectedly later in the process.

Trigger and cancel PipelineRuns for Git tags

With this update, you can trigger PipelineRuns by using comments on commits associated with a Git tag. This provides flexible, version-specific control of CI/CD workflows.

To trigger a PipelineRun, add a comment on the tagged commit using the format /test <pipeline-name> tag:<tag-name>, for example:

/test xyz-pipeline-run tag:v1.0.0

To cancel a PipelineRun, use the format /cancel <pipeline-name> tag:<tag-name>, for example:

/cancel xyz-pipeline-run tag:v1.0.0

This feature is supported for GitHub and GitLab, enabling teams to trigger or cancel PipelineRuns tied to specific tagged versions.

SHA validation added to /ok-to-test commands to prevent race conditions
With this update, a new setting, require-ok-to-test-sha, is introduced in Pipelines as Code to enforce commit SHA validation when using the /ok-to-test comment command on GitHub pull requests. This feature mitigates a critical TOCTOU (Time-of-Check to Time-of-Use) race condition vulnerability, specific to the GitHub provider, where an attacker could execute a pipeline on an unapproved SHA. When the setting is enabled, users must specify the exact commit SHA, for example, /ok-to-test <sha>, for approval. As a result, this ties the approval directly to a specific commit, preventing pipeline execution on any subsequent malicious force-pushed SHA.
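The following sketch enables the setting by using the same TektonConfig pattern that other Pipelines as Code settings in this document use; the exact placement is an assumption to verify against the Pipelines as Code settings reference:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          require-ok-to-test-sha: "true"
          #...

With the setting enabled, an approver comments, for example, /ok-to-test <sha> with the full SHA of the commit they reviewed.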
Incoming webhook targets support glob patterns

With this update, the targets field in the Pipelines as Code Repository custom resource (CR) is enhanced to support glob patterns for incoming webhook events. In addition to exact string matching, you can utilize patterns to match multiple branch names with a single rule, helping simplify configuration and management for complex repository structures.

The following shell-style glob patterns are supported:

  • *: Matches any sequence of characters, for example, feature/* matches feature/login or feature/api.
  • ?: Matches exactly one single character, for example, v? matches v1 or v2.
  • [abc]: Matches a single character that is part of the defined set, for example, [A-Z]* matches any branch starting with an uppercase letter.
  • [0-9]: Matches a single digit, for example, v[0-9]*.[0-9]* matches v1.2 or v10.5.
  • {a,b,c}: Matches any of the alternatives separated by a comma, for example, {dev,staging}/* matches dev/test or staging/test.

    If multiple incoming webhooks match the same branch, the first matching webhook defined in the YAML order is used. To ensure expected behavior, place more specific webhooks before general catch-all webhooks in your configuration.
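As a sketch, a Repository CR might declare glob-based incoming webhook targets as follows. The repository URL, secret name, and patterns are illustrative assumptions; the documented behavior is the glob support in the targets field:

apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
  name: example-repo
  namespace: example-ns
spec:
  url: "https://github.com/example-org/example-repo"
  incoming:
    - type: webhook-url
      secret:
        name: incoming-webhook-secret
      targets:
        - "release-[0-9]*"
        - "{dev,staging}/*"
        - "main"

Because the first matching webhook wins, the more specific patterns are listed before the general ones.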

Incoming webhook requests support a new namespace parameter
An optional namespace parameter is added to Pipelines as Code incoming webhook requests. When specified along with the existing repository parameter, it uniquely identifies the targeted Repository custom resource (CR). This ensures correct routing even when multiple repositories share the same name across the cluster. If multiple Repository CRs exist with the same name and the namespace parameter is omitted, Pipelines as Code returns a 400 status code, requiring the user to provide the namespace for unambiguous identification.
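For example, an incoming webhook request might pass the namespace alongside the existing parameters. The controller route, secret value, and resource names below are placeholder assumptions:

$ curl -X POST \
  "https://<pipelines-as-code-controller-route>/incoming?secret=<incoming-secret>&repository=example-repo&namespace=example-ns&branch=main&pipelinerun=example-pipelinerun"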
Default secret key added for incoming webhooks
With this update, when no secret key is explicitly specified in the Repository custom resource incoming webhook configuration, the system defaults to using "secret" as the key name when retrieving the secret value from the Secret resource.

Tekton Results

Fine-grained retention policies

With this update, Tekton Results supports fine-grained retention policies. You can set different retention periods for PipelineRun and TaskRun results based on namespace, labels, annotations, and status. The first matching policy is applied; if none match, the defaultRetention period is used.

Configure policies in the tekton-results-config-results-retention-policy config map using the policies key. Each policy includes a selector and a retention period:

result:
    disabled: false
    is_external_db: false
    options:
      configMaps:
        tekton-results-config-results-retention-policy:
          data:
            defaultRetention: "30d"
            policies:
              - name: "retain-critical-failures-long-term"
                selector:
                  matchNamespaces:
                    - "production"
                    - "prod-east"
                  matchLabels:
                    "criticality": ["high"]
                  matchStatuses:
                    - "Failed"
                retention: "180d"
              - name: "retain-annotated-for-debug"
                selector:
                  matchAnnotations:
                    "debug/retain": ["true"]
                retention: "14d"
              - name: "default-production-policy"
                selector:
                  matchNamespaces:
                    - "production"
                    - "prod-east"
                retention: "60d"
              - name: "short-term-ci-retention"
                selector:
                  matchNamespaces:
                    - "ci"
                retention: "10h"
            runAt: "0 2 * * *"

This config map example:

  • Runs the pruning job daily at 2:00 AM, as specified by the runAt cron schedule.
  • Keeps failed Results in production or prod-east with criticality: high for 180 days.
  • Keeps Results with annotation debug/retain: "true" for 14 days.
  • Keeps other Results in production or prod-east for 60 days.
  • Keeps Results in the ci namespace for 10 hours.
  • Keeps all other Results for the default retention period (30 days).
PostgreSQL support updated to version 17.5
With this update, Tekton Results supports PostgreSQL version 17.5.
New metrics for runs not stored in the database

With this update, Tekton Results adds the runs_not_stored_count metric, emitted by the default watcher container. This metric tracks the number of PipelineRun and TaskRun instances that are deleted before they can be persisted in the database. Supported tags include:

  • kind – the type of run (PipelineRun or TaskRun)
  • namespace – the namespace where the run was created

For example:

watcher_runs_not_stored_count{kind="PipelineRun",namespace="default"} 5
Metrics for run storage latency

With this update, Tekton Results adds the run_storage_latency_seconds metric, emitted by the default watcher container. This metric measures the time between run completion and its successful storage in the database.

Supported tags include:

  • kind – the type of run (PipelineRun or TaskRun)
  • namespace – the namespace where the run was created

    watcher_run_storage_latency_seconds{kind="PipelineRun",namespace="default"} 0.5

    This metric is emitted only when a run transitions from completed to stored, helping ensure accurate measurement of storage latency without being skewed by multiple reconciliations.

CLI configuration persists across namespaces
Before this update, the API configuration for the Tekton Results CLI was fetched from the current namespace context. As a consequence, users had to re-authenticate or reconfigure after switching namespaces. With this update, the configuration persists across namespace switches, removing the need to run the opc results config set command repeatedly.
Default database migration to PostgreSQL version 15

The PostgreSQL image used by the default Tekton Results deployment is upgraded from version 13 to version 15, addressing the upcoming end of life (EOL) for PostgreSQL 13. This process implements an automated migration, which uses the slower yet more reliable data copy mechanism from the original PostgreSQL image.

Important

If you are using the default PostgreSQL deployment, ensure you have backed up your data and that the underlying Persistent Volume Claim (PVC) has more than 50% free space before you start the OpenShift Pipelines upgrade. If you are using an external database with Tekton Results, you are not affected by this change.
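As a rough pre-upgrade check, you can inspect the free space on the volume that backs the default PostgreSQL pod. The namespace, pod lookup, and command are assumptions to adapt to your deployment:

$ oc -n openshift-pipelines get pods | grep postgres
$ oc -n openshift-pipelines exec <postgres-pod-name> -- df -h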

Tekton Cache

Tekton Cache is generally available
With this update, Tekton Cache is generally available (GA) and is fully supported for production use. Tekton Cache was previously available as a Technology Preview (TP) feature.
Tekton Cache binaries available for public download
With this update, the Tekton Cache product binaries are available for download without authentication. This accessibility enables customers to use the Red Hat binaries for their custom StepAction configurations.
Improved support for Docker credentials
Before this update, Tekton Cache required Docker secrets to include a config.json key. With this update, Docker secrets without a config.json key are supported. The DOCKER_CONFIG parameter can point to any location containing either a config.json file or a .dockerconfigjson file, improving flexibility for private registry authentication.

Tekton Triggers

GitHub interceptor enforces SHA-256 signature validation
Before this update, the GitHub interceptor supported both SHA-1 (X-Hub-Signature) and SHA-256 (X-Hub-Signature-256) signatures for webhook validation. With this update, the GitHub interceptor enforces a stricter security posture and only accepts SHA-256 signatures via the X-Hub-Signature-256 header, dropping support for SHA-1. As a result, standard GitHub webhooks remain unaffected, but any custom webhook implementations must update their HMAC signature generation from SHA-1 to SHA-256 to avoid validation errors.
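For custom webhook senders, the SHA-256 signature is a standard HMAC of the request body, hex encoded and prefixed with sha256=. A minimal shell sketch, assuming the payload is stored in payload.json and the secret matches the one referenced by your interceptor:

$ secret='<webhook-secret>'
$ signature=$(openssl dgst -sha256 -hmac "$secret" -hex < payload.json | sed 's/^.* //')
$ echo "X-Hub-Signature-256: sha256=$signature"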

Tekton Hub

Default database migration to PostgreSQL version 15
The PostgreSQL database version used by the Tekton Hub is migrated from version 13 to version 15 to address the upcoming end of life (EOL) for PostgreSQL 13. This upgrade ensures continued stability and support for the Tekton Hub. Additionally, the process implements an automated transition from version 13 to version 15 for existing deployments.

Tekton Chains

Flexible provenance and signing configuration
With this update, you can choose to disable image signing while still enabling provenance generation and attestation signing. This enhancement helps provide more flexibility in managing security artifacts within your CI/CD pipelines.
New option to disable OCI image signing
With this update, a new configuration option, artifacts.oci.disable-signing, is added to the Tekton Chains config map. This option enables you to skip OCI image signing performed by Tekton Chains while still maintaining provenance generation and attestation signing. This feature is intended for users who prefer to sign images using an external workflow, such as cosign sign, but still require Tekton Chains to maintain supply-chain integrity for metadata. By default, this option is set to false, ensuring no change in behavior for existing configurations unless explicitly enabled.
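For example, the option appears as a data entry in the Tekton Chains config map; setting it to "true" skips OCI image signing while provenance generation and attestation signing continue. This is a minimal fragment; the config map name and location depend on your installation:

#...
data:
  artifacts.oci.disable-signing: "true"
#...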

CLI

Support for rerunning resolver-based PipelineRuns
With this update, the tkn CLI introduces a new --resolvertype flag for the tkn p start command. This flag allows you to specify the resolver type, such as git, http, hub, cluster, bundle, or remote, when re-running a resolver-based PipelineRun. You can reference an existing PipelineRun name to re-run it using the specified resolver type.
New --resolvertype to support rerunning resolver-based PipelineRuns
With this update, the tkn p start --last command introduces the --resolvertype flag. This flag enables users to specify the resolver type, such as git, hub, or bundle, when re-running a previous resolver-based PipelineRun. Additionally, the help text for the command has been updated to use the correct pronoun.
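As a usage sketch, the following command re-runs the most recent resolver-based PipelineRun of a pipeline with the Git resolver; the pipeline name and resolver type are placeholders:

$ tkn p start my-pipeline --last --resolvertype git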

Pruner

Event-driven pruner is generally available

With this update, the event-driven pruner tektonpruner is generally available (GA) and is fully supported as a pruning mechanism for OpenShift Pipelines with centralized and hierarchical configuration.

While existing pruning mechanisms, such as the default job-based pruner, Pipelines as Code keep-max-run, and Tekton Results based retention, continue to function, users currently relying on legacy pruning approaches are encouraged to adopt the event-driven pruner tektonpruner to help ensure smoother performance, more predictable cleanup, and reduced operational overhead.

The following enhancements are present in this release:

  • Namespace-level pruner configuration: With this update, the event-driven pruner supports custom pruning policies at the namespace level. You can define custom pruning policies, such as time to live and history limits, that override global defaults by creating a tekton-pruner-namespace-spec config map in your namespace.
  • Selector-based pruning configuration: With this update, the event-driven pruner supports selector-based resource matching with the matchLabels and matchAnnotations selectors. When you specify both matchLabels and matchAnnotations, the selectors are combined with AND logic, and name matching takes absolute precedence regardless of selector presence. Selector-based resource matching is supported only in the namespace-level tekton-pruner-namespace-spec config maps, not in the global TektonConfig CR configuration.
  • Cluster-wide maximum limits added for the event-driven pruner configuration: With this update, the event-driven pruner enforces cluster-wide maximum limits for configuration fields when global limits are not specified. If global limits are set, they take precedence. This validation helps ensure that namespace-specific pruner settings do not exceed the defined maximum values, helping prevent potential resource overuse. The maximum TTL in seconds is 2592000 and the maximum history limit is 100.
  • Pruner config map validation: With this update, the event-driven pruner validates config maps at apply-time using a Kubernetes admission webhook. Invalid configurations, such as unsupported formats, negative values, or namespace settings that exceed global limits, are rejected with clear error messages instead of failing silently. For validation to apply, pruner config maps must include the following labels:

    labels:
      app.kubernetes.io/part-of: tekton-pruner
      pruner.tekton.dev/config-type: namespace

1.2.2. Technology preview features

Pipelines as Code

A new command for evaluating CEL expressions (Technology Preview)

A new tkn CLI command, tkn pac cel, allows administrators to interactively evaluate Common Expression Language (CEL) expressions against webhook payloads and headers. You can use the following syntax:

$ tkn pac cel -b <body.json> -H <headers.txt>
  • -b or --body: Specify a path to a JSON body file. This is a webhook payload.
  • -H or --headers: Specify a path to a headers file. This can be a plain text file, a JSON file, or a gosmee script.
  • -p or --provider: Specify the provider. This can be github, gitlab, bitbucket-cloud, bitbucket-datacenter, gitea, or auto for automatic detection from the payload.

Key capabilities include:

  • Interactive mode: Provides a prompt in the terminal to type CEL expressions, with tab completion for variables and payload fields.
  • Variable access using:

    • direct variables, such as event or target_branch,
    • webhook payload fields, for example, body.action,
    • HTTP headers, for example, headers['X-GitHub-Event'],
    • Pipelines as Code parameters, for example, pac.revision.
  • Debugging: Quickly test and debug CEL expressions used in PipelineRun resource configurations against real webhook data.
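As a usage sketch, assuming you captured a webhook payload in body.json and its headers in headers.txt, you could start the interactive prompt and evaluate an expression such as the following; the expression is only an illustration:

$ tkn pac cel -b body.json -H headers.txt -p github
event == "pull_request" && target_branch == "main"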

Manual Approval Gate

Group support for Approval Task
With this update, the Approval Task supports group approvers. Specify group approvers using the group:<groupName> syntax in the params list. Any member of the group can approve or reject the task. An approval by any group member counts as a single approval, while a rejection by any member immediately counts as a single rejection and fails the task. This enhancement provides more flexible approval workflows by allowing teams to delegate approvals to groups as well as individuals.
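A sketch of referencing the approval custom task from a pipeline with a group approver; the apiVersion, parameter names, and group name are assumptions to adapt to your ApprovalTask setup:

# ...
    tasks:
      - name: wait-for-approval
        taskRef:
          apiVersion: openshift-pipelines.org/v1alpha1
          kind: ApprovalTask
        params:
          - name: approvers
            value:
              - alice
              - group:release-managers
          - name: numberOfApprovalsRequired
            value: "2"
# ...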
ApprovalTask preserves messages from all group members
With this update, messages added by any member of a group when approving or rejecting an ApprovalTask are preserved in both the group input and the status.approverResponse array. This helps ensure that all context and comments provided by group members remain visible for audit and review purposes.

1.2.3. Breaking changes

User interface

Pipelines console navigation requires explicit plugin enablement
With this update, the legacy static console plugin is fully deprecated. After installing the Red Hat OpenShift Pipelines Operator, you must explicitly enable the console plugin to access the Pipelines section in the OpenShift Container Platform console. The previous fallback behavior, which displayed a limited Pipelines entry when the plugin was disabled, is removed, and the Pipelines navigation menu is only visible when the console plugin is active.
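One way to enable the plugin from the CLI is to add it to the Console operator configuration. The plugin name pipelines-console-plugin is an assumption to verify against your installed Operator; you can also enable the plugin from the Operator details page in the web console:

$ oc patch consoles.operator.openshift.io cluster --type=json \
  -p '[{"op": "add", "path": "/spec/plugins/-", "value": "pipelines-console-plugin"}]'

If spec.plugins does not exist yet on the Console resource, create it with a merge patch instead of appending to it.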

Pipelines as Code

pipelinerun_status field in Repository custom resource is deprecated
With this update, the pipelinerun_status field of the Repository custom resource (CR) is deprecated and will be removed in a future release. Update any integrations or automation that reference this field to ensure compatibility with upcoming versions.

Tekton Chains

Cosign v2.6.0 update affects keyless signing
With this update, Tekton Chains uses Cosign version 2.6.0, which no longer accepts HS256 JWT tokens for keyless signing. If your private OIDC provider uses HS256 tokens for authentication, you must switch to RS256 before upgrading to this release. If you perform key-based signing, or use a private OIDC provider already configured with RS256, you are not affected by this change.

1.2.4. Known issues

User interface

Duplicate Pipelines navigation entry in the OpenShift Console

During the transition from static to dynamic console plugins, the OpenShift Console might temporarily display two Pipelines entries in the navigation menu. This is a UI-only issue and does not affect pipeline execution or data.

To work around this problem, apply the corresponding updates to the OpenShift Console and the Red Hat OpenShift Pipelines Operator. Users running older Red Hat OpenShift Pipelines versions should coordinate upgrades with OpenShift Container Platform to prevent a temporary disappearance of the Pipelines menu.

SRVKP-10006

1.2.5. Fixed issues

Pipelines

PipelineRuns fail clearly on invalid apiVersion

Before this update, setting the spec.tasks[].taskRef.apiVersion field to an invalid value caused PipelineRun execution to fail silently. With this update, PipelineRun displays a clear error when taskRef.apiVersion is invalid.

SRVKP-8514

PipelineRuns no longer fail on temporary TaskRef reconciliation errors

Before this update, PipelineRuns failed when TaskRef reconciliation encountered retryable errors. This led to unnecessary pipeline failures on transient issues. With this update, the controller logic ensures PipelineRuns only fail on explicit validation errors, not retryable errors. As a result, the reliability of pipelines resolving external tasks is improved.

SRVKP-9135

Kubernetes-native sidecars no longer cause repeated init container restarts

Before this update, Kubernetes-native sidecars had issues with repeated init container restarts. With this update, signal handling is added to SidecarLog results. As a result, the sidecar gracefully handles signals, stabilizing the lifecycle and preventing unnecessary restarts of the init containers.

SRVKP-9135

Pods for timed-out TaskRuns are retained

Before this update, pods for timed-out TaskRuns were not retained when the keep-pod-on-cancel feature flag was enabled. With this update, the system ensures pods are retained when the flag is enabled. As a result, debugging and analysis of timed-out tasks are consistently supported when the feature is active.

SRVKP-9135

StepAction status steps no longer display in incorrect order

Before this update, status steps displayed in an incorrect order when using StepAction. This made it difficult to interpret the chronological flow of actions. With this update, the system ensures status steps display in the correct sequential order. As a result, the timeline and history of StepAction executions are accurately presented.

SRVKP-9135

TaskRuns no longer fail on arm64 clusters due to platform mismatch

Before this update, arm64 Kubernetes clusters experienced TaskRun failures due to platform variant mismatch in entrypoint lookup. This prevented successful execution on this architecture. With this update, the entrypoint logic correctly handles Linux platform variants. As a result, TaskRuns execute reliably on arm64 clusters.

SRVKP-9135

Operator

Improved proxy webhook performance by replacing synchronous checks

Before this update, the proxy webhook could time out under high-concurrency workloads because it performed synchronous API calls to verify config map existence during pod admission. With this update, the webhook uses optional config map volumes that gracefully handle missing CA bundles without blocking pod creation. As a result, the defaulting webhook is less affected by etcd performance issues, the CA bundle config maps are always mounted as optional volumes, and the SSL_CERT_DIR environment variable is always set on TaskRun step containers.

SRVKP-8377

Corrected prioritySemaphore locking to prevent deadlocks and race conditions

Before this update, the prioritySemaphore implementation could cause deadlocks, race conditions, and panics due to unsynchronized data access. With this update, the locking logic is corrected and all shared data is properly synchronized, preventing these concurrency issues.

SRVKP-8198

Retained pods on TaskRun timeout when the keep-pod-on-cancel flag is true

Before this update, when the keep-pod-on-cancel setting was set to true, TaskRun pods were retained only if the TaskRun was canceled. When a TaskRun timed out, its pods were deleted. With this update, TaskRun pods are not deleted if their TaskRun times out when the keep-pod-on-cancel setting is set to true.

SRVKP-9176

Normalized TektonConfig container args to -key=value format

Before this update, TektonConfig container args could contain duplicates and ["-key","value"] pairs. With this update, flags are normalized to the "-key=value" format and duplicates are removed, simplifying configuration.

SRVKP-7889

Default catalog name updates correctly during upgrade

Before this update, upgrading from versions 1.19.x to 1.20.0 incorrectly set the hub-catalog-name field in the Pipelines-as-Code config map to the deprecated Tekton Hub catalog name, tekton. As a consequence, this led to unexpected behavior when resolving catalog tasks. With this update, the default value points to the Artifact Hub catalog name. As a result, the upgrade process ensures consistent and expected behavior.

SRVKP-8930

nodeSelector and tolerations propagate correctly to Results pods

Before this update, the nodeSelector and tolerations settings configured in the Tekton Config under the Results section were not applied to Tekton Results pods. As a consequence, pod scheduling behavior did not reflect the user-configured preferences. With this update, the nodeSelector and tolerations configurations from the Tekton Config propagate correctly to all Tekton Results pods.

SRVKP-8922

Webhook validation no longer targets control-plane namespaces

Before this update, the logic for the tekton-operator-proxy-webhook parameter attempted to validate resources in control-plane namespaces, such as kube-* and openshift-*. This behavior caused unintended webhook certificate issues that affected unrelated system components. With this update, the webhook logic excludes all control-plane namespaces from admission validation. This improvement ensures better isolation between Tekton components and other cluster operators.

SRVKP-8891

Custom hub catalog configuration is preserved during conversion

Before this update, the OpenShift Pipelines Operator removed the catalog-{INDEX}-type field during conversion, which caused the loss of custom hub catalog types. With this update, the Operator preserves the catalog-{INDEX}-type field in its config map.

SRVKP-9472

User interface

Fixed PipelineRun cancelling status in OpenShift Console after TaskRuns complete

Before this update, the OpenShift Container Platform Console showed PipelineRuns in a cancelling state even after all associated TaskRuns completed, due to an internal UI inconsistency. With this update, the OpenShift Console PipelineRun status mechanism is corrected. As a result, PipelineRun status accurately reflects the state of completed TaskRuns.

SRVKP-6960

Fixed validation error preventing saving of Buildah tasks in Pipeline builder UI

Before this update, the Pipeline builder UI failed to save Buildah tasks due to a validation error with the default BUILD_ARGS parameter. With this update, the validation error is resolved. As a result, the Pipeline builder UI correctly saves Buildah tasks, even when using the default BUILD_ARGS parameter.

SRVKP-8571

Fixed incorrect sorting of PipelineRuns by duration

Before this update, sorting PipelineRuns by duration used a string sort instead of actual duration values, causing misleading results. With this update, sorting correctly uses the duration in seconds. As a result, PipelineRuns are accurately sorted by actual duration.

SRVKP-6211

Fixed TaskRun sorting by duration in the OpenShift Container Platform console

Before this update, sorting TaskRun resources by duration in the OpenShift Container Platform console incorrectly used completion time instead of elapsed time. As a result, the list displayed durations in an incorrect chronological order, such as a six-minute task appearing before a fifty-second task. With this update, the console correctly calculates the duration before sorting.

SRVKP-8835

Added strict task name matching for navigation URLs

Before this update, when two tasks had similar names, such as tkn and kn, the application returned the first partial string match. This issue caused incorrect navigation via the URL. With this update, the task names are matched using strict equality checks to avoid partial matches and guarantee correct URL navigation.

SRVKP-8976

Fixed the Overview page displaying an error message

Previously, when the tekton-results-postgres or tekton-results-api pods were restarted, the Overview page displayed an "Oh No! Something Went Wrong" error. With this update, when pipeline results data is unavailable, an empty state is displayed instead of an error message. This provides a smoother and more consistent user experience.

SRVKP-8076

Fixed immediate YAML editor updates using useEffect hook

Before this update, YAML editor changes were not applied until the component was remounted. With this update, changes are immediately reflected using the useEffect hook, improving the editing experience.

SRVKP-8205

Pagination fix for archived PipelineRun results

Before this update, switching the data_source filter to archived prevented additional PipelineRun results from loading when scrolling. The UI expected a nextPageToken field, while the API returned next_page_token, causing the on-scroll callback to never request the next page. As a result, pagination stopped after the initial page of results. With this update, the client correctly handles the next_page_token field, ensuring that pagination proceeds as expected and all archived PipelineRun data loads properly.

SRVKP-9397

Pipeline Builder no longer displays stale task parameter data across namespaces

Before this update, the Pipeline Builder displayed stale or incorrect task parameter data when multiple tasks with the same name existed in different namespaces. As a consequence, users configured pipelines with invalid parameters. With this update, the system performs stronger validation to detect task name conflicts, generates unique task names when duplicates appear, and cleans up task data during removal. As a result, the side panel shows accurate task information from the selected namespace.

SRVKP-8998


Time range and refresh interval selections now persist on the Pipeline Overview page

Before this update, the Time Range and Refresh Interval selections on the Pipeline Overview page did not persist across navigation or page refreshes, leading to an inconsistent user experience. With this update, these selections persist across navigation and page refreshes, and are reset to default values when switching namespaces.

SRVKP-9607

Pipeline Overview page reliability is improved for slow backend responses

Before this update, slow or intermittent backend responses caused the Pipeline Overview page to display stale or partially loaded data, or fail silently in slower clusters. With this update, additional safeguards prevent stale or incomplete data from being displayed, and API timeouts have been increased to help improve reliability.

SRVKP-9427

Ambiguous time-range filter label is corrected

Before this update, the time-range filter label Last weeks was ambiguous. With this update, the label is changed to Last week, and the associated API payload has been aligned to ensure more consistent behavior.

SRVKP-9428

Loading indicators added to Pipeline Overview cards

Before this update, the Pipeline Overview page did not display loading indicators on individual cards while data was being fetched, making it unclear whether data was still loading. With this update, loading spinners are displayed on each card during data retrieval, helping provide clearer visual feedback.

SRVKP-9436

Pipelines as Code

GitOps commands in GitLab MR discussion replies are recognized

Before this update, GitOps commands, such as /ok-to-test, posted as replies within GitLab merge request discussion threads were ignored; only commands in the top-level comment of a discussion were recognized. With this update, the GitLab provider honors commands posted in replies, improving command recognition and workflow reliability.

SRVKP-8324

CI status correctly shows Pending for unauthorized Bitbucket PRs

Before this update, when a pull request was opened by an unauthorized user on Bitbucket Data Center, the CI status was incorrectly shown as Running instead of Pending. With this update, the status correctly shows Pending while awaiting administrator approval.

SRVKP-8269

The install info command and namespace binding are corrected

Before this update, the opc-pac CLI incorrectly bound --namespace/-n to kubeconfig, and the install info command did not show repositories for a single CR. With this update, the CLI binds correctly and install info displays repositories properly.

SRVKP-7152

Tag push events no longer affected by skip-push-event-for-pr-commits setting

Before this update, push events triggered by Git tag pushes were incorrectly skipped when the skip-push-event-for-pr-commits setting was enabled. With this update, tag push events are no longer affected by this setting and proceed as expected.

SRVKP-9111

Unauthorized /ok-to-test approvals are correctly invalidated on new commits

Before this update, when remember-ok-to-test was set to false, a single /ok-to-test approval on a merge request (MR) from an unauthorized user was incorrectly remembered for all subsequent commits pushed to that MR in GitLab. With this update, permissions are re-evaluated on every new commit, correctly halting Continuous Integration (CI) and requiring a new /ok-to-test for each change as intended.

SRVKP-9200

GitLab API compatibility fix for canceled PipelineRuns

Before this update, an issue in the GitLab provider system did not properly recognize canceled PipelineRuns due to a spelling mismatch between "cancelled" and "canceled", which is expected by the GitLab API. This issue caused GitLab merge requests to automatically merge even though the associated PipelineRun was canceled. With this update, this issue is fixed. An explicit mapping from "cancelled" to "canceled" for GitLab API compatibility helps ensure that cancelled PipelineRuns are correctly reported and merge requests remain open as expected.

SRVKP-9050

Placeholder variable evaluation no longer fails when data sources are missing

Before this update, placeholder variable evaluation failed when either the event payload or headers sources were missing. This failure occurred because the evaluation logic attempted to evaluate body.*, headers.*, and files.* placeholders together. With this update, the evaluation logic processes these placeholders independently. As a result, each placeholder works if its corresponding data is present.

SRVKP-8984

GitHub App installation ID retrieval is optimized

Before this update, retrieving GitHub App installation IDs involved unnecessary API listings, impacting performance and increasing API calls. With this update, the system optimizes the retrieval process by removing these listings and directly fetching the installation using the repository URL, with fallback to organization installation.

SRVKP-9141

Incorrect commit IDs for Bitbucket merge commits are fixed

Before this update, a change caused the revision variable to fetch incorrect commit IDs for Bitbucket merge commits. With this update, the problematic change is reverted. As a result, the expected behavior is restored, and the correct commit IDs are fetched for Bitbucket merge commits.

SRVKP-9141

Cancellation of running PipelineRuns is correctly scoped

Before this update, cancellations triggered by PR-close events could accidentally cancel push-triggered PipelineRuns. With this update, the cancellation logic correctly targets only PipelineRuns that were triggered by the pull request (PR).

SRVKP-9141

GitLab merge request comments post correctly from forks

Before this update, GitLab merge request comments were not posted correctly from forks due to the use of an incorrect Project ID. With this update, the system is fixed to use the correct TargetProjectID for merge request comments. As a result, comments are posted successfully even when originating from a fork.

SRVKP-9141

Duplicate secret creation is handled gracefully

Before this update, the system would fail when encountering an existing secret during creation (duplicate secret error). With this update, the secret creation logic is modified to gracefully reuse the existing secret instead of failing.

SRVKP-9141

GitHub check runs and patching logic is corrected

Before this update, Pipelines as Code created check runs incorrectly by always using the hardcoded state 'in_progress', omitting key output fields, and attempting to patch PipelineRun resources even after validation failed. With this update, the system uses the proper status and conclusion from statusOpts, adds the Title, Summary, and Text output fields to check runs, and prevents patch attempts when the PipelineRun name is invalid.

SRVKP-9141

Failed commit status is correctly updated after validation fixes in GitLab

Before this update, when a PipelineRun execution failed due to validation errors in GitLab, the failed commit status persisted even after the user fixed the validation error. This caused the merge request to appear failed and blocked auto-merge. With this update, the system ensures that the commit status is correctly updated after validation fixes.

SRVKP-9141

Commit status updates correctly after validation fixes in GitLab

Before this update, PipelineRun objects that failed validation in GitLab displayed incorrect names in the pipeline status, causing the failed commit status to persist even after the errors were resolved. With this update, the commit status updates correctly after validation issues are fixed. As a result, merge requests reflect the correct status and the auto-merge functionality is enabled.

SRVKP-9044

Tekton Ecosystem

Multiple image copy enabled in skopeo-copy task using url.txt

Before this update, the skopeo-copy task failed to copy multiple images when source and destination image URLs were not provided, as it required non-empty image URLs and bypassed the url.txt file method. With this update, the skopeo-copy task parameters are optional, allowing the use of the url.txt file for multiple image copies regardless of source and destination URLs. As a result, the task supports copying multiple images using url.txt.

SRVKP-6491

Tekton Results

Description for PipelineRun deletion metric is accurate

Before this update, the metrics exposed by Tekton Results had an inaccurate description for the metric tracking the duration of PipelineRun deletion. This caused confusion and reduced the reliability of metrics reporting. With this update, the description for the prDeleteDuration metric is corrected to accurately reflect the time between PipelineRun completion and final deletion.

SRVKP-9138

Race condition causing database constraint violations is fixed

Before this update, a race condition existed in the Tekton Results watcher where PipelineRun and TaskRun handlers could concurrently attempt to create the same Result record. This led to PostgreSQL unique-constraint violations and ambiguous Unknown gRPC errors. With this update, proper handling of SQLSTATE duplicate key errors is introduced by refetching already-created records, and a PostgreSQL error-to-gRPC translator is added.

SRVKP-9138

Annotations management updated for reliability

Before this update, Tekton Results used a multi-step logic and merge patches for managing annotation updates, which could lead to unreliable and conflict-prone updates to Kubernetes objects. With this update, Tekton Results refactors its annotation handling to use a single Patch operation and switches from merge patches to Server-Side Apply (SSA). These changes are internal and do not introduce user-facing changes but provide more reliable and conflict-aware updates.

SRVKP-9138

The defaultRetention field takes precedence over the deprecated maxRetention field

Before this update, the configuration behavior was inconsistent if the deprecated maxRetention field was carried over during an upgrade. With this update, the defaultRetention field correctly takes precedence over the deprecated field. As a result, the retention policy remains consistent with the TektonConfig CR specification, maintaining backward compatibility during the deprecation period.

SRVKP-9425

Tekton Hub

The system no longer downloads an outdated version of the git-clone task

Before this update, the system downloaded an outdated version of the git-clone task (0.9.0) instead of the latest version (0.10). As a result, end users installed the older 0.9 version, causing inconsistencies and missing improvements available in 0.10. With this update, the git-clone task is updated to version 0.10.

SRVKP-8568

Tekton Chains

Anti-affinity rule added to tekton-chains-controller spec

Before this update, the tekton-chains-controller spec did not include an anti-affinity rule, causing pods to be unevenly distributed across nodes and potentially leading to resource contention. With this update, an anti-affinity rule is added to the tekton-chains-controller spec, improving pod scheduling and ensuring better resource distribution.

SRVKP-8892

TaskRun finalizer no longer remains on resources

Before this update, an old TaskRun finalizer could remain on resources, which prevented the proper cleanup of completed tasks. With this update, the unnecessary finalizer is removed. As a result, TaskRun resources are cleaned up as expected.

SRVKP-9137

End-to-End testing suite stability is restored

Before this update, a build error caused failures in the End-to-End (E2E) testing suite. With this update, the build error is resolved.

SRVKP-9137

Pruner

Namespace-level pruner configuration updates take effect immediately after upgrade

Before this update, if the event-based pruner was enabled before an upgrade, the Operator reverted the pruner configuration values to the default values after the upgrade. With this update, the pruner configuration values are retained after an upgrade.

SRVKP-10008

1.2.6. Deprecated features

pipelinerun_status field in Repository custom resource is deprecated

With this update, the pipelinerun_status field available in the Repository custom resource is deprecated and will be removed in a future release.

SRVKP-8663

OpenCensus is deprecated

OpenCensus is supported in this release but is deprecated and might be removed in a future release. It is anticipated that a future version of Red Hat OpenShift Pipelines will migrate from OpenCensus to OpenTelemetry for observability and tracing. This migration might require updates to existing PromQL queries.

SRVKP-8534
