Chapter 1. Red Hat OpenShift Pipelines release notes


Note

For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.

Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.

Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:

  • Standard Kubernetes-native pipeline definitions (CRDs).
  • Serverless pipelines with no CI server management overhead.
  • Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
  • Portability across any Kubernetes distribution.
  • Powerful CLI for interacting with pipelines.
  • Integrated user experience with the Developer perspective of the OpenShift Container Platform web console, up to OpenShift Container Platform version 4.19.

For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.

1.1. Compatibility and support matrix

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.

In the table, features are marked with the following statuses:

  • TP: Technology Preview
  • GA: General Availability

Table 1.1. Compatibility and support matrix

| Red Hat OpenShift Pipelines (Operator) Version | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | OpenShift Version | Support Status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.19 | 1.0.x | 0.32.x | 0.41.x | 0.25.x (GA) | 1.21.x (TP) | 0.35.x (GA) | 0.15.x (GA) | 0.6.x (TP) | 4.15, 4.16, 4.17, 4.18, 4.19 | GA |
| 1.18 | 0.68.x | 0.31.x | 0.40.x | 0.24.x (GA) | 1.20.x (TP) | 0.33.x (GA) | 0.14.x (GA) | 0.5.x (TP) | 4.15, 4.16, 4.17, 4.18 | GA |
| 1.17 | 0.65.x | 0.30.x | 0.39.x | 0.23.x (GA) | 1.19.x (TP) | 0.29.x (GA) | 0.13.x (TP) | 0.4.x (TP) | 4.15, 4.16, 4.17 | GA |
| 1.16 | 0.62.x | 0.29.x | 0.38.x | 0.22.x (GA) | 1.18.x (TP) | 0.28.x (GA) | 0.12.x (TP) | 0.3.x (TP) | 4.15, 4.16, 4.17 | GA |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.

1.2. Release notes for Red Hat OpenShift Pipelines 1.19

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.19 is available on OpenShift Container Platform 4.15 and later versions.

1.2.1. New features

In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.19:

1.2.1.1. Pipelines

  • With this update, you can now specify custom securityContext settings in the EventListener resource. When you enable a custom securityContext, user-defined values override the default configuration. Otherwise, the default securityContext settings are applied automatically.

    Example securityContext configuration in the EventListener resource

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: listener-securitycontext
    spec:
      serviceAccountName: pipeline
      resources:
        kubernetesResource:
          spec:
            template:
              spec:
                securityContext:
                  runAsNonRoot: true
                containers:
                  - resources:
                      requests:
                        memory: "64Mi"
                        cpu: "250m"
                      limits:
                        memory: "128Mi"
                        cpu: "500m"
                    securityContext:
                      readOnlyRootFilesystem: true
      triggers:
        - name: foo-trig
          bindings:
            - ref: pipeline-binding
            - ref: message-binding
          template:
            ref: pipeline-template
    # ...

1.2.1.2. Tekton Results

  • With this update, you can configure custom database credentials for Tekton Results by using the TektonResult custom resource (CR). This eliminates the need to rely on the default PostgreSQL secrets that use default usernames and passwords.

    Example for adding custom database credentials for Tekton Results

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonResult
    metadata:
      name: result
    spec:
      db_secret_name: # optional: custom database secret name
      db_secret_user_key: # optional
      db_secret_password_key: # optional
    # ...

  • With this update, the Tekton Results API supports response field filtering, or partial responses, to reduce payload size and improve network efficiency. You can specify which fields to include in API responses. This particularly benefits List operations by preventing the retrieval of entire objects, optimizing response latency and I/O performance.
  • With this update, you can configure retry timings for OCI bundle lookups, such as the initial retry delay, backoff factor, and maximum retry duration, in the bundleresolver-config config map. This helps reduce load on busy registries by preventing aggressive retry behavior.

    Example for configuring retry timings

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bundleresolver-config
      namespace: tekton-pipelines-resolvers
      labels:
        app.kubernetes.io/component: resolvers
        app.kubernetes.io/instance: default
        app.kubernetes.io/part-of: tekton-pipelines
    data:
      # The initial duration for a backoff.
      backoff-duration: "500ms"
      # The factor by which the sleep duration increases every step
      backoff-factor: "2.5"
      # A random amount of additional sleep between 0 and duration * jitter.
      backoff-jitter: "0.1"
      # The number of backoffs to attempt.
      backoff-steps: "3"
      # The maximum backoff duration. If reached, remaining steps are zeroed.
      backoff-cap: "10s"
      # The default layer kind in the bundle image.
      default-kind: "task"

  • With this update, the Git resolver can now use personal access tokens to authenticate with GitHub or GitLab, avoiding rate limits associated with anonymous Git clone API usage. To enable this feature, add a gitToken parameter to your Git resolver parameters. Tekton automatically injects the token as an HTTP header during resolution to reduce the risk of quota-related errors during remote resolution.

    Example for configuring the gitToken parameter

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: git-clone-demo-pr
    spec:
      pipelineRef:
        resolver: git
        params:
        - name: url
          value: https://github.com/tektoncd/catalog.git
        - name: revision
          value: main
        - name: pathInRepo
          value: pipeline/simple/0.1/simple.yaml
        - name: gitToken
          value: "secret-with-token"
        - name: gitTokenKey  # optional, defaults to "token"
          value: "token"
      params:
      - name: name
        value: Ranni

  • With this update, the default log level for SQL in Tekton Results has been set to warn. You can override this setting by specifying the SQL_LOG_LEVEL environment variable in the Tekton Results deployment.

    Example for enabling the SQL_LOG_LEVEL environment variable

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      result:
        options:
          deployments:
            tekton-results-api:
              spec:
                template:
                  spec:
                    containers:
                    - name: api
                      env:
                      - name: SQL_LOG_LEVEL
                        value: debug
    # ...

  • With this update, the Tekton Results watcher retries reconciliation before removing the finalizer, until the storedDeadline duration is reached. This reduces the risk of missing TaskRun or PipelineRun storage.
  • With this update, Tekton Results users can retrieve logs from Splunk that were forwarded by OpenShift Logging. To enable this functionality, set the following environment variables in the Tekton Results API deployment:

    • SPLUNK_SEARCH_TOKEN
    • LOGGING_PLUGIN_QUERY_PARAMS
    • LOGGING_PLUGIN_API_URL

      Example for retrieving forwarded logs by OpenShift Logging

      apiVersion: operator.tekton.dev/v1alpha1
      kind: TektonConfig
      metadata:
        name: config
      spec:
        result:
          options:
            deployments:
              tekton-results-api:
                spec:
                  template:
                    spec:
                      containers:
                      - name: api
                        env:
                        - name: SPLUNK_SEARCH_TOKEN
                          value: <splunk_token>
                        - name: LOGGING_PLUGIN_QUERY_PARAMS
                          value: <query_params>
                        - name: LOGGING_PLUGIN_API_URL
                          value: <splunk_endpoint>:<port>
      # ...

      Note

      The LOGGING_PLUGIN_API_URL variable must be configured with the Splunk endpoint and port number.
  • With this update, the Tekton Results watcher uses StatefulSet ordinals to improve high availability and workload distribution as an alternative to the leader election mechanism.

    Example for enabling StatefulSet ordinals for the Tekton Results watcher

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
    # ...
      result:
        performance:
          disable-ha: false
          buckets: 4
          replicas: 4
          statefulset-ordinals: true
    # ...

    Important

    Using StatefulSet ordinals for high availability is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.2.1.3. Pipelines as Code

  • With this update, Pipelines as Code no longer creates a Pending status on GitHub pull requests when an unauthorized bot user attempts to trigger a PipelineRun. Instead of generating a blocking status check, such requests are now silently disallowed.
  • With this update, a new pipelines_as_code_git_provider_api_request_count metric tracks the number of API calls made by Pipelines as Code to Git providers, such as GitHub, GitLab, and Gitea. The metric also helps monitor API rate limit usage per Git provider, namespace, event type, and repository.
  • With this update, URLs in the Repository CR are now validated during creation to ensure they are properly formatted and use valid schemes, such as http or https. This enhancement helps prevent configuration errors and runtime errors.
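    As a sketch of the validated field, the following Repository CR uses a well-formed https URL in the spec.url field; the resource name and URL are illustrative:

    apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
    kind: Repository
    metadata:
      name: my-repo
    spec:
      url: "https://github.com/example/repo"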
  • With this update, PipelineRun status comments now render correctly in markdown on Bitbucket Data Center and Bitbucket Cloud, instead of appearing as raw strings in the pull request UI.

1.2.1.4. Operator

  • With this update, you can generate a cosign key pair by setting the generateSigningSecret field in the TektonConfig custom resource (CR) to true. The Red Hat OpenShift Pipelines Operator then generates a cosign key pair, consisting of a cosign.key private key and a cosign.pub public key.

    Example of enabling cosign key pairs

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      chain:
        disabled: false
        generateSigningSecret: true
    # ...

  • With this update, the /ok-to-test memory feature is disabled by default. This precaution helps mitigate the risk of malicious code execution within testing environments.
  • With this update, dynamic variables can be expanded from remote pipeline definitions. This enhancement improves pipeline composition capabilities.
  • With this update, the Git resolver included in the remote resolution feature now uses the native git binary instead of the pure Go go-git library. This change reduces memory consumption and improves clone performance, especially for large repositories. This enhancement uses shallow-clone flags, for example --depth 1, to reduce resource usage. No changes to pipeline manifests are required.
  • With this update, the onError field in Red Hat OpenShift Pipelines supports Tekton parameter substitution. Previously, the onError field only accepted the literal values stopAndFail and continue. You can use the $(params.strategy) substitution token to dynamically determine failure handling behavior at runtime. This allows a single Pipeline definition to adapt its onError policy based on parameters, context, or results.
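    As a sketch of this behavior, the following TaskRun selects the failure handling at run time; the parameter name strategy and the step contents are illustrative, not part of the product documentation:

    apiVersion: tekton.dev/v1
    kind: TaskRun
    metadata:
      name: onerror-demo
    spec:
      params:
        - name: strategy
          value: continue
      taskSpec:
        params:
          - name: strategy
            type: string
        steps:
          - name: may-fail
            onError: $(params.strategy)  # resolves to "continue" or "stopAndFail" at runtime
            image: registry.access.redhat.com/ubi9/ubi-minimal
            script: |
              exit 1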
  • With this release, StepAction definitions are updated from alpha to stable and are now enabled by default. The enable-step-actions flag used in the earlier versions is no longer used and will be removed in a future release.
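    As a minimal sketch, a StepAction and a Task step that references it might look as follows; the resource names and image are illustrative:

    apiVersion: tekton.dev/v1
    kind: StepAction
    metadata:
      name: hello-stepaction
    spec:
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        echo "Hello from a StepAction"
    ---
    apiVersion: tekton.dev/v1
    kind: Task
    metadata:
      name: use-stepaction
    spec:
      steps:
        - name: hello
          ref:
            name: hello-stepaction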
  • With this update, the Pipeline scheduler now correctly evaluates result references in fan-out/fan-in patterns. Previously, such pipelines could fail unpredictably when matrix tasks relied on result refs.
  • With this update, the remember-ok-to-test value in the TektonConfig CR is set to false by default to reduce the risk of running untrusted code in test environments.
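    To opt back in to the previous behavior, you can set the value explicitly. The placement under the Pipelines as Code settings shown here is a sketch that follows the other TektonConfig examples in this document:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      platforms:
        openshift:
          pipelinesAsCode:
            settings:
              remember-ok-to-test: "true"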

1.2.1.5. Tekton Cache

  • With this update, parameter naming conventions across the StepAction feature are unified for consistency. The casing of cache-fetch and cache-upload step actions is now consistent with that of git-clone.
  • With this update, the tekton-caches tool can push to and retrieve from Google Cloud Storage (GCS) buckets, in addition to existing OCI registry support. To enable this, set the cache backend to a gs://bucket/path URI.
  • With this update, you can store cache archives in any S3 compatible bucket, including on-premises solutions such as MinIO or cloud providers such as AWS. To use this feature, specify a URL, such as s3://my-bucket/cache as the cache backend.
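    For example, based on the cache-upload step action parameters used elsewhere in this document, switching the cache backend is a matter of changing the URI scheme in the TARGET parameter; the bucket names below are illustrative:

    # OCI registry backend
    - name: TARGET
      value: oci://$(params.registry)/cache-go:{{hash}}
    # S3-compatible bucket backend
    - name: TARGET
      value: s3://my-bucket/cache-go:{{hash}}
    # Google Cloud Storage bucket backend
    - name: TARGET
      value: gs://my-bucket/cache-go:{{hash}}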
  • With this update, cache archives are compressed using Gzip before being uploaded. This reduces object storage costs and speeds up data transfer, especially for large caches such as Go modules.
  • With this update, restored caches default to 0777 permissions, ensuring that executable scripts and other permission-sensitive files function correctly. Previously, restored files defaulted to 0600 permissions, which could prevent scripts from running as expected.
  • With this update, running on Google Kubernetes Engine (GKE) with Workload Identity Federation (WIF) no longer requires embedding key files in tasks. Instead, you can now mount projected volume tokens, eliminating the need for long-lived credentials and improving security.
  • With this update, the code paths for GCS and S3 backends are unified using the gocloud.dev library. This abstraction simplifies support of additional storage providers, such as Azure Blob Storage or local filesystems.
  • With this update, the fetch command is improved to automatically create the destination folder if it does not exist in a new workspace. Previously, the command would fail in such cases, requiring you to create a directory manually.
  • With this update, registry authentication is no longer limited to the /tekton/home/.docker/config.json default path. You can now mount any Docker configuration file and specify its location by using the dockerConfig parameter in your Task resource. However, the custom location for DOCKER_CONFIG must include a valid config.json file.

    Example for enabling the dockerConfig parameter in a task

    apiVersion: tekton.dev/v1
    kind: Task
    metadata:
      name: build-task
    spec:
      workspaces:
        - name: source
        - name: cred
      params:
        - name: cachePatterns
          default: $(params.cachePatterns)
      steps:
        - name: cache-fetch
          ref:
            resolver: cluster
            params:
              - name: name
                value: cache-fetch
              - name: namespace
                value: openshift-pipelines
              - name: kind
                value: stepaction
          params:
            - name: PATTERNS
              value: $(params.cachePatterns)
            - name: SOURCE
              value: oci://$(params.registry)/cache-go:{{hash}}
            - name: CACHE_PATH
              value: $(workspaces.source.path)/cache
            - name: WORKING_DIR
              value: $(workspaces.source.path)/repo
            - name: DOCKER_CONFIG
              value: $(workspaces.cred.path)/
        - name: cache-upload
          ref:
            resolver: cluster
            params:
              - name: name
                value: cache-upload
              - name: namespace
                value: openshift-pipelines
              - name: kind
                value: stepaction
          params:
            - name: PATTERNS
              value: $(params.cachePatterns)
            - name: TARGET
              value: oci://$(params.registry)/cache-go:{{hash}}
            - name: CACHE_PATH
              value: $(workspaces.source.path)/cache
            - name: WORKING_DIR
              value: $(workspaces.source.path)/repo
            - name: DOCKER_CONFIG
              value: $(workspaces.cred.path)/
    # ...

1.2.1.6. Tekton Chains

  • With this update, the Tekton Chains controller uses StatefulSet ordinals to improve high availability and workload distribution as an alternative to the leader election mechanism.

    Example of enabling StatefulSet ordinals for the Chains controller

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonChains
    metadata:
      name: chain
    spec:
      chain:
        performance:
          disable-ha: false
          buckets: 4
          replicas: 4
          statefulset-ordinals: true

    Important

    Using StatefulSet ordinals for high availability is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.2.1.7. Pipelines as Code

  • With this release, Pipelines as Code introduces the pipelines_as_code_git_provider_api_request_count metric. This metric tracks the number of API requests made by Pipelines as Code to a Git provider in response to an event.
  • With this release, the TektonConfig custom resource provides support for two new fields to enable the cancel-in-progress feature for pipeline runs in Pipelines as Code globally:

    • enable-cancel-in-progress-on-pull-requests
    • enable-cancel-in-progress-on-push

      When set to true, these fields automatically cancel any in-progress pipeline run triggered by pull request or push events when there is a new commit. By default, both these fields are set to false.

      Note

      If a PipelineRun resource includes the pipelinesascode.tekton.dev/cancel-in-progress annotation, it overrides the corresponding TektonConfig setting.

      Example of enabling auto-cancel on pull request and push events in the TektonConfig CR

      apiVersion: operator.tekton.dev/v1alpha1
      kind: TektonConfig
      metadata:
        name: config
      spec:
        platforms:
          openshift:
            pipelinesAsCode:
              # ...
              settings:
                # ...
                enable-cancel-in-progress-on-pull-requests: "true"
                enable-cancel-in-progress-on-push: "true"
              # ...

  • With this release, Pipelines as Code supports the git_tag dynamic variable. This variable is used during tag push events and reflects the value of the Git tag. For example, if the tag v1.0 is pushed to the repository, the git_tag variable holds the value v1.0.

    Example configuration for git_tag

    ---
    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      name: pull-pr-3
      annotations:
        pipelinesascode.tekton.dev/on-event: "[push]"
        pipelinesascode.tekton.dev/on-target-branch: "[refs/tags/*]"
    spec:
      params:
        - name: tag
          value: "{{ git_tag }}"
      pipelineSpec:
        tasks:
          # ...

  • With this release, the TektonConfig CR includes the skip-push-event-for-pr-commits field. When enabled, Pipelines as Code does not trigger pipeline runs for push events if the commit SHA is included in an open pull request. This prevents duplicate pipeline runs for the same commit. By default, this field is set to true.

    Example configuration for skip-push-event-for-pr-commits in TektonConfig

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      platforms:
        openshift:
          pipelinesAsCode:
            # ...
            settings:
              # ...
              skip-push-event-for-pr-commits: "true"
              # ...

  • With this release, an OpenAPI schema is integrated into Pipelines as Code for the Repository CR. This schema enables IDE autocompletion when writing Repository CRs and provides field descriptions through the oc explain command.
  • With this update, when you set the on-cel-expression annotation together with the on-event or on-target-branch annotations in a repository, the on-cel-expression annotation takes precedence and the on-event and on-target-branch annotations are ignored. To alert users, a warning log message and a Kubernetes event are generated to indicate this behavior.
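    For example, in the following PipelineRun metadata, only the CEL expression is evaluated and the other two annotations are ignored; the resource name and expression are illustrative:

    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      name: cel-precedence-demo
      annotations:
        pipelinesascode.tekton.dev/on-cel-expression: |
          event == "pull_request" && target_branch == "main"
        # The following annotations are ignored because on-cel-expression is set:
        pipelinesascode.tekton.dev/on-event: "[push]"
        pipelinesascode.tekton.dev/on-target-branch: "[main]"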

1.2.1.8. Event-based Pruner

Important

The Pruner component is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • With this update, the Pruner component introduces automated cleanup of PipelineRun and TaskRun resources in Red Hat OpenShift Pipelines. It supports the following features:

    • Time-based pruning (TTL). This feature automatically deletes completed PipelineRun and TaskRun resources after a specified duration. This is controlled through the ttlSecondsAfterFinished setting.
    • History-based pruning. This feature retains a limited number of successful and failed runs. It is configured through the following parameters:

      • successfulHistoryLimit
      • failedHistoryLimit
      • historyLimit
    • Flexible configuration levels. There are two configurable levels:

      • Global. This option applies to all namespaces except those prefixed with kube- and openshift-.
      • Namespace. This option applies to all resources in a specific namespace.

        Note

        In this release, only Global and Namespace configurations are available.
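    As a sketch only, a global configuration using the parameters above might be expressed in the tekton-pruner-default-spec config map as follows; the exact key layout is an assumption, so check the config map shipped with your installation before editing it:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tekton-pruner-default-spec
      namespace: openshift-pipelines
    data:
      global-config: |
        ttlSecondsAfterFinished: 600   # assumed: delete completed runs after 10 minutes
        successfulHistoryLimit: 3      # assumed: keep the last 3 successful runs
        failedHistoryLimit: 3          # assumed: keep the last 3 failed runs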

  • With this update, you can disable or enable the event-based pruner in the TektonConfig CR by setting spec.tektonpruner.disabled to true or false. Fine-grained configuration is not yet supported in the TektonConfig CR and must be managed through config maps.

    Note

    The existing job-based pruner must be disabled before enabling the event-based pruner.
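    Based on the spec.tektonpruner.disabled field described above, enabling the event-based pruner can be sketched as follows:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      tektonpruner:
        disabled: false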

1.2.2. Breaking changes

  • With this release, the hub clustertask command is removed from the CLI because the ClusterTask functionality is no longer available on Tekton Hub.
  • With this release, support for ClusterTask objects is removed. As a result, the tkn clustertask and tkn task create commands are no longer available.
  • With this release, the opc results list command is replaced with the opc results result list command.
  • With this update, the disable-affinity-assistant flag is removed from the Red Hat OpenShift Pipelines Operator. This flag was deprecated in Red Hat OpenShift Pipelines 1.13 and has no effect in Red Hat OpenShift Pipelines 1.19. The disable-affinity-assistant flag is still available in the TektonConfig custom resource (CR) for backward compatibility, but it does not affect the behavior of the Red Hat OpenShift Pipelines Operator. To maintain the same behavior, set the coschedule feature flag to disabled.

1.2.3. Known issues

  • Currently, the event-based pruner does not strictly validate the contents of the tekton-pruner-default-spec config map. If invalid configuration keys or malformed values with incorrect field names are provided, the configuration is ignored. As a result, the pruner might fall back to default behavior or skip pruning altogether.

1.2.4. Fixed issues

  • Before this update, the s2i-java task failed with an error message, /usr/libexec/s2i/assemble: No such file or directory. This error occurred due to incorrect script path references. With this update, the default script path for the s2i-java task is modified to /usr/local/s2i. Other S2I tasks, such as those for Go or .NET, continue to use the /usr/libexec/s2i/assemble script path.
  • Before this update, YAML syntax errors in PipelineRun were only reported in logs and Kubernetes events, making them difficult to detect and troubleshoot. With this update, Pipelines as Code comments directly on pull requests when PipelineRun YAML validation errors occur. This improves error visibility and simplifies troubleshooting on GitHub, GitLab, and Gitea providers.
  • Before this update, in the GitLab integration, Pipelines as Code posted comments on merge requests at the start and end of each PipelineRun. This behavior led to excessive comments appearing on merge requests when multiple pipeline runs were triggered. With this update, you can disable all GitLab comments by setting the comment_strategy field to disable_all in the Repository custom resource (CR).

    Example of disabling GitLab comments in the Repository CR

    ---
    apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
    kind: Repository
    metadata:
      name: test-pac
    spec:
      # other fields
      settings:
        gitlab:
          comment_strategy: "disable_all"

  • Before this update, logs retrieved from the Amazon Web Services (AWS) S3 bucket were displayed in a random order, making debugging and troubleshooting difficult. With this update, logs from AWS S3 are now correctly ordered chronologically, improving readability and the overall debugging experience.
  • Before this update, Pipelines as Code required all custom parameters defined in a Repository CR to include predefined values. With this update, custom parameters can now be defined without specifying default values in the Repository CR. This change enables values to be supplied through webhook payloads and preserves backward compatibility.
  • Before this update, when using the github-push ClusterTriggerBinding, the git-clone command could fail with HTTP 403 errors. This issue occurred because the $(body.repository.url) parameter pointed to the GitHub API URL instead of a valid Git clone URL. With this update, a new git-repo-clone-url parameter uses $(body.repository.html_url) to ensure that the cloning uses the correct repository URL.
  • Before this update, the buildah task failed to process build arguments that contained spaces when used with the cluster resolver. This issue affected users migrating from the deprecated ClusterTask custom resource (CR). With this update, the BUILD_ARGS parameter in the buildah task now correctly supports arguments with spaces, for example, EXAMPLE="abc def", restoring compatibility with previous functionality.
  • Before this update, the PipelineRun details page in the OpenShift Container Platform web console failed to load correctly, preventing users from viewing pipeline run details. With this update, the web console displays the PipelineRun information correctly.
  • Before this update, the console plugin styling was outdated due to the upgrade to PatternFly 6 and the removal of deprecated co- classes. This caused alignment and spacing issues in the Pipelines section of the OpenShift Container Platform web console. With this update, the console plugin styling uses the appropriate PatternFly equivalent classes, ensuring consistent alignment and visual integration with the current OpenShift Container Platform web console design standards.
  • Before this update, the OpenShift Pipelines console plugin failed due to a default Tekton Results TLS secret creation issue in OpenShift Pipelines 1.18. This caused the console to be inaccessible, making pipeline details unviewable. With this release, the default Tekton Results TLS secret creation is skipped in OpenShift Pipelines 1.18, resolving the issue.
  • Before this update, links for PipelineRun in the OpenShift Container Platform web console incorrectly pointed to the deprecated v1beta1 Red Hat OpenShift Pipelines APIs instead of the current v1 APIs. With this update, the links point to the appropriate v1 APIs.
  • Before this update, Red Hat OpenShift Pipelines and Tekton Results incorrectly displayed TaskRun resources from previous PipelineRun resources that shared the same name. This led to confusion about which TaskRun resources were associated with the current execution. With this update, Tekton Results correctly isolates and displays only the TaskRun resources associated with the current PipelineRun resource, preventing the mixing of archived and active execution data.
  • Before this update, end-to-end (E2E) tests were unstable due to GitOps comments being incorrectly associated with cancelled pipeline runs. This behavior caused intermittent test failures and reduced reliability in CI/CD pipelines. With this update, GitOps comments are no longer mixed with cancelled pipeline runs, resulting in stable and predictable E2E tests.
  • Before this update, the tekton-caches tarit tool did not preserve file permissions when compressing cached directories. As a result, executable files and scripts sometimes stopped working after being unpacked, especially when artifacts were used by different users or with SELinux-enforcing base images. With this update, file permissions are preserved during caching and files work as expected in all user environments.
  • Before this update, when a TaskRun failed due to ImagePullBackOff errors, the PipelineRun log snippet displayed unclear messages, such as “pods not found,” after switching between tabs in the Pipelines section of the OpenShift Container Platform web console. With this update, errors include clear error messages, such as TaskRunImagePullFailed or failing to pull image, to improve the troubleshooting experience.
  • Before this update, certain elements within the Red Hat OpenShift Pipelines Start interface in the OpenShift Container Platform web console, such as deployment-name, Hr, Min, and Sec, were always displayed in English, regardless of the user’s regional settings. With this update, all interface elements are fully localized and now display according to the user’s selected region.
  • Before this update, the Tekton pruner job encountered ImagePullBackOff errors during Helm-based installation due to missing SHA256 digests in image tags. With this update, the image tags include the required SHA256 digests and the error no longer occurs.
  • Before this update, the Pipelines as Code controller could crash with an index out of range error during push events from Bitbucket Data Center. This behavior occurred when the changes array in the event payload was empty. With this update, Pipelines as Code now handles empty changes arrays gracefully, preventing the controller from crashing.
  • Before this update, adding labels to pull requests would unintentionally trigger a PipelineRun. With this update, this issue is resolved.
  • Before this update, closing a pull request canceled an ongoing PipelineRun even if the cancel-in-progress annotation was not set. With this update, pipeline runs are canceled on pull request closure only when you configure the cancel-in-progress annotation.
  • Before this update, the GitLab integration in Pipelines as Code encountered API call failures caused by an incorrect API URL. With this update, this issue is fixed by introducing URL validation, which prevents such misconfigurations and ensures successful API communication.
  • Before this update, Pipelines as Code did not cancel a PipelineRun created with the generateName field, even when the cancel-in-progress annotation was set. With this update, Pipelines as Code correctly cancels an in-progress PipelineRun that contains the generateName field.
  • Before this update, when provenance was configured in GitLab, Pipelines as Code retrieved an incorrect PipelineRun template from the Git repository. With this update, Pipelines as Code correctly identifies and retrieves the intended template in GitLab provenance setups.
  • Before this update, if you used the /ok-to-test GitOps command in a push commit comment, it triggered a pipeline run. With this update, the /ok-to-test command no longer triggers a pipeline run when used outside of pull requests.
  • Before this update, TaskRun and PipelineRun resources failed with the kind param must be task or pipeline error when referencing StepAction definitions by the Artifact Hub resolver. This happened because the StepAction definition was not recognized as a valid resource type. With this update, the Artifact Hub resolver supports StepAction references, allowing users to include remote step actions in their tasks and pipelines.
  • Before this update, PipelineRun failed with the error failed to create subPath directory for volumeMount, even though OpenShift Container Platform would eventually recover and create the required pod. This led to unnecessary PipelineRun failures and a poor user experience, often requiring manual restarts. With this update, PipelineRun implements a grace period and retry mechanism for subPath directory creation errors. This allows OpenShift Container Platform time to resolve the issue automatically, reducing false failures and improving reliability.
  • Before this update, the Tekton Results API server encountered an error when a log query was made for a non-existent TaskRun or PipelineRun. With this update, the issue is fixed.
  • Before this update, users had to wrap the CLI in separate Task resources to back up or restore build caches. With this update, StepAction definitions support fetch and upload, so you can handle cache operations with a single step inside any Task or Pipeline.
  • Before this update, pushing a cache to a registry with a self-signed certificate failed due to TLS errors. With this update, the CLI and Task resource support a new --insecure flag, which enables those pushes and makes it easier to work with air-gapped development clusters and local registries.
  • Before this update, in GitLab, pipeline runs for Merge Requests (MRs) were automatically re-triggered when non-code changes occurred in Pipelines as Code, such as updating the MR description or modifying reviewers. This behavior caused unnecessary pipeline executions. With this update, this issue is fixed, and pipeline runs are only triggered by new commits.
  • Before this update, in Pipelines as Code, a push PipelineRun with the on-path-change annotation was not triggered on pull request merge events, even when the merged pull request modified the specified paths. With this update, this issue is fixed by ensuring that the pipeline is correctly triggered when relevant path changes are introduced through a pull request merge.
  • Before this update, Pipelines as Code attempted to parse and validate every YAML file in the .tekton directory, resulting in false errors for unrelated or invalid non-Tekton resources. With this update, Pipelines as Code validates only explicitly defined Tekton resources, reducing noise in pull request feedback and improving the accuracy of CI validation.
  • Before this update, a non-domain-qualified finalizer name was used, leading to warnings in the Pipelines as Code watcher from the Kubernetes API. This issue is now resolved by using a domain-qualified finalizer name that aligns with Kubernetes conventions.
  • Before this update, the Pipelines as Code controller terminated unexpectedly when validating GitHub webhook secrets with an invalid or expired token. With this update, this issue is fixed. The controller logs a clear error message and continues running, ensuring webhook functionality and controller availability are not disrupted.
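Several of the fixes above concern Pipelines as Code annotations such as cancel-in-progress and on-path-change. The following sketch shows where these annotations are set on a PipelineRun; the event, branch, path, and pipeline names are illustrative placeholders, not values from this release:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  # generateName is now honored when cancel-in-progress is evaluated
  generateName: pull-request-
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    # Cancel an older in-progress run when a newer one starts; runs are
    # also canceled on pull request closure only when this annotation is set.
    pipelinesascode.tekton.dev/cancel-in-progress: "true"
    # Trigger only when files under these paths change (also honored on
    # pull request merge push events after this update).
    pipelinesascode.tekton.dev/on-path-change: "[docs/**]"
spec:
  pipelineRef:
    name: build-and-test   # illustrative pipeline name
```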
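The Artifact Hub resolver fix above enables remote StepAction references. A minimal sketch of a Task step that pulls a StepAction through the hub resolver follows; the catalog, name, and version values are assumptions for illustration only:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: example-task
spec:
  steps:
    - name: remote-step
      ref:
        resolver: hub
        params:
          - name: type
            value: artifact                       # query Artifact Hub
          - name: kind
            value: stepaction                     # previously rejected; now supported
          - name: catalog
            value: tekton-catalog-stepactions     # illustrative catalog name
          - name: name
            value: git-clone                      # illustrative StepAction name
          - name: version
            value: "0.1"                          # illustrative version
```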

1.3. Release notes for Red Hat OpenShift Pipelines 1.19.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.19.1 is available on OpenShift Container Platform 4.15 and later versions.

1.3.1. Fixed issues

  • Before this update, installing Red Hat OpenShift Pipelines 1.19 displayed the Tekton Pruner API in the console. With this update, this issue is fixed and the API is no longer displayed.
  • Before this update, installing OpenShift Pipelines on a FIPS-enabled cluster caused the Tekton entrypoint binary to fail with the FIPS mode is enabled, but this binary is not compiled with FIPS compliant mode enabled error message. With this update, this issue is fixed. OpenShift Pipelines functions correctly on FIPS-enabled clusters, and the Red Hat OpenShift Pipelines Operator is designed and validated for use on FIPS-enabled clusters.

1.4. Release notes for Red Hat OpenShift Pipelines 1.19.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.19.2 is available on OpenShift Container Platform 4.15 and later versions.

1.4.1. Fixed issues

  • Before this update, when you installed the OpenShift Pipelines Operator 1.19.0 by using the web console, the Tekton Pruner API was incorrectly visible in the Details section. With this update, the Tekton Pruner API is no longer visible in the web console, as intended by the pruner design.
  • Before this update, the Pipeline Builder page only fetched a limited number of tasks from Artifact Hub. If a searched task was not included in that initial list, it would not appear in the results. With this update, the UI now makes a direct API call to Artifact Hub during quick search, allowing users to find and list additional tasks for installation.
  • Before this update, the Events tab for a PipelineRun did not display any events. With this update, the tab now correctly shows events, allowing you to view relevant activity for each PipelineRun.
  • Before this update, onError variable substitution did not work with the v1beta1 API. With this update, the issue has been fixed.
  • Before this update, the Git resolver did not use the provided gitToken and gitTokenKey for remote authentication, which caused HTTP token-based authentication to fail. With this update, all Git resolver operations now use the provided gitToken for remote authentication.
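The Git resolver fix above restores HTTP token-based authentication. The following is a hedged sketch of a taskRef that passes the gitToken and gitTokenKey parameters; the repository URL, path, secret name, and key are placeholders, not values from this release:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: remote-task-example
spec:
  pipelineSpec:
    tasks:
      - name: build
        taskRef:
          resolver: git
          params:
            - name: url
              value: https://git.example.com/org/repo.git   # placeholder URL
            - name: revision
              value: main
            - name: pathInRepo
              value: tasks/build.yaml
            - name: gitToken
              value: git-auth-secret   # name of the Secret holding the token (assumed)
            - name: gitTokenKey
              value: token             # key within that Secret (assumed)
```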

1.5. Release notes for Red Hat OpenShift Pipelines 1.19.3

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.19.3 is available on OpenShift Container Platform 4.15 and later versions.

1.5.1. Fixed issues

  • Before this update, upgrading to version 1.19.0 of the Red Hat OpenShift Pipelines Operator could cause the Operator pod to crash. This issue was caused by a nil pointer dereference at startup when the spec.tektonpruner.disabled field in the TektonConfig custom resource (CR) was nil. With this update, the issue is fixed.
  • Before this update, the tekton_pipelines_controller_running_pipelineruns metric included pending PipelineRun objects in its count, causing inaccurate reporting. With this update, the metric counts only actively running PipelineRun objects, excluding those in a pending state, improving monitoring accuracy.
  • Before this update, the default Tekton Results TLS secret was mistakenly deleted during pre-upgrade steps on the OpenShift Container Platform. This deletion caused the Tekton Results API to fail during the upgrade process. With this update, OpenShift Pipelines prevents the deletion of the TLS secret, ensuring the Tekton Results API remains operational throughout the upgrade.
  • Before this update, a recent regression caused the git resolver to bypass the Proxy custom-configured public key infrastructure (PKI), which could prevent it from resolving references to self-hosted Git providers. With this update, the git resolver trusts the full CA bundle configured in the Proxy, including any certificates in your custom PKI, restoring secure connectivity to self-hosted Git providers.
  • Before this update, if a TaskRun object failed to create a PersistentVolumeClaim (PVC) due to a ResourceQuota conflict, such as a concurrent update or exhausted quota, the TaskRun was immediately marked as Failed. With this update, when a TaskRun encounters a PVC creation failure caused by a ResourceQuota issue, it remains in a pending state and retries PVC creation instead of failing immediately.
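The Operator crash described above involved the spec.tektonpruner.disabled field being unset. Setting the field explicitly in the TektonConfig custom resource avoids the nil value; a minimal sketch:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: true   # explicitly disable the Tekton pruner component
```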