Chapter 1. Red Hat OpenShift Pipelines release notes
For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.
Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the OpenShift Container Platform web console.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, each feature is marked with one of the following statuses:

| Status | Description |
|---|---|
| TP | Technology Preview |
| GA | General Availability |
| Red Hat OpenShift Pipelines (Operator) Version | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | Pruner | Cache | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.22 | 1.9.x | 0.35.x | 0.44.x | 0.26.x (GA) | 1.23.x (TP) | 0.42.x (GA) | 0.18.x (GA) | 0.8.x (TP) | 0.3.x (GA) | 0.3.x (GA) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20, 4.21 | GA |
| 1.21 | 1.6.x | 0.34.x | 0.43.x | 0.26.x (GA) | 1.23.x (TP) | 0.39.x (GA) | 0.17.x (GA) | 0.7.x (TP) | 0.3.x (GA) | 0.3.x (GA) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20, 4.21 | GA |
| 1.20 | 1.3.x | 0.33.x | 0.42.x | 0.25.x (GA) | 1.22.x (TP) | 0.37.x (GA) | 0.16.x (GA) | 0.6.x (TP) | 0.2.x (TP) | 0.2.x (TP) | 4.14, 4.16, 4.17, 4.18, 4.19, 4.20, 4.21 | GA |
The OpenShift console plugin for OpenShift Pipelines follows the same version as the OpenShift Pipelines Operator.
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
1.2. Release notes for Red Hat OpenShift Pipelines 1.22
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.22 is available on OpenShift Container Platform 4.14 and later supported versions.
For more information about supported OpenShift Container Platform versions, see Life Cycle Dates.
1.2.1. New features and enhancements
In addition to fixes and stability improvements, these sections highlight what is new in Red Hat OpenShift Pipelines 1.22:
Pipelines
- hostUsers support in podTemplate for user namespace isolation
With this update, OpenShift Pipelines supports the `hostUsers` setting in the `podTemplate` field. You can configure the Kubernetes-native user namespace isolation on OpenShift Container Platform 4.20 and later without using legacy CRI-O annotations. Setting `hostUsers` lets you explicitly control host user namespace sharing for both TaskRun and PipelineRun workloads.
- The HTTP resolver supports content verification with a hash parameter
With this update, the HTTP resolver includes an optional `hash` parameter that accepts SHA-256 or SHA-512 hashes of the expected content. When you provide a hash, the resolver verifies content integrity after fetching by comparing the hash of the received content against the expected value. This enhancement helps improve security by ensuring that the content you fetch from HTTP sources matches your expectations, similar to the content verification available with the git resolver using commit hashes and the bundle resolver using digests.
- Resolver caching reduces API calls and helps prevent rate limit errors
With this update, OpenShift Pipelines supports caching for bundle, git, and cluster resolvers. This reduces redundant fetches of remote resources and helps prevent API rate limit errors when executing pipelines. The cache operates in three modes:
- `always`: cache all resources
- `never`: disable caching
- `auto`: cache only immutable references such as git SHAs and digest-based bundles

The default mode is `auto`. You can configure cache behavior globally through resolver config maps or override it per task using the `cache` parameter.
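As a sketch, a per-task override might pass the `cache` parameter alongside the resolver parameters. The parameter name comes from this release note; the bundle reference and task name below are placeholders, so verify the exact parameter placement against the resolver documentation:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: cached-task-run
spec:
  taskRef:
    resolver: bundles
    params:
    - name: bundle
      value: registry.example.com/catalog/tasks:0.1  # placeholder image reference
    - name: name
      value: example-task                            # placeholder task name
    - name: kind
      value: task
    - name: cache    # per-task cache mode override: always, never, or auto
      value: always
```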
- Array values are supported in `when` expressions
With this update, array values can be resolved in the `input` attribute of `when` expressions, enabling more flexible conditional logic in pipelines.
- Step display names improve pipeline readability
With this update, step objects support a `displayName` field, helping improve pipeline readability and monitoring.
- Pipelines can execute embedded pipelines
With this update, pipelines can execute embedded pipelines directly using the `PipelineSpec` field under tasks, enabling pipelines-in-pipelines functionality.
- Concurrent StepAction resolution reduces TaskRun startup time
With this update, `StepActions` in a `TaskRun` are resolved concurrently instead of sequentially, reducing startup time when using multiple remote `StepActions`.
- PipelineRuns handle PVC quota limits with resilience
With this update, when persistent volume claim (PVC) creation hits a quota limit, the `PipelineRun` is requeued until quota becomes available or the run times out, instead of failing immediately.
- Per-task timeout overrides in PipelineRun
With this update, individual task timeouts can be overridden at the `PipelineRun` level using the `spec.taskRunSpecs[].timeout` field.
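As a sketch of the `spec.taskRunSpecs[].timeout` field described above, assuming a pipeline with a task named `build` (the pipeline and task names are placeholders):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: run-with-task-timeout
spec:
  pipelineRef:
    name: example-pipeline    # placeholder pipeline name
  taskRunSpecs:
  - pipelineTaskName: build   # placeholder task name from the pipeline
    timeout: 30m              # overrides the timeout for this task only
```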
Operator
- ServiceMonitor configuration for Results and Pipelines components
With this update, the OpenShift Pipelines Operator creates `ServiceMonitor` resources for the Tekton Results and `tekton-pipelines-webhook` components. This enables the Prometheus Operator to automatically discover their metrics endpoints, simplifying integration with OpenShift monitoring and alerting.
Pipelines as Code
- Caching changed files to reduce VCS API load
With this update, Pipelines as Code caches the list of changed files per event. This reduces redundant VCS API calls when evaluating `path.pathChanged()` expressions or `on-path-change` annotations, helping minimize the risk of hitting API rate limits.
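For context, a path-based trigger uses `PipelineRun` annotations like the following sketch; the branch and path values are placeholders chosen for illustration:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: docs-lint
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    # Trigger only when files under docs/ change
    pipelinesascode.tekton.dev/on-path-change: "[docs/**]"
spec:
# ...
```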
- Update comment strategy for GitLab and GitHub
With this update, Pipelines as Code introduces a new comment strategy named `update` for GitLab and GitHub webhooks. Instead of posting multiple comments per `PipelineRun`, the system maintains a single status comment. When the pipeline status changes or the run is re-executed, the existing comment is updated with the new status and commit SHA. This helps reduce comment noise and improve repository readability.
- Commit message tags to skip pipeline execution
With this update, pipeline executions can be skipped using specific commit message tags. Supported, case-insensitive tags include:
- `[skip ci]`
- `[ci skip]`
- `[skip tkn]`
- `[tkn skip]`

This prevents unnecessary pipeline executions for minor or work-in-progress commits.
Important: When using `[skip ci]` or `[ci skip]` commands in commit messages on GitLab, one additional pipeline entry appears for the same commit SHA in the GitLab UI. This is expected behavior on GitLab, which uses the skip commands to skip any GitLab CI regardless of whether a `.gitlab-ci.yml` file is defined.
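The case-insensitive tag matching can be illustrated with a rough shell sketch. This is only an illustration of the matching behavior, not the actual Pipelines as Code implementation:

```shell
# Hypothetical check: does a commit message contain a skip tag?
msg="chore: bump dependencies [Skip CI]"
if printf '%s' "$msg" | grep -qiE '\[(skip (ci|tkn)|(ci|tkn) skip)\]'; then
  echo "pipeline skipped"
else
  echo "pipeline runs"
fi
```

Because the match is case-insensitive, `[Skip CI]` is treated the same as `[skip ci]`.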
- Glob pattern support for GitHub App token scoping
With this update, Pipelines as Code supports glob patterns when specifying repositories for GitHub App token scoping. You can use wildcard patterns, such as `my-org/*`, in both global config maps and repository-level configurations. Users managing many private Git submodules can grant token access to all relevant repositories in a single configuration step.

The following example shows glob pattern usage in a global config map:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code
  namespace: pipelines-as-code
data:
  secret-github-app-scope-extra-repos: "owner2/project2, owner3/*"
# ...
```

The following example shows glob pattern usage in a `Repository` custom resource:

```yaml
apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
  name: test
  namespace: test-repo
spec:
  url: "https://github.com/linda/project"
  settings:
    github_app_token_scope_repos:
    - "owner/project"
    - "owner1/*"
# ...
```

- Webhook signature validation for Forgejo and Gitea providers
With this update, Pipelines as Code enforces webhook signature validation for Forgejo and Gitea providers. Previously, the provider implementation skipped validation, which could allow unauthenticated or spoofed requests to trigger pipelines. With this release, the system verifies that incoming requests originate from trusted sources.
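For context, signature validation relies on the webhook secret configured for the repository. The following sketch shows the general shape of such a configuration; the URL, secret name, and key are placeholders, and the exact field layout should be verified against the Pipelines as Code documentation:

```yaml
apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
  name: forgejo-repo
spec:
  url: "https://forgejo.example.com/owner/repo"  # placeholder URL
  git_provider:
    webhook_secret:
      name: "repo-webhook-secret"  # placeholder Secret name
      key: "webhook.secret"        # placeholder key within the Secret
```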
- CEL expressions are supported in pipeline templates
With this update, Pipelines as Code supports the `cel:` prefix for evaluating Common Expression Language (CEL) expressions directly within pipeline templates. Previously, template variables were limited to simple data extraction from the request body, headers, or files. You can use the `cel:` prefix to perform inline logic, including ternary operators, presence checks, and complex string compositions. The `body`, `headers`, `files`, and `pac` namespaces are exposed to the CEL evaluation engine.

For example, the following expression conditionally selects a commit ID based on the presence of parent commits:

```
{{ cel: has(body.toCommit) && body.toCommit.parents.size() > 1 ? body.toCommit.parents[1].id : body.toCommit.id }}
```

- Optimized GitHub API calls help improve .tekton file retrieval performance
With this update, Pipelines as Code uses the GitHub GraphQL API to fetch multiple .tekton configuration files in a single batched request instead of making individual REST API calls. This reduces the number of GitHub API calls and helps improve performance when discovering pipeline definitions on both GitHub and GitHub Enterprise instances.
User interface
- ANSI color support in the OpenShift console log viewer
With this update, the log viewer in the OpenShift console supports ANSI color codes for `TaskRun` and `PipelineRun` logs, helping improve readability.
1.2.2. Technology Preview features
Technology Preview features offer early access to new product innovations. These features are not fully supported, might be incomplete, and are not for production use. For more information, see Technology Preview Features Support Scope.
Multi-cluster
- Multi-cluster configuration in TektonConfig (Technology Preview)
With this update, the `TektonConfig` custom resource includes new multi-cluster configuration fields in the `scheduler` section. This enables configuration of multi-cluster setups for OpenShift Pipelines. You can set `multi-cluster-disabled` to enable or disable multi-cluster mode and specify the `multi-cluster-role` as either `Hub` or `Spoke`.

```yaml
# ...
scheduler:
  config.yaml:
    cel: {}
  queueName: pipelines-queue
  disabled: false
  multi-cluster-disabled: false
  multi-cluster-role: Hub
  options: {}
# ...
```

- Automatic scaling for Tekton Results in multi-cluster Hub mode (Technology Preview)
With this update, when the OpenShift Pipelines Operator is configured with `multi-cluster-disabled=false` and `multi-cluster-role=Hub`, the replicas of the `tekton-results-watcher` and `tekton-results-retention-policy-agent` deployments are set to zero. These components run only on Spoke clusters, where local data fetching and retention occur, reducing resource usage on the Hub cluster without affecting the centralized Results API.
- Tekton Scheduler installation using the OpenShift Pipelines Operator (Technology Preview)
With this update, the OpenShift Pipelines Operator supports installation and management of the Tekton Scheduler (Tekton-Kueue). A new `scheduler` section in the `TektonConfig` CR allows enabling the scheduler and specifying a default queue name. The integration supports multi-cluster configurations with `multi-cluster-disabled` and `multi-cluster-role`. Administrators can manage pipeline resource allocation and queuing across single or multiple OpenShift Container Platform clusters.

The upstream Kueue component must be installed for the Tekton Scheduler to function.
- Visual indicator for federated PipelineRuns in the multi-cluster UI (Technology Preview)
With this update, the multi-cluster user interface (UI) includes an icon to differentiate between a `PipelineRun` object running on the local hub cluster and a federated `PipelineRun`. The UI determines this by evaluating the `managedBy` field within the `PipelineRun` resource.
1.2.3. Breaking changes
User interface
- Pipelines console navigation and plugin integration update
With this update, the legacy static console plugin is fully deprecated. Previously, a limited Pipelines navigation entry appeared even when the console plugin was disabled. This behavior is no longer supported.
You must explicitly enable the console plugin to access the Pipelines section in the OpenShift console.
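As a sketch, console plugins are enabled through the cluster `Console` operator configuration. The plugin name shown below is an assumption; verify the actual name with `oc get consoleplugins` before applying:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
  - pipelines-console-plugin  # assumed plugin name; verify with `oc get consoleplugins`
```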
1.2.4. Known issues
- buildah-ns task fails on OpenShift Container Platform 4.20 and later
When you use the buildah-ns task on OpenShift Container Platform 4.20 and later, the task fails with the error `reading ID mappings from "/proc/0/uid_map": open /proc/0/uid_map: no such file or directory`. This occurs because the CRI-O annotation-based user namespace mechanism using `io.kubernetes.cri-o.userns-mode: "auto"` was removed in OpenShift Container Platform 4.20 due to upstream Kubernetes changes. As a consequence, the buildah-ns task cannot enable user namespaces using the annotation-based approach.

To work around this issue, use the standard `buildah` task and configure `hostUsers: false` in the `podTemplate` field to enable user namespace support through the Kubernetes-native mechanism available on OpenShift Container Platform 4.20 and later.
- tkn CLI has limited functionality in multicluster environments
When using the `tkn` CLI tool in a multicluster setup, several commands do not work correctly. As a consequence, the following commands fail or produce incorrect results on the Hub cluster because task runs are executed on Spoke clusters:

- `tkn taskrun list`
- `tkn pipelinerun describe`
- `tkn pipelinerun logs`
- `tkn pipelinerun cancel`

Additionally, on Spoke clusters, `tkn pipelinerun list` and `tkn taskrun list` only work while pipeline runs are still running, because completed runs are garbage collected immediately and no longer appear in the list.
- opc results logs get command limits output to 300 lines
When using the `opc results logs get` command to retrieve logs, the output is limited to 300 lines regardless of the actual log length. As a consequence, you cannot view complete logs for pipeline runs or task runs that generate more than 300 lines of output.

To work around this issue, use the `opc results pipelinerun logs` or `opc results taskrun logs` commands instead. These commands provide complete log output. The `opc results logs get` command might be deprecated in a future release.
1.2.5. Fixed issues
Pipelines
- Affinity Assistant pods inherit correct service account
Before this update, when the affinity assistant was enabled to co-schedule tasks sharing workspace volumes, Affinity Assistant pods used the `default` service account instead of the service account configured in the PipelineRun. In security-restricted environments, the `default` service account lacked the necessary Security Context Constraints (SCC) permissions. As a consequence, tasks requiring workspace access would not start, causing the PipelineRun to block. With this update, Affinity Assistant pods inherit the service account from the PipelineRun's `taskRunTemplate` by default. As a result, Affinity Assistant pods have the correct permissions, and PipelineRuns using workspaces run successfully without requiring manual SCC configuration.
- Misconfigured TaskRun pods fail early with clear error messages
Before this update, when a `TaskRun` pod could not start due to misconfiguration, such as a missing config map or secret, the task run timed out silently with a generic timeout message. As a consequence, you could not identify the root cause of the failure, making debugging difficult. With this update, task runs with pod configuration errors fail immediately with a clear error message indicating the specific issue, such as missing resources. As a result, you can identify and resolve pod configuration problems without waiting for timeout periods.
- Pipeline runs without timeouts no longer cause excessive reconciliation
Before this update, when you did not configure a timeout on a task run or pipeline run, the controller reconciled the run thousands of times continuously while in progress. As a consequence, this caused excessive CPU usage on the controller, increased latency for other operations, and impacted cluster performance and scalability. With this update, the reconciliation logic correctly handles runs without timeout configurations. As a result, reconciliation occurs only when there are actual changes to the run or its child resources, helping improve controller performance and cluster stability.
- Pipeline parameter defaults correctly resolve references to other parameters
Before this update, when you defined a pipeline parameter default value that referenced another parameter, such as `default: $(params.registry)/app`, the parameter was not resolved and the literal string was used instead. As a consequence, task pod creation failed, and you could not use fallback patterns in pipelines and tasks. With this update, parameter resolution supports arbitrary dependency chains for all parameter types, including string, array, and object, and notation styles such as `params.X`, `params["X"]`, and `params['X']`. Circular dependencies are detected and reported with clear error messages. As a result, you can define parameter defaults that reference other parameters, enabling flexible fallback patterns in your pipelines.
- PipelineRuns correctly handle TaskRef reconciliation errors
Before this update, `PipelineRuns` failed when `TaskRef` reconciliation encountered retryable errors. As a consequence, pipelines failed unnecessarily due to transient issues. With this update, `PipelineRuns` fail only on explicit validation errors and retry on retryable errors. As a result, pipelines are more resilient to temporary issues.
- Kubernetes-native sidecars correctly handle signals
Before this update, Kubernetes-native sidecars experienced issues with repeated init container restarts due to improper signal handling. With this update, the controller adds signal handling to `SidecarLog` results. As a result, sidecars operate more reliably.
- TaskRun pods are retained when timeouts occur with keep-pod-on-cancel enabled
Before this update, the controller did not retain pods for timed-out `TaskRuns` when the `keep-pod-on-cancel` feature flag was enabled. As a consequence, debugging information was lost. With this update, the controller retains pods when the flag is enabled. As a result, you can inspect pods for timed-out task runs.
- StepAction status displays in the correct order
Before this update, status steps displayed in incorrect order when using `StepAction`. As a consequence, monitoring and debugging were more difficult. With this update, the controller displays status steps in the correct order. As a result, task execution flow is easier to understand.
- Task runs execute successfully on arm64 Kubernetes clusters
Before this update, arm64 Kubernetes clusters experienced task run failures due to platform variant mismatch in entrypoint lookup. As a consequence, tasks failed on arm64 clusters. With this update, the entrypoint correctly handles Linux platform variants. As a result, task runs succeed on arm64 clusters.
- Pipeline run status updates reduce API server load
Before this update, unordered arrays in pipeline run status caused massive invalid status updates, impacting API server load and stability. As a consequence, cluster performance degraded. With this update, the controller ensures consistent array ordering. As a result, API server load is reduced and cluster stability improves.
Operator
- Webhooks are properly cleaned up when the Operator namespace is deleted
Before this update, the `proxy.operator.tekton.dev`, `validation.pipelinesascode.tekton.dev`, and `namespace.operator.tekton.dev` webhooks lacked owner references to the `openshift-pipelines` namespace. As a consequence, when you uninstalled the Pipelines Operator by deleting the namespace, these webhooks were not removed and remained on the cluster. With this update, owner references are added to all operator webhooks. As a result, all webhooks are properly cleaned up when you uninstall the Operator.
- Prometheus metrics collection works when the Operator is installed in a custom namespace
Before this update, the Operator `ServiceMonitor` resource had a hardcoded namespace reference to `openshift-operators` in its namespace selector. As a consequence, when you installed the Pipelines Operator in a different namespace, such as `openshift-pipelines`, Prometheus attempted to scrape metrics from the wrong namespace, causing permission errors and triggering the `PrometheusKubernetesListWatchFailures` alert. With this update, the hardcoded namespace reference is removed, and the `ServiceMonitor` automatically targets the namespace where the Operator is installed. As a result, Prometheus metrics collection works correctly regardless of the namespace where you install the Operator.
Pipelines as Code
- Custom parameters are supported in CEL expressions correctly
Before this update, custom parameters defined in the `Repository` custom resource (CR) were not recognized in `on-cel-expression` annotations. This resulted in "undeclared reference" errors for custom parameters and, in some cases, standard variables such as `event_type`. As a result, repository-specific parameters could not be used for event filtering. With this update, custom parameters are exposed as Common Expression Language (CEL) variables and can be referenced directly in CEL expressions. Standard CEL variables take precedence if a naming conflict occurs.

For example, with this `Repository` CR configuration:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-repo
spec:
  url: "https://github.com/owner/repo"
  params:
  - name: enable_ci
    value: "true"
  - name: environment
    value: "staging"
```

You can use these parameters directly in your `PipelineRun` CEL expression:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: my-pipeline
  annotations:
    pipelinesascode.tekton.dev/on-cel-expression: |
      event == "push" && enable_ci == "true" && environment == "staging"
spec:
# ...
```

- Commit messages containing "[skip ci]" are respected in GitLab merge requests
Before this update, commit messages containing "[skip ci]" were not honored by the GitLab provider, and `PipelineRun` resources were created despite the skip directive. This behavior caused unnecessary pipeline executions. With this update, the GitLab integration correctly detects and respects skip directives in commit messages.
- Non-HTTP(S) pipeline URLs no longer cause "invalid port" errors
Before this update, custom hub catalog references that included version specifiers (for example, `foo://resource:1.2`) could cause an "invalid port" error. The Go `url.Parse()` function interpreted the version delimiter as a port separator, which caused pipeline resolution to fail. With this update, non-HTTP(S) schemes are excluded from URL parsing in task resolution logic, allowing versioned custom catalog references to function correctly.
- The tkn pac cel command enforces required flags and provides clearer errors
Before this update, running the `tkn pac cel` command without the required body (`-b`) and header (`-H`) flags produced misleading error messages such as "auto-detection failed" or "unexpected end of JSON input." With this update, both flags are mandatory, and the command returns a clear error message when required arguments are missing.
- The pull_request_number variable is more reliably populated for push events
Before this update, Pipelines as Code occasionally failed to identify pull request numbers during push events associated with a pull request merge. This occurred primarily when using the merge commit strategy, as GitHub API indexing delays prevented the immediate association of a commit with a pull request. As a consequence, the `pull_request_number` variable was not populated, causing intermittent failures for subsequent tasks. With this update, an exponential backoff retry mechanism is implemented to detect merge commits and retry the API call. As a result, the `pull_request_number` variable is correctly populated even when there is minor provider latency.
- GitLab push events evaluate all modified files for trigger filtering
Before this update, when processing GitLab push events, the `files` CEL variable and the `pathChanged()` method evaluated only the first 20 modified files because API pagination was not implemented. Pipelines relying on file-based filtering might not have triggered if relevant changes were beyond that limit. With this update, all modified files from the GitLab API are retrieved and evaluated during CEL execution.
- GitLab commit statuses correctly reflect pipeline state
Before this update, when a forked merge request required `/ok-to-test` approval, GitLab displayed a "running" status instead of "pending." Additionally, the parent status entry was not updated as the pipeline progressed. With this update, the GitLab provider correctly maps pending states and updates parent status entries throughout the pipeline lifecycle.
- GitLab merge request comments are limited to permission failures
Before this update, a comment was posted to a GitLab merge request whenever a commit status update failed, regardless of the error type. This behavior could produce misleading "pipeline run started" comments. With this update, a fallback comment is posted only when the commit status update fails due to permission errors.
- The tkn pac cel command handles invalid GitLab input safely
Before this update, running `tkn pac cel -p gitlab` with malformed payloads or headers could cause a `nil pointer dereference` panic. With this update, input validation is performed before processing GitLab data, and the command returns a descriptive error instead of panicking.
- Commit-level re-evaluation of `/ok-to-test` approvals is enforced
Before this update, when the `remember-ok-to-test` parameter was set to `false`, a single `/ok-to-test` approval from an unauthorized user was incorrectly retained for subsequent commits in the same pull request. This behavior could allow unreviewed changes to execute in CI. With this update, permissions are re-evaluated for each new commit when `remember-ok-to-test=false`, and CI execution is blocked until a repository administrator provides approval.
- Skipped push events are logged at the correct level
Before this update, intentionally skipped push events were logged at the `error` level when the commit was associated with an open pull request. This behavior caused confusion during troubleshooting. With this update, these events are logged at the `info` level to reflect expected behavior.
- Bitbucket Cloud displays individual status for each pipeline run triggered by push events
Before this update, when multiple pipeline runs were triggered in a pull request in Bitbucket Cloud, Pipelines as Code used a static key when reporting commit statuses. Each subsequent pipeline run overwrote the previous status. As a consequence, only one build status was visible in Bitbucket Cloud, and you could not determine which individual pipeline run succeeded or failed. With this update, Pipelines as Code uses a unique commit status key for each pipeline run in the format "ApplicationName / PipelineRunName". As a result, each pipeline run reports its status individually to Bitbucket Cloud, allowing you to view results for all triggered pipeline runs.
- CEL expressions correctly evaluate pull request label events
Before this update, `on-cel-expression` annotations were not evaluated during pull request labeling events unless an `on-label` annotation was also present. Pipelines designed to trigger based on label conditions did not start when labels were added. With this update, Abstract Syntax Tree (AST) inspection detects label references within CEL expressions, ensuring correct evaluation during labeling events.
- Deleted or canceled `PipelineRun` resources update Git provider commit status
Before this update, `PipelineRuns` deleted while in a `Running` or `Queued` state remained stuck as pending on Git providers. As a consequence, commit statuses could remain in a pending state indefinitely in the Git provider UI. With this update, the Pipelines as Code finalizer explicitly reports a canceled status to the Git provider when a `PipelineRun` is deleted. As a result, Git provider UIs accurately reflect the terminated lifecycle of the `PipelineRun`.
User interface
- Pipeline run logs preserve whitespace and formatting in the console
Before this update, pipeline run logs in the OpenShift Container Platform console did not preserve whitespace, and long lines were automatically wrapped. Structured log output and tabular data appeared misaligned, which reduced readability. With this update, the log viewer preserves whitespace and provides horizontal scrolling for long lines.
- TaskSidebar displays correctly in the Pipeline Builder
Before this update, the TaskSidebar in the Pipeline Builder lacked a required styling property. As a consequence, the sidebar rendered behind other page elements, with only the header visible. With this update, the required styling is applied to the TaskSidebar component. As a result, the sidebar correctly overlays other components and remains accessible.
1.2.6. Deprecated features
- The openshift-pipelines-client RPM is deprecated
The `openshift-pipelines-client` RPM is deprecated and might be removed in the Pipelines 1.23 release.
- The pipelinerun_status field in the Repository CR is deprecated
The `pipelinerun_status` field in the `Repository` custom resource (CR) is deprecated and might be removed in the Pipelines 1.23 release.
1.2.7. Removed features
- The disable-affinity-assistant field is removed from TektonConfig
The `disable-affinity-assistant` field in the `TektonConfig` custom resource `spec.pipeline` section is removed. This field was deprecated in favor of the `coschedule` feature flag and had no effect since Pipelines 1.0.0. If you previously used the `disable-affinity-assistant` field, migrate to the `coschedule` feature flag to control affinity assistant behavior.
- Public Tekton Hub is removed as a default built-in catalog
The public Tekton Hub (hub.tekton.dev), which served as a default built-in catalog for pipeline resources, is removed and is no longer supported. You can use custom self-hosted Tekton Hub instances instead.