Chapter 1. Red Hat OpenShift Pipelines release notes
For additional information about the OpenShift Pipelines lifecycle and supported platforms, refer to the OpenShift Operator Life Cycles and Red Hat OpenShift Container Platform Life Cycle Policy.
Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Pipelines releases on OpenShift Container Platform.
Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project, which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.
For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines.
1.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
| Status | Description |
|---|---|
| TP | Technology Preview |
| GA | General Availability |
| OpenShift Pipelines (Operator) | Pipelines | Triggers | CLI | Chains | Hub | Pipelines as Code | Results | Manual Approval Gate | OpenShift Version | Support Status |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.16 | 0.62.x | 0.29.x | 0.38.x | 0.22.x (GA) | 1.18.x (TP) | 0.28.x (GA) | 0.12.x (TP) | 0.3.x (TP) | 4.15, 4.16, 4.17 | GA |
| 1.15 | 0.59.x | 0.27.x | 0.37.x | 0.20.x (GA) | 1.17.x (TP) | 0.27.x (GA) | 0.10.x (TP) | 0.2.x (TP) | 4.14, 4.15, 4.16 | GA |
| 1.14 | 0.56.x | 0.26.x | 0.35.x | 0.20.x (GA) | 1.16.x (TP) | 0.24.x (GA) | 0.9.x (TP) | NA | 4.12, 4.13, 4.14, 4.15, 4.16 | GA |
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
1.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
1.3. Release notes for Red Hat OpenShift Pipelines General Availability 1.16
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16 is available on OpenShift Container Platform 4.15 and later versions.
1.3.1. New features
In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.16:
1.3.1.1. Pipelines
With this update, you can configure the resync period for the pipelines controller. For every resync period, the controller reconciles all pipeline runs and task runs, regardless of events. The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
Example of configuring a resync period of 24 hours
```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      deployments:
        tekton-pipelines-controller:
          spec:
            template:
              spec:
                containers:
                - name: tekton-pipelines-controller
                  args:
                  - "-resync-period=24h"
# ...
```
- With this update, when defining a pipeline, you can set the `onError` parameter for a task to `continue`. If you make this setting and the task fails when the pipeline is executed, the pipeline logs the error and continues to the next task. By default, a pipeline fails if a task in it fails.

Example of setting the `onError` parameter. After the `task-that-fails` task fails, the `next-task` task executes

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: example-onerror-pipeline
spec:
  tasks:
  - name: task-that-fails
    onError: continue
    taskSpec:
      steps:
      - image: alpine
        name: exit-with-1
        script: |
          exit 1
  - name: next-task
# ...
```
- With this update, if a task fails, a `finally` task can access the `reason` parameter in addition to the `status` parameter to distinguish whether the failure was allowed. You can access the `reason` parameter through `$(tasks.<task_name>.reason)`. If the failure is allowed, the `reason` is set to `FailureIgnored`. If the failure is not allowed, the `reason` is set to `Failed`. You can use this additional information to identify that checks failed but that the failure can be ignored.
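For illustration, a minimal sketch of a `finally` task that reads the `reason` value of an allowed failure; the pipeline, task, and parameter names are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: example-reason-pipeline # hypothetical name
spec:
  tasks:
  - name: check-task # hypothetical task that fails, but the failure is allowed
    onError: continue
    taskSpec:
      steps:
      - name: fail-step
        image: alpine
        script: |
          exit 1
  finally:
  - name: report-reason # hypothetical reporting task
    params:
    - name: reason
      # FailureIgnored for an allowed failure, Failed otherwise
      value: $(tasks.check-task.reason)
    taskSpec:
      params:
      - name: reason
      steps:
      - name: print
        image: alpine
        script: |
          echo "check-task finished with reason: $(params.reason)"
```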
- With this update, larger results are supported through sidecar logs as an alternative to the default configuration, which limits results to 4 KB per step and 12 KB per task run. To enable larger results using sidecar logs, set the `pipeline.options.configMaps.feature-flags.data.results-from` spec to `sidecar-logs` in the `TektonConfig` CR.

Example of enabling larger results using sidecar logs

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      configMaps:
        feature-flags:
          data:
            results-from: "sidecar-logs"
# ...
```
- Before this update, parameter propagation was allowed in `PipelineRun` and `TaskRun` resources, but not in the `Pipeline` resource. With this update, you can propagate `params` in the `Pipeline` resource down to the inlined pipeline `tasks` and its inlined `steps`. Wherever a resource, such as a `PipelineTask` or `StepAction`, is referenced, you must pass the parameters explicitly.

Example of using `params` within a pipeline

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-propagated-params
spec:
  params:
  - name: HELLO
    default: "Hello World!"
  - name: BYE
    default: "Bye World!"
  tasks:
  - name: echo-hello
    taskSpec:
      steps:
      - name: echo
        image: ubuntu
        script: |
          #!/usr/bin/env bash
          echo "$(params.HELLO)"
  - name: echo-bye
    taskSpec:
      steps:
      - name: echo-action
        ref:
          name: step-action-echo
        params:
        - name: msg
          value: "$(params.BYE)"
# ...
```
With this update, you can use the task run or pipeline run definition to configure the compute resources for steps and sidecars in a task.
Example task for configuring resources

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: side-step
spec:
  steps:
  - name: test
    image: docker.io/alpine:latest
  sidecars:
  - name: side
    image: docker.io/linuxcontainers/alpine:latest
# ...
```

Example `TaskRun` definition that configures the resources

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: test-sidestep
spec:
  taskRef:
    name: side-step
  stepSpecs:
  - name: test
    computeResources:
      requests:
        memory: 1Gi
  sidecarSpecs:
  - name: side
    computeResources:
      requests:
        cpu: 100m
      limits:
        cpu: 500m
# ...
```
1.3.1.2. Operator
- With this update, OpenShift Pipelines includes the `git-clone` `StepAction` definition for a step that clones a Git repository. Use the HTTP resolver to reference this definition. The URL for the definition is `https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml`. The `StepAction` definition is also installed in the `openshift-pipelines` namespace. However, the cluster resolver does not support `StepAction` definitions.

Example usage of the `git-clone` step action in a task

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: clone-repo-anon
spec:
  params:
  - name: url
    description: The URL of the Git repository to clone
  workspaces:
  - name: output
    description: The git repo will be cloned onto the volume backing this Workspace.
  steps:
  - name: clone-repo-anon-step
    ref:
      resolver: http
      params:
      - name: url
        value: https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml
    params:
    - name: URL
      value: $(params.url)
    - name: OUTPUT_PATH
      value: $(workspaces.output.path)
# ...
```
- With this update, the `openshift-pipelines` namespace includes versioned tasks alongside standard tasks. For example, there is a `buildah` standard task and a `buildah-1-16-0` versioned task. While the standard task might be updated in subsequent releases, the versioned task remains exactly the same as it was in a specified version, except for the correction of errors.
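For illustration, a minimal sketch of pinning a task run to the versioned task; this assumes the cluster resolver is enabled, and the `TaskRun` name is hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: buildah-pinned-run # hypothetical name
spec:
  taskRef:
    resolver: cluster
    params:
    - name: kind
      value: task
    - name: name
      value: buildah-1-16-0 # versioned task, frozen at the 1.16.0 release
    - name: namespace
      value: openshift-pipelines
  # params and workspaces required by the buildah task are omitted here
```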
- With this update, you can configure the `FailurePolicy`, `TimeoutSeconds`, and `SideEffects` options for webhooks for several components of OpenShift Pipelines by using the `TektonConfig` CR. The following example shows the configuration for the `pipeline` component. You can use similar configuration for webhooks in the `triggers`, `pipelinesAsCode`, and `hub` components.

Example configuration of webhooks options for the `pipeline` component

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      webhookConfigurationOptions:
        validation.webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
# ...
```
1.3.1.3. Triggers
- With this update, the `readOnlyRootFilesystem` parameter for the triggers controller, webhook, Core Interceptor, and event listener is set to `true` by default to improve security and avoid being flagged by the security scanner.
- With this update, you can configure OpenShift Pipelines triggers to run event listeners as a non-root user within the container. To set this option, set the parameters in the `TektonConfig` CR as shown in the following example:

Example of configuring trigger event listeners to run as non-root

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    options:
      disabled: false
      configMaps:
        config-defaults-triggers:
          data:
            default-run-as-non-root: "true"
            default-run-as-user: "65532"
            default-run-as-group: "65532"
            default-fs-group: "65532"
# ...
```
Optionally, you can set the values of the `default-run-as-user` and `default-run-as-group` parameters to configure the numeric user ID and group ID for running the event listeners in containers. OpenShift Pipelines sets these values in the pod security context and container security context for event listeners. If you use empty values, the default user ID and group ID of `65532` are used.

You can also set the `default-fs-group` parameter to define the `fsGroup` value for the pod security context, which is the group ID that the container processes use for the file system. If you use an empty value, the default group ID of `65532` is used.
is used.-
With this update, in triggers, the
EventListener
pod template now includessecurityContext
settings. Under these settings, you can configureseccompProfile
,runAsUser
,runAsGroup
, andfsGroup
parameters when theel-security-context
flag is set totrue
.
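For illustration, a sketch of setting this flag by using the `TektonConfig` CR; the placement of the flag in the `feature-flags-triggers` ConfigMap is an assumption modeled on the trigger defaults shown earlier:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    options:
      configMaps:
        feature-flags-triggers: # assumption: the flag lives in this ConfigMap
          data:
            el-security-context: "true"
# ...
```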
1.3.1.4. Web console
- Before this release, when using the web console, you could not see the timestamp for the logs that OpenShift Pipelines created. With this update, the web console includes timestamps for all OpenShift Pipelines logs.
- With this update, the pipeline run and task run list pages in the web console have a filter for the data source, such as `k8s` and `TektonResults API`.
. - Before this update, when using the web console in the Developer perspective, you could not specify the timeout for pipeline runs. With this update, you can set a timeout while starting the pipeline run in the Developer perspective of the web console.
- Before this update, the Overview pipeline dashboard only appeared when Tekton Results was enabled. All the statistics came from only the Results API. With this update, the Overview pipeline dashboard is available regardless of whether Tekton Results is enabled or not. When Tekton Results is disabled, you can use the dashboard to see the statistics for objects in the cluster.
- With this update, the sample pipelines displayed in the web console use the `v1` version of the OpenShift Pipelines API.
1.3.1.5. CLI
- With this update, you can use the `tkn customrun delete <custom_run_names>` command to delete one or more custom runs.
- With this update, when you run a `tkn <resource> list` command with the `-o yaml` flag, the listed resources are separated with `---` separators to enhance the readability of the output.
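For illustration, usage of these commands with hypothetical resource names:

```terminal
$ tkn customrun delete my-custom-run-1 my-custom-run-2 # hypothetical run names
$ tkn taskrun list -o yaml
```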
1.3.1.6. Pipelines as Code
- With this update, if you create two `PipelineRun` definitions with the same name, Pipelines as Code logs an error and does not run either of these pipeline runs.
- With this update, the Pipelines as Code `pipelines_as_code_pipelinerun_count` metric allows filtering of the `PipelineRun` count by repository or namespace.
- With this update, the `readOnlyRootFilesystem` security context for the Pipelines as Code controller, webhook, and watcher is set to `true` by default to increase security and avoid being flagged by the security scanner.
1.3.1.7. Tekton Chains
- With this update, when using `docdb` storage in Tekton Chains, you can configure the `MONGO_SERVER_URL` value directly in the `TektonConfig` CR as the `storage.docdb.mongo-server-url` setting. Alternatively, you can provide this value using a secret and set the `storage.docdb.mongo-server-url-dir` setting to the directory where the `MONGO_SERVER_URL` file is located.

Example of creating a secret with the `MONGO_SERVER_URL` value

```terminal
$ oc create secret generic mongo-url -n tekton-chains \
  --from-file=MONGO_SERVER_URL=/home/user/MONGO_SERVER_URL
```

Example of configuring the `MONGO_SERVER_URL` value using a secret

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    disabled: false
    storage.docdb.mongo-server-url-dir: /tmp/mongo-url
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - name: tekton-chains-controller
                  volumeMounts:
                  - mountPath: /tmp/mongo-url
                    name: mongo-url
                volumes:
                - name: mongo-url
                  secret:
                    secretName: mongo-url
# ...
```
- With this update, when using KMS signing in Tekton Chains, instead of providing the KMS authentication token value directly in the configuration, you can provide the token value as a secret by using the `signers.kms.auth.token-path` setting.

To create a KMS token secret, enter the following command:

```terminal
$ oc create secret generic <secret_name> -n tekton-chains \
  --from-file=KMS_AUTH_TOKEN=/home/user/KMS_AUTH_TOKEN
```

Replace `<secret_name>` with any name. The following example uses a KMS secret called `kms-secrets`.

Example configuration of the KMS token value using a secret called `kms-secrets`

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    disabled: false
    signers.kms.auth.token-path: /etc/kms-secrets/KMS_AUTH_TOKEN
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - name: tekton-chains-controller
                  volumeMounts:
                  - mountPath: /etc/kms-secrets
                    name: kms-secrets
                volumes:
                - name: kms-secrets
                  secret:
                    secretName: kms-secrets
# ...
```
With this update, you can configure a list of namespaces as an argument to the Tekton Chains controller. If you provide this list, Tekton Chains watches pipeline runs and task runs only in the specified namespaces. If you do not provide this list, Tekton Chains watches pipeline runs and task runs in all namespaces.
Example configuration for watching only the `dev` and `test` namespaces

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    disabled: false
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - args:
                  - --namespace=dev,test
                  name: tekton-chains-controller
# ...
```
1.3.1.8. Tekton Results
- Before this update, Tekton Results used the `v1beta1` API format to store `TaskRun` and `PipelineRun` object records. With this update, Tekton Results uses the `v1` API format to store `TaskRun` and `PipelineRun` object records.
- With this update, Tekton Results can automatically convert existing records to the `v1` API format. To enable such conversion, set parameters in the `TektonResult` CR as shown in the following example:

Example of configuring Tekton Results to convert existing records to the `v1` API format

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  options:
    deployments:
      tekton-results-api:
        spec:
          template:
            spec:
              containers:
              - name: api
                env:
                - name: CONVERTER_ENABLE
                  value: "true"
                - name: CONVERTER_DB_LIMIT
                  value: "256" # Number of records to convert at the same time in a single transaction
# ...
```
- With this update, Tekton Results supports fetching forwarded logs from third-party logging APIs. You can enable the logging API through the `TektonResult` CR by setting `logs_api` to `true` and `logs_type` to `Loki`.
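A minimal sketch of these settings; the exact placement of the `logs_api` and `logs_type` fields in the `TektonResult` spec is an assumption modeled on the other examples in this section:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  targetNamespace: tekton-pipelines
  logs_api: true  # assumption: enables the logging API
  logs_type: Loki # assumption: fetch logs forwarded to Loki
# ...
```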
- With this update, you can configure automatic pruning of the Tekton Results database. You can specify the number of days for which records must be stored. You can also specify the schedule for running the pruner job that removes older records. To set these parameters, edit the `TektonResult` CR, as shown in the following example:

Example of configuring automatic pruning of the Tekton Results database

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  options:
    configMaps:
      config-results-retention-policy:
        data:
          runAt: "3 5 * * 0" # Schedule for the pruner job, in cron format
          maxRetention: "30" # Number of days to retain records
# ...
```
- With this update, you can configure Tekton Results to store event logs for pipelines and tasks. To enable storage of event logs, edit the `TektonResult` CR, as shown in the following example:

Example of configuring storage of event logs

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  options:
    deployments:
      tekton-results-watcher:
        spec:
          template:
            spec:
              containers:
              - name: watcher
                args:
                - "--store_event=true"
# ...
```
With this update, you can configure Tekton Results to use the OpenShift Container Platform Cluster Log Forwarder to store all log data in a LokiStack instance, instead of placing it directly on a storage volume. This option enables scaling to a higher rate of pipeline runs and task runs.
To configure Tekton Results to use the OpenShift Container Platform Cluster Log Forwarder to store all log data in a LokiStack instance, you must deploy LokiStack in your cluster by using the Loki Operator and also install the OpenShift Logging Operator. Then you must create a `ClusterLogForwarder` CR in the `openshift-logging` namespace by using one of the following YAML manifests:

YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 6

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  inputs:
  - application:
      selector:
        matchLabels:
          app.kubernetes.io/managed-by: tekton-pipelines
    name: only-tekton
    type: application
  managementState: Managed
  outputs:
  - lokiStack:
      labelKeys:
        application:
          ignoreGlobal: true
          labelKeys:
          - log_type
          - kubernetes.namespace_name
          - openshift_cluster_id
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    name: default-lokistack
    tls:
      ca:
        configMapName: openshift-service-ca.crt
        key: service-ca.crt
    type: lokiStack
  pipelines:
  - inputRefs:
    - only-tekton
    name: default-logstore
    outputRefs:
    - default-lokistack
  serviceAccount:
    name: collector
# ...
```
YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 5

```yaml
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: only-tekton
    application:
      selector:
        matchLabels:
          app.kubernetes.io/managed-by: tekton-pipelines
  pipelines:
  - name: enable-default-log-store
    inputRefs: [ only-tekton ]
    outputRefs: [ default ]
# ...
```
Finally, in the `TektonResult` CR in the `openshift-pipelines` namespace, set the following additional parameters:

- `loki_stack_name`: The name of the `LokiStack` CR, typically `logging-loki`.
- `loki_stack_namespace`: The name of the namespace where LokiStack is deployed, typically `openshift-logging`.

Example of configuring LokiStack log forwarding in the `TektonResult` CR

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  targetNamespace: tekton-pipelines
# ...
  loki_stack_name: logging-loki
  loki_stack_namespace: openshift-logging
# ...
```
1.3.2. Breaking changes
- With this update, the metric name for the `EventListener` object for pipelines triggers that counts received events is changed from `eventlistener_event_count` to `eventlistener_event_received_count`.
1.3.3. Known issues
- The `jib-maven` `ClusterTask` does not work on OpenShift Container Platform version 4.16 and later.
1.3.4. Fixed issues
- Before this update, when you uninstalled Tekton Hub by deleting the `TektonHub` CR, the pod of the `hub-db-migration` job was not deleted. With this update, uninstalling Tekton Hub deletes the pod.
- Before this update, when you used Tekton Results to store pod logs from pipelines and tasks, the operation to store the logs sometimes failed. The logs would include the `UpdateLog` action failing with the `canceled context` error. With this update, the operation completes correctly.
- Before this update, when you passed a parameter value to a pipeline or task and the value included more than one variable with both full and short reference formats, for example, `$(tasks.task-name.results.variable1) + $(variable2)`, OpenShift Pipelines did not interpret the value correctly. The pipeline run or task run could stop execution and the Pipelines controller could crash. With this update, OpenShift Pipelines interprets the value correctly and the pipeline run or task run completes.
- Before this update, Tekton Chains failed to generate correct attestations when a task run included multiple tasks with the same name. For instance, when using a matrix of tasks, the attestation was generated only for the first image. With this update, Tekton Chains generates attestations for all tasks within the task run, ensuring complete coverage.
- Before this update, when you used the `skopeo-copy` task defined in the OpenShift Pipelines installation namespace and set its `VERBOSE` parameter to `false`, the task failed. With this update, the task completes normally.
- Before this update, when using Pipelines as Code, if you set the `concurrency_limit` spec in the global `Repository` CR named `pipelines-as-code` in the `openshift-pipelines` or `pipelines-as-code` namespace, which provides default settings for all `Repository` CRs, the Pipelines as Code watcher crashed. With this update, the Pipelines as Code watcher operates correctly with this setting.
- Before this update, all tasks in OpenShift Pipelines included an extra step compared to the cluster tasks of the same name that were available in previous versions of OpenShift Pipelines. This extra step increased the load on the cluster. With this update, the tasks no longer include the extra step, because it is integrated into the first step.
- Before this update, when you used one of the `s2i-*` tasks defined in the OpenShift Pipelines installation namespace and set its `CONTEXT` parameter, the task did not interpret the parameter correctly and the task failed. With this update, the task interprets the `CONTEXT` parameter correctly and completes successfully.
- Before this update, in Tekton Chains the in-toto provenance metadata (the `URI` and `Digest` values) was incomplete. The values contained only the information of remote `Pipeline` and `Task` resources, but were missing the information of the remote `StepAction` resource. With this update, the provenance of the remote `StepAction` resource is recorded in the task run status and inserted into the in-toto provenance, which results in complete in-toto provenance metadata.
Before this update, you could modify some of the parameters in the
spec
field of thePipelineRun
andTaskRun
resources that should not be modifiable after the resources were created. With this update, you can only modify the allowed fields after the pipeline run and task run are created, such asstatus
andstatusMessage
fields. -
Before this update, if a step action parameter was an
array
type but astring
value was passed in a task, there was no error indicating inconsistent parameter types and the default parameter value was used instead. With this update, an error is added to indicate the inconsistent values:invalid parameter substitution: %s. Please check the types of the default value and the passed value
. -
Before this update, task runs and pipeline runs were deleted by an external pruner when logs were streamed through the watcher. With this update, a finalizer is added to Tekton Results for
TaskRun
andPipelineRun
objects to ensure that the runs are stored and not deleted. The runs are stored either as records or until the deadline has passed, which is calculated as the completion time plus thestore_deadline
time. The finalizer does not prevent deletion if legacy log streaming from the watcher or pruner is enabled. -
Before this update, the web console supported the
v1beta1
API format to display theTaskRun
andPipelineRun
object records that are stored in Tekton Results. With this update, the console supports thev1
API format to displayTaskRun
andPipelineRun
object records stored in Tekton Results. -
Before this update, when using Pipelines as Code, if different
PipelineRun
definitions used the same task name but different versions, for example when fetching tasks from Tekton Hub,the wrong version was sometimes triggered, because Pipelines as Code used the same task version for all pipeline runs. With this update, Pipelines as Code triggers the correct version of the referenced task. - Before this update, when you used a resolver to reference remote pipelines or tasks, transient communication errors caused immediate failure retrieving those remote references. With this update, the resolver requeues the retrieval and eventually retries the retrieval.
- Before this update, Tekton Results could use an increasing amount of memory when storing log information for pipeline runs and task runs. This update fixes the memory leak and Tekton Results uses a normal amount of memory.
- Before this update, when using Pipelines as Code, if your `.tekton` directory contained a pipeline that was not referenced by any `PipelineRun` definition triggered in the event, Pipelines as Code attempted to fetch all the required tasks for that pipeline even though it was not run. With this update, Pipelines as Code does not try to resolve pipelines that are not referenced in any pipeline run triggered by the current event.
1.3.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.1 is available on OpenShift Container Platform 4.15 and later versions.
1.3.5.1. Fixed issues
- Before this update, in the Pipelines overview page of the web console, users that do not have access to all namespaces could select All in the Projects list. The console displayed wrong information for that selection, because the statistics for some of the namespaces were not available to the user. With this update, users who do not have access to all namespaces cannot select All in the Projects list.
- Before this update, when you tried to use the web console to start a pipeline or task that defined a parameter of the type `array`, entering a value for this parameter resulted in an error and you could not start the pipeline or task. With this update, you can use the web console to start a pipeline or task that defines a parameter of the type `array`, and entering a value for this parameter works normally.
- Before this update, when using Pipelines as Code with a Bitbucket Git repository, the Pipelines as Code controller sometimes crashed and a `panic: runtime error: invalid memory address or nil pointer dereference` message was logged. With this update, the Pipelines as Code controller does not crash.
- Before this update, when using Tekton Results, the `tekton-results-watcher` pod sometimes crashed and a `panic: runtime error: invalid memory address or nil pointer dereference` message was logged. With this update, the `tekton-results-watcher` pod does not crash.
- Before this update, if you enabled authentication in Tekton Results, you could not view information from Tekton Results in the web console, because the web console failed to pass an authentication token to the Tekton Results API. With this update, you can view information from Tekton Results in the web console when authentication is enabled.
- Before this update, when viewing information from Tekton Results in the web console, if you scrolled down to the end of a page, the console failed to fetch the next set of records and some of the information was not displayed. With this update, if you scroll to the end of the page, records from Tekton Results load correctly and all information displays correctly in the web console.
1.3.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.16.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.16.2 is available on OpenShift Container Platform 4.15 and later versions.
1.3.6.1. Fixed issues
- Before this update, in OpenShift Pipelines 1.16, you could not cancel a pipeline run by patching the `PipelineRun` object and setting the `spec.status` parameter to `Cancelled` if the first task run in the pipeline run was completed. Instead, an error message was logged: `PipelineRun was cancelled but had errors trying to cancel the TaskRuns and/or Runs`. With this update, the pipeline run is cancelled successfully.
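For reference, a sketch of the patch described above, with hypothetical pipeline run and namespace names:

```terminal
$ oc patch pipelinerun <pipeline_run_name> -n <namespace> \
  --type merge -p '{"spec":{"status":"Cancelled"}}'
```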