Installing and configuring OpenShift Pipelines
Chapter 1. Installing OpenShift Pipelines
This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster.
Prerequisites

- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the oc CLI.
- You have installed the OpenShift Pipelines (tkn) CLI on your local system.
- Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually.
In a cluster with both Windows and Linux nodes, Red Hat OpenShift Pipelines can run on only Linux nodes.
1.1. Installing the Red Hat OpenShift Pipelines Operator in the web console
You can install the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to automatically configure the necessary custom resources (CRs) for your pipelines. This method provides a graphical interface to manage the installation and seamless upgrades of the Operator and its components.
The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev. In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev, tektontriggers.operator.tekton.dev, and tektonaddons.operator.tekton.dev.
If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary.
If you manually changed your existing installation, such as changing the target namespace in the config.operator.tekton.dev CRD instance by modifying the resource named cluster, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator.
The Red Hat OpenShift Pipelines Operator now provides the option to select the components that you want to install by specifying profiles as part of the TektonConfig custom resource (CR). The Operator automatically installs the TektonConfig CR when you install the Operator. The supported profiles are:
- Lite: This profile installs only Tekton Pipelines.
- Basic: This profile installs Tekton Pipelines, Tekton Triggers, Tekton Chains, and Tekton Results.
- All: This is the default profile used when you install the TektonConfig CR. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, Tekton Results, Pipelines as Code, and Tekton add-ons. Tekton add-ons include the ClusterTriggerBindings, ConsoleCLIDownload, ConsoleQuickStart, and ConsoleYAMLSample resources, and the tasks and step action definitions available by using the cluster resolver from the openshift-pipelines namespace.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → OperatorHub.
- Use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile.
- Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install.
On the Install Operator page:
- Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
- Select Automatic for the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) automatically handles future upgrades to the Operator. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Select an Update Channel.
  - The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Currently, it is the default channel for installing the Red Hat OpenShift Pipelines Operator.
  - To install a specific version of the Red Hat OpenShift Pipelines Operator, cluster administrators can use the corresponding pipelines-<version> channel. For example, to install the Red Hat OpenShift Pipelines Operator version 1.8.x, you can use the pipelines-1.8 channel.

    Note: Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are not available. However, in OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator.
- Click Install. You will see the Operator listed on the Installed Operators page.

  Note: The Operator installs automatically into the openshift-operators namespace.
- Verify that the Status displays Succeeded Up to date to confirm successful installation of the Red Hat OpenShift Pipelines Operator.

  Warning: The status might show as Succeeded Up to date even if the installation of other components is still in progress. Therefore, it is important to verify the installation manually in the terminal.
- Verify that the Red Hat OpenShift Pipelines Operator installed all components successfully. Log in to the cluster in the terminal, and run the following command:

  $ oc get tektonconfig config

  Example output:

  NAME     VERSION   READY   REASON
  config   1.21.0    True

  If the READY condition is True, the Operator and its components installed successfully.
Additionally, check the components' versions by running the following command:
  $ oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac

  Example output:

  NAME                                          VERSION   READY   REASON
  tektonpipeline.operator.tekton.dev/pipeline   v0.47.0   True

  NAME                                        VERSION   READY   REASON
  tektontrigger.operator.tekton.dev/trigger   v0.23.1   True

  NAME                                    VERSION   READY   REASON
  tektonchain.operator.tekton.dev/chain   v0.16.0   True

  NAME                                    VERSION   READY   REASON
  tektonaddon.operator.tekton.dev/addon   1.11.0    True

  NAME                                                             VERSION   READY   REASON
  openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code   v0.19.0   True
1.2. Installing the OpenShift Pipelines Operator by using the CLI
You can install the Red Hat OpenShift Pipelines Operator from the OperatorHub by using the command-line interface (CLI) to manage your installation programmatically. To install the Operator, you create a Subscription object that subscribes a namespace to the Operator and automates the deployment process.
Procedure
- Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-pipelines-operator
    namespace: openshift-operators
  spec:
    channel: <channel_name>
    name: openshift-pipelines-operator-rh
    source: redhat-operators
    sourceNamespace: openshift-marketplace

  spec.channel: Name of the channel that you want to subscribe to. The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. To install a specific version, use the corresponding pipelines-<version> channel; for example, the channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7.
  spec.name: Name of the Operator to subscribe to.
  spec.source: Name of the CatalogSource object that provides the Operator.
  spec.sourceNamespace: Namespace of the CatalogSource object. Use openshift-marketplace for the default OperatorHub catalog sources.
- Create the Subscription object by running the following command:

  $ oc apply -f sub.yaml

  The subscription installs the Red Hat OpenShift Pipelines Operator into the openshift-operators namespace. The Operator automatically installs OpenShift Pipelines into the default openshift-pipelines target namespace.
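After the subscription is applied, you can confirm the installation from the terminal with the same `oc get tektonconfig config` command shown in the web console procedure. The snippet below is an illustrative sketch for scripting that check: `check_ready` is a hypothetical helper (not part of the product), and `sample_output` stands in for the real command output.

```shell
# Sketch only: check_ready is a hypothetical helper that reads
# "oc get tektonconfig config" tabular output from stdin and reports
# whether the READY column on the data row is True.
check_ready() {
  awk 'NR==2 { if ($3 == "True") print "installed"; else print "not-ready" }'
}

# Sample output mirroring the docs; on a live cluster you would pipe
# the real command instead: oc get tektonconfig config | check_ready
sample_output='NAME     VERSION   READY   REASON
config   1.21.0    True'

echo "$sample_output" | check_ready
```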
1.3. Red Hat OpenShift Pipelines Operator in a restricted environment
You can use the Red Hat OpenShift Pipelines Operator to support the installation of pipelines in a restricted network environment. The Operator automatically configures proxy settings for your pipeline containers and resources, ensuring they can operate securely within your network constraints.
The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines, TektonTriggers, Controllers, Webhooks, and Operator Proxy Webhook resources.
By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object.
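For example, to opt a namespace out of the proxy webhook, you can add the label to its Namespace object. The namespace name below is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace   # illustrative name, not part of the product
  labels:
    operator.tekton.dev/disable-proxy: "true"
```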
Chapter 2. Uninstalling OpenShift Pipelines
Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps:
- Delete the Custom Resources (CRs) for the optional components, TektonHub and TektonResult, if these CRs exist, and then delete the TektonConfig CR.

  Important: If you uninstall the Operator without removing the CRs of optional components, you cannot remove the components later.
- Uninstall the Red Hat OpenShift Pipelines Operator.
- Delete the Custom Resource Definitions (CRDs) of the operator.tekton.dev group.
Uninstalling only the Operator will not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed.
2.1. Deleting the OpenShift Pipelines custom resources
You can remove the OpenShift Pipelines custom resources (CRs) to clean up the configuration before uninstalling the Operator. This involves deleting optional components such as TektonHub and TektonResult, followed by the main TektonConfig CR.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Type TektonHub in the Filter by name field to search for the TektonHub Custom Resource Definition (CRD).
- Click the name of the TektonHub CRD to display the details page for the CRD.
- Click the Instances tab.
- If an instance is displayed, click the Options menu for the displayed instance.
- Select Delete TektonHub.
- Click Delete to confirm the deletion of the CR.
- Repeat these steps, searching for TektonResult and then TektonConfig in the Filter by name box. If any instances are found for these custom resource definitions (CRDs), delete these instances.
Deleting the CRs also deletes the Red Hat OpenShift Pipelines components and all the tasks and pipelines on the cluster.
If you uninstall the Operator without removing the TektonHub and TektonResult CRs, you cannot remove the Tekton Hub and Tekton Results components later.
2.2. Uninstalling the Red Hat OpenShift Pipelines Operator
You can uninstall the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to remove the OpenShift Pipelines service from your cluster. This process involves deleting the Operator subscription and its associated operand instances.
Procedure
- From the Operators → OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator.
- Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed.
- In the Red Hat OpenShift Pipelines Operator description page, click Uninstall.
- In the Uninstall Operator? window, select Delete all operand instances for this operator, and then click Uninstall.
When you uninstall the OpenShift Pipelines Operator, the uninstallation process deletes all resources within the openshift-pipelines target namespace where OpenShift Pipelines is installed, including the secrets you configured.
2.3. Deleting the custom resource definitions of the operator.tekton.dev group
You can delete the operator.tekton.dev custom resource definitions (CRDs) to fully remove all OpenShift Pipelines traces from your cluster. This step ensures that no residual definitions remain after the Operator uninstallation.
Delete the CustomResourceDefinitions of the operator.tekton.dev group. The Red Hat OpenShift Pipelines Operator creates these CRDs by default during installation.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Type operator.tekton.dev in the Filter by name box to search for the CRDs in the operator.tekton.dev group.
- To delete each of the displayed CRDs, complete the following steps:
  - Click the Options menu for the CRD.
  - Select Delete CustomResourceDefinition.
  - Click Delete to confirm the deletion of the CRD.
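The same CRDs can also be removed from the command line. The snippet below is a sketch: `sample_crds` stands in for real `oc get crd -o name` output, and the grep pattern selects only CRDs in the operator.tekton.dev group. On a live cluster you would pipe the filtered names to `oc delete` instead.

```shell
# Sketch: select only CRDs in the operator.tekton.dev group.
# sample_crds stands in for the output of: oc get crd -o name
sample_crds='customresourcedefinition.apiextensions.k8s.io/tektonconfigs.operator.tekton.dev
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev
customresourcedefinition.apiextensions.k8s.io/tektontriggers.operator.tekton.dev'

# On a live cluster:
#   oc get crd -o name | grep 'operator\.tekton\.dev$' | xargs oc delete
echo "$sample_crds" | grep 'operator\.tekton\.dev$'
```

Note that the filter anchors on the `operator.tekton.dev` group suffix, so CRDs in other Tekton groups (such as `pipelineruns.tekton.dev`) are left untouched.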
Chapter 3. Customizing configurations in the TektonConfig custom resource
In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR):
- Optimizing OpenShift Pipelines performance, including high-availability mode for the OpenShift Pipelines controller
- Configuring the Red Hat OpenShift Pipelines control plane
- Changing the default service account
- Disabling the service monitor
- Configuring pipeline resolvers
- Disabling pipeline templates
- Disabling the integration of Tekton Hub
- Disabling the automatic creation of RBAC resources
- Pruning of task runs and pipeline runs
3.1. Prerequisites
- You have installed the Red Hat OpenShift Pipelines Operator.
3.2. Performance tuning using the TektonConfig custom resource
You can tune the performance and high availability (HA) of the OpenShift Pipelines controller by editing the TektonConfig custom resource (CR). You can adjust parameters such as replica counts, buckets, and API query limits to optimize the controller for your specific workload requirements.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
pipeline:
performance:
disable-ha: false
buckets: 7
replicas: 5
threads-per-controller: 2
kube-api-qps: 5.0
kube-api-burst: 10
All fields are optional. If you set them, the Red Hat OpenShift Pipelines Operator includes most of the fields as arguments in the openshift-pipelines-controller deployment under the openshift-pipelines-controller container. The OpenShift Pipelines Operator also updates the buckets field in the config-leader-election config map under the openshift-pipelines namespace.
If you do not specify the values, the OpenShift Pipelines Operator does not update those fields and applies the default values for the OpenShift Pipelines controller.
If you change or remove any of the performance fields, the OpenShift Pipelines Operator updates the openshift-pipelines-controller deployment and the config-leader-election config map (if the buckets field changed) and re-creates openshift-pipelines-controller pods.
High-availability (HA) mode applies to the OpenShift Pipelines controller, which creates and starts pods based on pipeline run and task run definitions. Without HA mode, a single pod executes these operations, potentially creating significant delays under a high load.
In HA mode, OpenShift Pipelines uses several pods (replicas) to run these operations. Initially, OpenShift Pipelines assigns every controller operation into a bucket. Each replica picks operations from one or more buckets. If two replicas could pick the same operation at the same time, the controller internally determines a leader that executes this operation.
HA mode does not affect execution of task runs after creating the pods.
| Name | Description | Default value for the OpenShift Pipelines controller |
|---|---|---|
| disable-ha | Enable or disable the HA mode. By default, the system enables the HA mode. | false |
| buckets | In HA mode, the number of buckets used to process controller operations. The maximum value is 10. | 1 |
| replicas | In HA mode, the number of pods created to process controller operations. Set this value to the same or lower number than the buckets value. | 1 |
| threads-per-controller | The number of threads (workers) to use when processing the work queue of the OpenShift Pipelines controller. | 2 |
| kube-api-qps | The maximum queries per second (QPS) to the cluster control plane from the REST client. | 5.0 |
| kube-api-burst | The maximum burst for a throttle. | 10 |
The OpenShift Pipelines Operator does not control the number of replicas of the OpenShift Pipelines controller. The replicas setting of the deployment determines the number of replicas. For example, to change the number of replicas to 3, enter the following command:
$ oc --namespace openshift-pipelines scale deployment openshift-pipelines-controller --replicas=3
The OpenShift Pipelines controller multiplies the kube-api-qps and kube-api-burst fields by 2. For example, if the kube-api-qps and kube-api-burst values are 10, the actual QPS and burst values become 20.
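The doubling described above can be shown with a quick calculation. The values here mirror the example in the text; this is an illustration of the arithmetic only, not product code:

```shell
# The OpenShift Pipelines controller doubles the configured QPS and burst.
configured_qps=10
configured_burst=10
echo "effective QPS: $((configured_qps * 2)), effective burst: $((configured_burst * 2))"
```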
3.3. Configuring the Red Hat OpenShift Pipelines control plane
You can configure the OpenShift Pipelines control plane to suit your operational needs by editing the TektonConfig custom resource (CR). Customize settings such as metrics collection, sidecar injection, and service account defaults directly through the OpenShift Container Platform web console as needed.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Use the Search by name box to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD). Click TektonConfig to see the CRD details page.
- Click the Instances tab.
- Click the config instance to see the TektonConfig CR details.
- Click the YAML tab.
- Edit the TektonConfig YAML file based on your requirements.

  Example TektonConfig CR

  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  spec:
    pipeline:
      running-in-environment-with-injected-sidecars: true
      metrics.taskrun.duration-type: histogram
      metrics.pipelinerun.duration-type: histogram
      await-sidecar-readiness: true
      params:
        - name: enableMetrics
          value: 'true'
      default-service-account: pipeline
      require-git-ssh-secret-known-hosts: false
      enable-tekton-oci-bundles: false
      metrics.taskrun.level: task
      metrics.pipelinerun.level: pipeline
      enable-api-fields: stable
      enable-provenance-in-status: false
      enable-custom-tasks: true
      disable-creds-init: false
      disable-affinity-assistant: true
3.3.1. Modifiable fields with default values
You can change various default configuration fields in the TektonConfig custom resource (CR) to tailor the behavior of your pipelines. This reference lists the available fields, such as sidecar injection and metric levels, along with their default values and descriptions.
The following list includes all modifiable fields with their default values in the TektonConfig CR:
- running-in-environment-with-injected-sidecars (default: true): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start.

  Note: For clusters that use injected sidecars, setting this field to false can lead to unexpected behavior.
- await-sidecar-readiness (default: true): Set this field to false to stop OpenShift Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. When set to false, tasks run in environments that do not support the downwardAPI volume type.
- default-service-account (default: pipeline): This field has the default service account name to use for the TaskRun and PipelineRun resources, if none is specified.
- require-git-ssh-secret-known-hosts (default: false): Setting this field to true requires that any Git SSH secret must include the known_hosts field. For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
- enable-tekton-oci-bundles (default: false): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle.
- enable-api-fields (default: stable): You can enable or disable API fields. Acceptable values are stable, beta, or alpha.

  Note: Red Hat OpenShift Pipelines does not support the alpha value.
- enable-provenance-in-status (default: false): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field has metadata about resources used in the task run and pipeline run, such as the source for fetching a remote task or pipeline definition.
- enable-custom-tasks (default: true): Set this field to false to disable the use of custom tasks in pipelines.
- disable-creds-init (default: false): Set this field to true to prevent OpenShift Pipelines from scanning attached service accounts and injecting any credentials into your steps.
- disable-affinity-assistant (default: true): Set this field to false to enable the affinity assistant for each TaskRun resource sharing a persistent volume claim workspace.
You can change the default values of the following metrics fields in the TektonConfig CR:
- metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram): Setting these fields determines the duration type for a task or pipeline run. Acceptable values are gauge or histogram.
- metrics.taskrun.level (default: task): This field determines the level of the task run metrics. Acceptable values are taskrun, task, or namespace.
- metrics.pipelinerun.level (default: pipeline): This field determines the level of the pipeline run metrics. Acceptable values are pipelinerun, pipeline, or namespace.
3.3.2. Optional configuration fields
You can configure optional fields in the TektonConfig custom resource (CR) to enable advanced features or override specific defaults. These fields, such as default timeouts and pod templates, are not set by default and allow for fine-grained control over your pipeline execution environment.
The following fields do not have a default value, and the system considers them only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig CR.
- default-timeout-minutes: This field sets the default timeout for the TaskRun and PipelineRun resources, if you do not specify one when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the system times out and cancels the task run or pipeline run. For example, default-timeout-minutes: 60 sets 60 minutes as the default.
- default-managed-by-label-value: This field has the default value given to the app.kubernetes.io/managed-by label that the system applies to all TaskRun pods, if you specify none. For example, default-managed-by-label-value: tekton-pipelines.
- default-pod-template: This field sets the default TaskRun and PipelineRun pod templates, if you specify none.
- default-cloud-events-sink: This field sets the default CloudEvents sink used for the TaskRun and PipelineRun resources, if you specify none.
- default-task-run-workspace-binding: This field has the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare.
- default-affinity-assistant-pod-template: This field sets the default PipelineRun pod template used for affinity assistant pods, if you specify none.
- default-max-matrix-combinations-count: This field has the default maximum number of combinations generated from a matrix, if you specify none.
3.4. Changing the default service account for OpenShift Pipelines
You can change the default service account used by OpenShift Pipelines for task and pipeline runs to meet your security or operational requirements. By editing the TektonConfig custom resource (CR), you can specify a different service account for pipelines and triggers.
Example TektonConfig CR
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
pipeline:
default-service-account: pipeline
trigger:
default-service-account: pipeline
enable-api-fields: stable
3.5. Setting labels and annotations for the OpenShift Pipelines installation namespace
You can apply custom labels and annotations to the openshift-pipelines namespace to integrate with your organization’s metadata standards or tools. You can configure these metadata fields in the TektonConfig custom resource (CR) and apply them.
Changing the name of the openshift-pipelines namespace is not supported.
Specify the labels and annotations by adding them to the spec.targetNamespaceMetadata specification in the TektonConfig custom resource (CR).
Example of setting labels and annotations for the openshift-pipelines namespace
apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
name: config
spec:
targetNamespaceMetadata:
labels: {"example-label":"example-value"}
annotations: {"example-annotation":"example-value"}
3.6. Setting the resync period for the pipelines controller
You can configure the resync period for the pipelines controller to optimize resource usage in clusters with a large number of pipeline and task runs. By adjusting this interval in the TektonConfig custom resource, you control how often the controller reconciles all resources regardless of events.
The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster with cluster-admin privileges.
Procedure
- In the TektonConfig custom resource, configure the resync period for the pipelines controller, as shown in the following example.

  Example

  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  spec:
    pipeline:
      options:
        deployments:
          tekton-pipelines-controller:
            spec:
              template:
                spec:
                  containers:
                    - name: tekton-pipelines-controller
                      args:
                        - "-resync-period=24h"

  args: This example sets the resync period to 24 hours.
3.7. Disabling the service monitor
You can disable the service monitor in OpenShift Pipelines if you do not need to expose telemetry data or want to reduce resource consumption. This configuration is managed by setting the enableMetrics parameter to false in the TektonConfig custom resource (CR).
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
pipeline:
params:
- name: enableMetrics
value: 'false'
3.8. Configuring pipeline resolvers
You can enable or disable specific pipeline resolvers, such as the git, cluster, bundle, and hub resolvers, to control how your pipelines fetch resources. These settings are managed within the TektonConfig custom resource (CR), where you can also provide resolver-specific configurations. The following fields control the individual resolvers:

- enable-bundles-resolver
- enable-cluster-resolver
- enable-git-resolver
- enable-hub-resolver
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
pipeline:
enable-bundles-resolver: true
enable-cluster-resolver: true
enable-git-resolver: true
enable-hub-resolver: true
You can also provide resolver-specific configurations in the TektonConfig CR. For example, define the following fields in the map[string]string format to set configurations for each pipeline resolver:
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
pipeline:
bundles-resolver-config:
default-service-account: pipelines
cluster-resolver-config:
default-namespace: test
git-resolver-config:
server-url: localhost.com
hub-resolver-config:
default-tekton-hub-catalog: tekton
3.9. Disabling resolver tasks and pipeline templates
You can disable the automatic installation of resolver tasks and pipeline templates to customize your cluster’s initial state. By modifying the TektonConfig custom resource (CR), you can prevent these default resources from being deployed if they are not required for your environment.
By default, the TektonAddon custom resource (CR) installs resolverTasks and pipelineTemplates resources along with OpenShift Pipelines on the cluster.
Procedure
- Edit the TektonConfig CR by running the following command:

  $ oc edit TektonConfig config
- In the TektonConfig CR, set the resolverTasks and pipelineTemplates parameter values in the .addon.params spec to false:

  Example of disabling resolver task and pipeline template resources

  apiVersion: operator.tekton.dev/v1alpha1
  kind: TektonConfig
  metadata:
    name: config
  spec:
  # ...
    addon:
      params:
        - name: resolverTasks
          value: 'false'
        - name: pipelineTemplates
          value: 'false'
  # ...

  Important: You can set the value of the pipelineTemplates parameter to true only when the value of the resolverTasks parameter is true.
3.10. Disabling the installation of Tekton Triggers
You can disable the automatic installation of Tekton Triggers during the OpenShift Pipelines deployment to manage triggers separately or exclude them from your environment. This is achieved by setting the disabled parameter to true in the TektonConfig custom resource (CR).
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
trigger:
disabled: true
#...
The default setting is false.
3.11. Disabling the integration of Tekton Hub
You can disable the Tekton Hub integration in the OpenShift Container Platform web console Developer perspective to customize the user experience. This setting is controlled by the enable-devconsole-integration parameter in the TektonConfig custom resource (CR).
Example of disabling Tekton Hub
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
hub:
params:
- name: enable-devconsole-integration
value: false
3.12. Migrating from Tekton Hub to Artifact Hub
Tekton Hub previously provided a hosted catalog of prebuilt Tekton resources, including Tasks and Pipelines. This service is deprecated in favor of Artifact Hub, a centralized catalog for Tekton resources. As a result:

- The hub resolver now defaults to the artifact type.
- Tekton Hub (type: tekton) requires additional configuration to continue functioning.
3.12.1. Assess migration impact
You must migrate to Artifact Hub to ensure uninterrupted catalog resolution.
You must migrate if:

- Your Tekton resources reference type: tekton or catalog: Tekton.
- You rely on Tekton Hub as a hosted catalog service.
Use the following scripts to identify which Tekton resources require modification and to verify the hub resolver configuration on your cluster.
# Count resources by type
echo -e "\nResources needing migration:"
find . -type f \( -name "*.yaml" -o -name "*.yml" \) \
-exec grep -l "value: tekton\|value: Tekton" {} \; \
| xargs grep "^kind:" | awk '{print $2}' | sort | uniq -c
# Check cluster hub resolver configuration
echo -e "\nHub resolver configuration:"
kubectl get configmap hubresolver-config -n openshift-pipelines \
-o jsonpath='{.data.default-type}' 2>/dev/null \
|| kubectl get configmap hubresolver-config \
-n tekton-pipelines-resolvers -o jsonpath='{.data.default-type}' 2>/dev/null \
|| echo "Hub resolver config not found (cluster may not be accessible)"
Example output:
Resources needing migration:
1 Pipeline
2 TaskRun
Hub resolver configuration:
tekton
3.12.2. Migrating to Artifact Hub
You can update existing Tekton resources to use Artifact Hub instead of the deprecated Tekton Hub.
Procedure
Identify any `params` sections in your `PipelineRun`, `TaskRun`, or resolver-based resources that include `type: tekton`, `catalog: Tekton`, or non-semver catalog versions.

Remove the `type: tekton` parameter.

Note: Do not add `type: artifact`. The resolver defaults to the `artifact` type automatically.

Update the catalog name to the appropriate Artifact Hub catalog:

- For Tasks: change `catalog: Tekton` to `catalog: tekton-catalog-tasks`
- For Pipelines: change `catalog: Tekton` to `catalog: tekton-catalog-pipelines`
- For StepActions: change `catalog: Tekton` to `catalog: tekton-catalog-stepactions`

Update version values to full semantic versioning (semver). For example, change a version such as `0.8` to `0.8.0`.

Update your resource definitions.
The following examples show how to migrate resolver parameters from Tekton Hub to Artifact Hub.
The following example shows the resolver parameters before the migration:

params:
  - name: type
    value: tekton   # remove this type and value
  - name: catalog
    value: Tekton   # change to tekton-catalog-tasks
  - name: name
    value: git-clone

The following example shows the resolver parameters after the migration:

params:
  # type: artifact is the default and does not need to be specified
  - name: catalog
    value: tekton-catalog-tasks
  - name: name
    value: git-clone

Save the updated files and reapply them to your cluster as needed.
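The edits above can also be scripted. The following is a minimal sketch, assuming GNU sed, Task resources laid out exactly as in the examples above, and the `tekton-catalog-tasks` catalog; the `migrate_file` helper name is hypothetical, and Pipelines or StepActions would need their respective catalog names instead. Review the generated `.bak` backup before committing the change:

```shell
#!/usr/bin/env bash
# Hypothetical helper: migrates hub resolver params in one file from
# Tekton Hub to Artifact Hub. Assumes GNU sed and the exact parameter
# layout shown in the examples above.
migrate_file() {
  local f="$1"
  # Drop the now-unneeded "- name: type / value: tekton" pair
  sed -i.bak '/- name: type/{N;/value: tekton/d;}' "$f"
  # Point the catalog at the Artifact Hub task catalog
  sed -i 's/value: Tekton$/value: tekton-catalog-tasks/' "$f"
  # Expand two-part versions such as 0.8 to full semver 0.8.0
  sed -i -E 's/(value: "?[0-9]+\.[0-9]+)("?)$/\1.0\2/' "$f"
}
```

Run `migrate_file path/to/resource.yaml` for each file that the assessment script flagged, then inspect the diff against the `.bak` backup before reapplying the resource.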
3.12.3. Configuring a private Artifact Hub instance
For disconnected or private environments, configure a custom Artifact Hub endpoint.
Procedure
Update the `hub-resolver-config` value in the `TektonConfig` CR, as in the following example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  pipeline:
    hub-resolver-config:
      default-artifact-hub-url: "https://artifacthub.io"
# ...
When using a private Artifact Hub:
- Verify network connectivity from the resolver pods.
- Configure TLS certificates for HTTPS endpoints.
- Configure authentication, if required.
- Ensure catalog names match those published in your private hub.
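For example, pointing the resolver at a private mirror might look like the following sketch, where the URL is a hypothetical placeholder for your own endpoint:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    hub-resolver-config:
      # Hypothetical private endpoint; replace with your own Artifact Hub URL
      default-artifact-hub-url: "https://artifacthub.example.internal"
```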
3.13. Disabling the automatic creation of RBAC resources
You can disable the automatic creation of cluster-wide RBAC resources by using the Red Hat OpenShift Pipelines Operator to improve security and control over permissions. This is done by setting the createRbacResource parameter to false in the TektonConfig custom resource (CR), preventing the creation of potentially privileged role bindings.
The default installation of the Red Hat OpenShift Pipelines Operator creates many role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege.
Prerequisites
- You have access to the cluster with `cluster-admin` privileges.
- You installed the OpenShift CLI (`oc`).
Procedure
Edit the `TektonConfig` CR by running the following command:

$ oc edit TektonConfig config

In the `TektonConfig` CR, set the `createRbacResource` param value to `false`:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
  - name: createRbacResource
    value: "false"
# ...
3.14. Disabling inline specification of pipelines and tasks
You can disable the inline specification of tasks and pipelines to enforce the use of referenced resources and improve security. By configuring the disable-inline-spec field in the TektonConfig custom resource (CR), you can restrict the use of embedded specs in Pipeline, PipelineRun, and TaskRun resources.
By default, OpenShift Pipelines supports inline specification of pipelines and tasks in the following cases:
You can create a `Pipeline` CR that includes one or more task specifications, as in the following example:

Example of an inline specification in a `Pipeline` CR

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pipelineInline
spec:
  tasks:
  - taskSpec:
      # ...

You can create a `PipelineRun` custom resource (CR) that includes a pipeline specification, as in the following example:

Example of an inline specification in a `PipelineRun` CR

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipelineRunInline
spec:
  pipelineSpec:
    tasks:
      # ...

You can create a `TaskRun` custom resource (CR) that includes a task specification, as in the following example:

Example of an inline specification in a `TaskRun` CR

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: taskRunInline
spec:
  taskSpec:
    steps:
      # ...
You can disable inline specification in some or all of these cases. To disable the inline specification, set the disable-inline-spec field of the .spec.pipeline specification of the TektonConfig CR, as in the following example:
Example configuration that disables inline specification
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    disable-inline-spec: "pipeline,pipelinerun,taskrun"
# ...
You can set the disable-inline-spec parameter to any single value or to a comma-separated list of many values. The following values for the parameter are valid:
| Value | Description |
|---|---|
| `pipeline` | Disables inline specification in `Pipeline` CRs. You cannot use a `Pipeline` CR that includes a task specification. |
| `pipelinerun` | Disables inline specification in `PipelineRun` CRs. You cannot use a `PipelineRun` CR that includes a pipeline specification. |
| `taskrun` | Disables inline specification in `TaskRun` CRs. You cannot use a `TaskRun` CR that includes a task specification. |
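With inline specification disabled, runs must reference existing definitions instead of embedding them. The following is a minimal sketch, where the task name `example-task` is a hypothetical placeholder:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: taskRunRef
spec:
  taskRef:
    name: example-task  # references an existing Task instead of an inline taskSpec
```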
3.15. Configuration of RBAC and Trusted CA flags
You can independently control the creation of RBAC resources and Trusted CA bundle config maps to customize your OpenShift Pipelines installation. The TektonConfig custom resource (CR) provides specific flags, createRbacResource and createCABundleConfigMaps, to manage these components separately.
| Parameter | Description | Default value |
|---|---|---|
| `createRbacResource` | Controls the creation of RBAC resources only. This flag does not affect the Trusted CA bundle config map. | `true` |
| `createCABundleConfigMaps` | Controls the creation of the Trusted CA bundle config map and the Service CA bundle config map. This flag must be set to `true` for these config maps to be created. | `true` |
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  params:
  - name: createRbacResource
    value: "true"
  - name: createCABundleConfigMaps
    value: "true"
  - name: legacyPipelineRbac
    value: "true"

- `params[0].name`: Specifies RBAC resource creation.
- `params[1].name`: Specifies Trusted CA bundle config map creation.
3.16. Automatic pruning of task runs and pipeline runs
You can automatically prune stale TaskRun and PipelineRun resources to free up cluster resources and keep optimal performance. Red Hat OpenShift Pipelines provides a configurable pruner component that removes unused objects based on your defined policies.
You can configure the pruner for your entire installation by using the TektonConfig custom resource, and change the configuration for individual namespaces by using namespace annotations. However, you cannot selectively auto-prune an individual task run or pipeline run in a namespace.
3.16.1. Configuring the pruner
You can configure the default pruner to automatically remove old TaskRun and PipelineRun resources based on a schedule or resource count. By modifying the TektonConfig custom resource (CR), you can set retention limits and pruning intervals to manage resource usage.
The following example corresponds to the default configuration:
Example of the pruner configuration
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
# ...
spec:
  pruner:
    resources:
    - taskrun
    - pipelinerun
    keep: 100
    prune-per-resource: false
    schedule: "* 8 * * *"
    startingDeadlineSeconds: 60
# ...
| Parameter | Description |
|---|---|
| `schedule` | The cron schedule for running the pruner process. The default schedule runs the process at 08:00 every day. For more information about the cron schedule syntax, see Cron schedule syntax in the Kubernetes documentation. |
| `resources` | The resource types to which the pruner applies. The available resource types are `taskrun` and `pipelinerun`. |
| `keep` | The number of most recent resources of every type to keep. |
| `prune-per-resource` | If you set this parameter to `true`, the pruner keeps the `keep` number of runs for every pipeline or task resource. If you set this parameter to `false`, the pruner keeps the `keep` number of runs overall in the namespace. |
| `keep-since` | The maximum time for which to keep resources, in minutes. For example, to retain resources created not more than five days ago, set `keep-since` to `7200`. |
| `startingDeadlineSeconds` | This parameter is optional. If the pruner job does not start at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job does not start within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the next scheduled time. If you do not specify this parameter and the pruner job does not start at the scheduled time, OpenShift Pipelines attempts to start the job at any later time possible. |
The keep and keep-since parameters are mutually exclusive. Use only one of them in your configuration.
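For example, a configuration that uses `keep-since` instead of `keep` might look like the following sketch, which retains only resources created within the last five days:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
    - taskrun
    - pipelinerun
    keep-since: 7200        # minutes; 5 days = 5 * 24 * 60
    schedule: "* 8 * * *"
```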
3.16.2. Annotations for automatically pruning task runs and pipeline runs
You can customize the pruning behavior for specific namespaces by applying annotations to the Namespace resource. These annotations allow you to override global pruning settings, such as retention limits and schedules, for individual projects.
The following namespace annotations have the same meanings as the corresponding keys in the TektonConfig custom resource:
- `operator.tekton.dev/prune.schedule`
- `operator.tekton.dev/prune.resources`
- `operator.tekton.dev/prune.keep`
- `operator.tekton.dev/prune.prune-per-resource`
- `operator.tekton.dev/prune.keep-since`
The operator.tekton.dev/prune.resources annotation accepts a comma-separated list. To prune both task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".
The following additional namespace annotations are available:
- `operator.tekton.dev/prune.skip`: When set to `true`, the namespace for which the annotation is configured is not pruned.
- `operator.tekton.dev/prune.strategy`: Set the value of this annotation to either `keep` or `keep-since`.
For example, the following annotations retain all task runs and pipeline runs created in the last five days and delete the older resources:
Example of auto-pruning annotations
kind: Namespace
apiVersion: v1
# ...
metadata:
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.keep-since: "7200"
# ...
3.17. Enabling the event-driven pruner
You can enable the event-based pruner to delete completed PipelineRun and TaskRun resources in near real-time. By configuring the tektonpruner controller in the TektonConfig custom resource (CR), you can replace the default scheduled pruner with an event-driven approach for more immediate resource cleanup.
You must disable the job-based pruner in the TektonConfig CR before you enable the event-driven pruner. If you enable both pruner types, the deployment readiness status changes to False and the output displays the following error message:
Components not in ready state: Invalid Pruner Configuration!! Both pruners, tektonpruner(event based) and pruner(job based) cannot be enabled simultaneously. Please disable one of them.
Procedure
In your TektonConfig CR, disable the job-based pruner by setting the `spec.pruner.disabled` field to `true`, and enable the event-driven pruner by setting the `spec.tektonpruner.disabled` field to `false`. For example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  pruner:
    disabled: true
# ...
  tektonpruner:
    disabled: false
    options: {}
# ...

After you apply the updated CR, the Operator deploys the `tekton-pruner-controller` pod in the `openshift-pipelines` namespace.

Ensure that the following config maps are present in the `openshift-pipelines` namespace:

| Config map | Purpose |
|---|---|
| `tekton-pruner-default-spec` | Defines the default pruning behavior |
| `pruner-info` | Stores internal runtime data used by the controller |
| `config-logging-tekton-pruner` | Configures logging settings for the pruner |
| `config-observability-tekton-pruner` | Enables observability features such as metrics and tracing |
Verification
To verify that the `tekton-pruner-controller` pod is running, run the following command:

$ oc get pods -n openshift-pipelines

Verify that the output includes the `tekton-pruner-controller` and `tekton-pruner-webhook` pods in the `Running` state.

Example output:

tekton-pruner-controller-<id>   Running
tekton-pruner-webhook-<id>      Running
3.17.1. Configuration of the event-driven pruner
You can fine-tune the event-based pruner by adjusting settings in the TektonConfig custom resource (CR). This reference details the available configuration options, including history limits, time-to-live (TTL) values, and namespace-specific policies.
The following is an example of the TektonConfig CR with the default configuration that uses global pruning rules:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: null
      historyLimit: 10
      namespaces: null
      successfulHistoryLimit: null
      ttlSecondsAfterFinished: null
    options: {}
# ...
- `failedHistoryLimit`: The number of failed runs to retain.
- `historyLimit`: The number of runs to retain. The pruner uses this setting if status-specific limits are not defined.
- `namespaces`: Definition of per-namespace pruning policies, used when you set `enforcedConfigLevel` to `namespace`.
- `successfulHistoryLimit`: The number of successful runs to retain.
- `ttlSecondsAfterFinished`: The time in seconds after completion, after which the pruner deletes resources.
You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, the pruner applies a 60-second time to live (TTL) to resources in the dev-project namespace:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
# ...
You can use the following parameters in the `tektonpruner` section of your `TektonConfig` CR:
| Parameter | Description |
|---|---|
| `ttlSecondsAfterFinished` | Delete resources a fixed number of seconds after they complete. |
| `successfulHistoryLimit` | Retain the specified number of the most recent successful runs. Delete older successful runs. |
| `failedHistoryLimit` | Retain the specified number of the most recent failed runs. Delete older failed runs. |
| `historyLimit` | Apply a generic history limit when status-specific limits are not defined. |
| `enforcedConfigLevel` | Specify the level at which the pruner applies the configuration. Accepted values: `global`, `namespace`, `resource`. |
| `namespaces` | Define per-namespace pruning policies. |
You can use TTL-based pruning to prune resources exceeding set expiration times. Use history-based pruning to prune resources exceeding the configured historyLimit. TTL and history limits operate independently.
- Global configuration of the event-driven pruner

The following example shows the default `TektonConfig` CR configuration, which applies global pruning rules:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: 5
      historyLimit: 10
      successfulHistoryLimit: 5
      ttlSecondsAfterFinished: 3600
    options: {}
# ...

- `failedHistoryLimit`: The number of failed runs to retain.
- `historyLimit`: The number of runs to retain. The pruner uses this setting if status-specific limits are not defined.
- `successfulHistoryLimit`: The number of successful runs to retain.
- `ttlSecondsAfterFinished`: The time in seconds after completion, after which the pruner deletes resources.
- Namespace-level configuration of the event-driven pruner

You can define pruning rules for individual namespaces by setting `enforcedConfigLevel` to `namespace` and configuring policies under the `namespaces` section. In the following example, the pruner applies a 60-second TTL to resources in the `dev-project` and `staging` namespaces:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
        staging:
          ttlSecondsAfterFinished: 60
# ...

- Resource-level configuration of the event-driven pruner
If you have configured namespace-level pruning, you can further configure resource-level pruning rules by creating a `tekton-pruner-namespace-spec` config map in your namespace. Resource-level rules take precedence over global and namespace-level pruning configuration when defined for a specific resource type. When multiple config maps apply to the same resource, the event-driven pruner applies the most specific rule.

The following example defines a TTL and a history limit for both `PipelineRun` and `TaskRun` resources:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-namespace-spec
  namespace: user-specified-namespace
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
data:
  ns-config: |
    ttlSecondsAfterFinished: 300
    historyLimit: 5

The pruner controller requires the `tekton-pruner-namespace-spec` name and the `app.kubernetes.io/part-of: tekton-pruner` and `pruner.tekton.dev/config-type: namespace` labels on all resource-level pruning config maps to process them correctly. The pruner controller ignores config maps that are missing these labels or that use an incorrect name.

- Resource-level configuration of the event-driven pruner with selectors
In the following example, the pruning rule applies only if the resource has both the `priority: high` label and the `compliance: required` annotation. Resources that do not match this selector fall back to the namespace default or to other selectors with lower specificity:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-namespace-spec
  namespace: user-specified-namespace
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
data:
  ns-config: |
    ttlSecondsAfterFinished: 3600
    pipelineRuns:
      - selector:
          - matchLabels:
              priority: high
            matchAnnotations:
              compliance: required
        ttlSecondsAfterFinished: 7776000
        successfulHistoryLimit: 100
        failedHistoryLimit: 100

- Common values
The following tables give recommended TTL and history-limit values for `PipelineRun` and `TaskRun` resources. Use these values to help configure pruning policies that balance cluster performance, resource retention, and operational requirements.

| Time period | Seconds | Use case |
|---|---|---|
| 5 minutes | 300 | Development and testing with rapid iteration |
| 30 minutes | 1800 | Short-lived experiments |
| 1 hour | 3600 | CI pipelines |
| 6 hours | 21600 | Daily builds |
| 1 day | 86400 | Staging environments |
| 7 days | 604800 | Production, short retention |
| 30 days | 2592000 | Compliance, auditing |
| 90 days | 7776000 | Regulated industries |

| Environment | successfulHistoryLimit | failedHistoryLimit | Reason |
|---|---|---|---|
| Development | 3-5 | 5-10 | Fast feedback and reduced storage requirements |
| Staging | 5-10 | 10-20 | Balance retention and resources |
| Production | 10-50 | 20-100 | Audit trail and debugging |
| CI/CD | 3-5 | 10-20 | Recent context for failure analysis |
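For example, a CI-focused tuning that combines values from the tables above might look like the following sketch; treat the numbers as a starting point, not a prescription:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      ttlSecondsAfterFinished: 3600   # 1 hour: CI pipelines
      successfulHistoryLimit: 5       # a few recent successes for context
      failedHistoryLimit: 20          # more failures retained for analysis
```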
3.17.2. Observability metrics of the event-driven pruner
You can monitor the performance and health of the event-based pruner by using the metrics exposed by the tekton-pruner-controller. These metrics, available in OpenTelemetry format, give insights into resource processing, error rates, and reconciliation times for effective troubleshooting and capacity planning.
Resource-level pruning rules configured by using config maps in individual namespaces also emit metrics by using the same labels, allowing you to track pruning at finer granularity.
The exposed metrics fall into the following categories:
- Resource processing
- Performance timing
- State tracking
- Error monitoring
Most pruner metrics use labels to give additional context. You can use these labels in PromQL queries or dashboards to filter and group the metrics.
| Label | Description |
|---|---|
| `namespace` | The Kubernetes namespace of the processed resource. |
| `resource_type` | The Tekton resource type. |
| `status` | The outcome of processing a resource. |
| `operation` | The pruning method that deleted a resource. |
| `reason` | The specific cause for skipping or error outcomes. |
- Resource processing metrics

The event-driven pruner exposes the following resource processing metrics:

| Name | Type | Description | Labels |
|---|---|---|---|
| `tekton_pruner_controller_resources_processed_total` | Counter | Total resources processed | namespace, resource_type, status |
| `tekton_pruner_controller_resources_deleted_total` | Counter | Total resources deleted | namespace, resource_type, operation |
- Performance timing metrics

The event-driven pruner exposes the following performance timing metrics:

| Name | Type | Description | Labels | Buckets |
|---|---|---|---|---|
| `tekton_pruner_controller_reconciliation_duration_seconds` | Histogram | Time spent in reconciliation | namespace, resource_type | 0.1 to 30 seconds |
| `tekton_pruner_controller_ttl_processing_duration_seconds` | Histogram | Time spent processing TTL | namespace, resource_type | 0.1 to 30 seconds |
| `tekton_pruner_controller_history_processing_duration_seconds` | Histogram | Time spent processing history limits | namespace, resource_type | 0.1 to 30 seconds |
- State tracking metrics

The event-driven pruner exposes the following state tracking metrics:

| Name | Type | Description |
|---|---|---|
| `kn_workqueue_adds_total` | Counter | Total resources queued |
| `kn_workqueue_depth` | Gauge | Number of items currently in the queue |
- Error monitoring metrics

The event-driven pruner exposes the following error monitoring metrics:

| Name | Type | Description | Labels |
|---|---|---|---|
| `tekton_pruner_controller_resources_errors_total` | Counter | Total processing errors | namespace, resource_type, reason |
3.18. Setting additional options for webhooks
You can configure advanced webhook options, such as failure policies and timeouts, for OpenShift Pipelines controllers to improve stability and error handling. These settings are applied by using the TektonConfig custom resource (CR) and allow you to customize how admission controllers interact with the Kubernetes API server.
Prerequisites
- You installed the `oc` command-line utility.
- You have logged in to your OpenShift Container Platform cluster with administrator rights for the namespace in which OpenShift Pipelines is installed, typically the `openshift-pipelines` namespace.
Procedure
View the list of webhooks that the OpenShift Pipelines controllers created. There are two types of webhooks: mutating webhooks and validating webhooks.

To view the list of mutating webhooks, enter the following command:

$ oc get MutatingWebhookConfiguration

Example output

NAME                             WEBHOOKS   AGE
annotation.operator.tekton.dev   1          4m20s
proxy.operator.tekton.dev        1          4m20s
webhook.operator.tekton.dev      1          4m22s
webhook.pipeline.tekton.dev      1          4m20s
webhook.triggers.tekton.dev      1          3m50s

To view the list of validating webhooks, enter the following command:

$ oc get ValidatingWebhookConfiguration

Example output

NAME                                     WEBHOOKS   AGE
config.webhook.operator.tekton.dev       1          4m24s
config.webhook.pipeline.tekton.dev       1          4m22s
config.webhook.triggers.tekton.dev       1          3m52s
namespace.operator.tekton.dev            1          4m22s
validation.pipelinesascode.tekton.dev    1          2m49s
validation.webhook.operator.tekton.dev   1          4m24s
validation.webhook.pipeline.tekton.dev   1          4m22s
validation.webhook.triggers.tekton.dev   1          3m52s
In the `TektonConfig` custom resource (CR), add configuration for mutating and validating webhooks under the section for each of the controllers as necessary, as shown in the following examples. Use the `validation.webhook.pipeline.tekton.dev` spec for the validating webhooks and the `webhook.pipeline.tekton.dev` spec for the mutating webhooks.

Important

- You cannot set configuration for `operator` webhooks.
- All settings are optional. For example, you can set the `timeoutSeconds` parameter and omit the `failurePolicy` and `sideEffects` parameters.

Example settings for the Pipelines controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      webhookConfigurationOptions:
        validation.webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Triggers controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  triggers:
    options:
      webhookConfigurationOptions:
        validation.webhook.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Pipelines as Code controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  pipelinesAsCode:
    options:
      webhookConfigurationOptions:
        validation.pipelinesascode.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        pipelines.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Tekton Hub controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    options:
      webhookConfigurationOptions:
        validation.webhook.hub.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.hub.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None