Chapter 3. Customizing configurations in the TektonConfig custom resource
In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR):
- Optimizing OpenShift Pipelines performance, including high-availability mode for the OpenShift Pipelines controller
- Configuring the Red Hat OpenShift Pipelines control plane
- Changing the default service account
- Disabling the service monitor
- Configuring pipeline resolvers
- Disabling pipeline templates
- Disabling the integration of Tekton Hub
- Disabling the automatic creation of RBAC resources
- Pruning of task runs and pipeline runs
3.1. Prerequisites
- You have installed the Red Hat OpenShift Pipelines Operator.
3.2. Performance tuning using the TektonConfig custom resource
You can tune the performance and high availability (HA) of the OpenShift Pipelines controller by editing the TektonConfig custom resource (CR). You can adjust parameters such as replica counts, buckets, and API query limits to optimize the controller for your specific workload requirements.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      disable-ha: false
      buckets: 7
      replicas: 5
      threads-per-controller: 2
      kube-api-qps: 5.0
      kube-api-burst: 10
All fields are optional. If you set them, the Red Hat OpenShift Pipelines Operator includes most of the fields as arguments in the openshift-pipelines-controller deployment under the openshift-pipelines-controller container. The OpenShift Pipelines Operator also updates the buckets field in the config-leader-election config map under the openshift-pipelines namespace.
If you do not specify the values, the OpenShift Pipelines Operator does not update those fields and applies the default values for the OpenShift Pipelines controller.
If you change or remove any of the performance fields, the OpenShift Pipelines Operator updates the openshift-pipelines-controller deployment and the config-leader-election config map (if the buckets field changed) and re-creates openshift-pipelines-controller pods.
High-availability (HA) mode applies to the OpenShift Pipelines controller, which creates and starts pods based on pipeline run and task run definitions. Without HA mode, a single pod executes these operations, potentially creating significant delays under a high load.
In HA mode, OpenShift Pipelines uses several pods (replicas) to run these operations. Initially, OpenShift Pipelines assigns every controller operation into a bucket. Each replica picks operations from one or more buckets. If two replicas could pick the same operation at the same time, the controller internally determines a leader that executes this operation.
HA mode does not affect execution of task runs after creating the pods.
| Name | Description | Default value for the OpenShift Pipelines controller |
|---|---|---|
| `disable-ha` | Enable or disable the HA mode. By default, the system enables the HA mode. | `false` |
| `buckets` | In HA mode, the number of buckets used to process controller operations. The maximum value is `10`. | `1` |
| `replicas` | In HA mode, the number of pods created to process controller operations. Set this value to the same or a lower number than the `buckets` value. | `1` |
| `threads-per-controller` | The number of threads (workers) to use when processing the work queue of the OpenShift Pipelines controller. | `2` |
| `kube-api-qps` | The maximum queries per second (QPS) to the cluster control plane from the REST client. | `5.0` |
| `kube-api-burst` | The maximum burst for a throttle. | `10` |
The OpenShift Pipelines Operator does not control the number of replicas of the OpenShift Pipelines controller. The replicas setting of the deployment determines the number of replicas. For example, to change the number of replicas to 3, enter the following command:
$ oc --namespace openshift-pipelines scale deployment openshift-pipelines-controller --replicas=3
The OpenShift Pipelines controller multiplies the kube-api-qps and kube-api-burst fields by 2. For example, if the kube-api-qps and kube-api-burst values are 10, the actual QPS and burst values become 20.
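As a sketch of this doubling behavior, the following fragment uses hypothetical values (not recommendations) to show the relationship between the configured and effective settings:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      kube-api-qps: 10.0   # effective QPS becomes 20, because the controller doubles this value
      kube-api-burst: 20   # effective burst becomes 40, because the controller doubles this value
```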
3.3. Configuring the Red Hat OpenShift Pipelines control plane
You can configure the OpenShift Pipelines control plane to suit your operational needs by editing the TektonConfig custom resource (CR). Customize settings such as metrics collection, sidecar injection, and service account defaults directly through the OpenShift Container Platform web console as needed.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Use the Search by name box to search for the `tektonconfigs.operator.tekton.dev` custom resource definition (CRD). Click `TektonConfig` to see the CRD details page.
- Click the Instances tab.
- Click the config instance to see the `TektonConfig` CR details.
- Click the YAML tab.
- Edit the `TektonConfig` YAML file based on your requirements.

Example `TektonConfig` CR

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    running-in-environment-with-injected-sidecars: true
    metrics.taskrun.duration-type: histogram
    metrics.pipelinerun.duration-type: histogram
    await-sidecar-readiness: true
    params:
      - name: enableMetrics
        value: 'true'
    default-service-account: pipeline
    require-git-ssh-secret-known-hosts: false
    enable-tekton-oci-bundles: false
    metrics.taskrun.level: task
    metrics.pipelinerun.level: pipeline
    enable-api-fields: stable
    enable-provenance-in-status: false
    enable-custom-tasks: true
    disable-creds-init: false
    disable-affinity-assistant: true
3.3.1. Modifiable fields with default values
You can change various default configuration fields in the TektonConfig custom resource (CR) to tailor the behavior of your pipelines. This reference lists the available fields, such as sidecar injection and metric levels, along with their default values and descriptions.
The following list includes all modifiable fields with their default values in the TektonConfig CR:
- `running-in-environment-with-injected-sidecars` (default: `true`): Set this field to `false` if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to `false` decreases the time a pipeline takes for a task run to start.

  Note: For clusters that use injected sidecars, setting this field to `false` can lead to unexpected behavior.

- `await-sidecar-readiness` (default: `true`): Set this field to `false` to stop OpenShift Pipelines from waiting for `TaskRun` sidecar containers to run before it begins to operate. When set to `false`, tasks run in environments that do not support the `downwardAPI` volume type.
- `default-service-account` (default: `pipeline`): This field has the default service account name to use for the `TaskRun` and `PipelineRun` resources, if you do not specify one.
- `require-git-ssh-secret-known-hosts` (default: `false`): Setting this field to `true` requires that any Git SSH secret must include the `known_hosts` field. For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
- `enable-tekton-oci-bundles` (default: `false`): Set this field to `true` to enable the use of an experimental alpha feature named Tekton OCI bundle.
- `enable-api-fields` (default: `stable`): You can enable or disable API fields. Acceptable values are `stable`, `beta`, and `alpha`.

  Note: Red Hat OpenShift Pipelines does not support the `alpha` value.

- `enable-provenance-in-status` (default: `false`): Set this field to `true` to enable populating the `provenance` field in `TaskRun` and `PipelineRun` statuses. The `provenance` field has metadata about resources used in the task run and pipeline run, such as the source for fetching a remote task or pipeline definition.
- `enable-custom-tasks` (default: `true`): Set this field to `false` to disable the use of custom tasks in pipelines.
- `disable-creds-init` (default: `false`): Set this field to `true` to prevent OpenShift Pipelines from scanning attached service accounts and injecting any credentials into your steps.
- `disable-affinity-assistant` (default: `true`): Set this field to `false` to enable the affinity assistant for each `TaskRun` resource sharing a persistent volume claim workspace.
You can change the default values of the following metrics fields in the TektonConfig CR:
- `metrics.taskrun.duration-type` and `metrics.pipelinerun.duration-type` (default: `histogram`): These fields determine the duration type for a task or pipeline run. Acceptable values are `gauge` and `histogram`.
- `metrics.taskrun.level` (default: `task`): This field determines the level of the task run metrics. Acceptable values are `taskrun`, `task`, and `namespace`.
- `metrics.pipelinerun.level` (default: `pipeline`): This field determines the level of the pipeline run metrics. Acceptable values are `pipelinerun`, `pipeline`, and `namespace`.
3.3.2. Optional configuration fields
You can configure optional fields in the TektonConfig custom resource (CR) to enable advanced features or override specific defaults. These fields, such as default timeouts and pod templates, are not set by default and allow for fine-grained control over your pipeline execution environment.
The following fields do not have a default value, and the system considers them only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig CR.
- `default-timeout-minutes`: This field sets the default timeout for the `TaskRun` and `PipelineRun` resources, if you do not specify one when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the system times out and cancels the task run or pipeline run. For example, `default-timeout-minutes: 60` sets 60 minutes as the default.
- `default-managed-by-label-value`: This field has the default value given to the `app.kubernetes.io/managed-by` label that the system applies to all `TaskRun` pods, if you specify none. For example, `default-managed-by-label-value: tekton-pipelines`.
- `default-pod-template`: This field sets the default `TaskRun` and `PipelineRun` pod templates, if you specify none.
- `default-cloud-events-sink`: This field sets the default `CloudEvents` sink used for the `TaskRun` and `PipelineRun` resources, if you specify none.
- `default-task-run-workspace-binding`: This field has the default workspace configuration for the workspaces that a `Task` resource declares, but a `TaskRun` resource does not explicitly declare.
- `default-affinity-assistant-pod-template`: This field sets the default `PipelineRun` pod template used for affinity assistant pods, if you specify none.
- `default-max-matrix-combinations-count`: This field has the default maximum number of combinations generated from a matrix, if you specify none.
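As an illustrative sketch, a few of these optional fields could be set under the `pipeline` specification of the `TektonConfig` CR; the values below are hypothetical examples, not recommendations:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-timeout-minutes: 60                       # cancel runs that exceed 60 minutes
    default-managed-by-label-value: tekton-pipelines  # applied as app.kubernetes.io/managed-by
    default-max-matrix-combinations-count: 256        # cap on generated matrix combinations
```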
3.4. Changing the default service account for OpenShift Pipelines
You can change the default service account used by OpenShift Pipelines for task and pipeline runs to meet your security or operational requirements. By editing the TektonConfig custom resource (CR), you can specify a different service account for pipelines and triggers.
Example TektonConfig CR
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-service-account: pipeline
  trigger:
    default-service-account: pipeline
    enable-api-fields: stable
3.5. Setting labels and annotations for the OpenShift Pipelines installation namespace
You can apply custom labels and annotations to the openshift-pipelines namespace to integrate with your organization’s metadata standards or tools. You can configure these metadata fields in the TektonConfig custom resource (CR) and apply them.
Changing the name of the openshift-pipelines namespace is not supported.
Specify the labels and annotations by adding them to the spec.targetNamespaceMetadata specification in the TektonConfig custom resource (CR).
Example of setting labels and annotations for the openshift-pipelines namespace
apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespaceMetadata:
    labels: {"example-label": "example-value"}
    annotations: {"example-annotation": "example-value"}
3.6. Setting the resync period for the pipelines controller
You can configure the resync period for the pipelines controller to optimize resource usage in clusters with a large number of pipeline and task runs. By adjusting this interval in the TektonConfig custom resource, you control how often the controller reconciles all resources regardless of events.
The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster with `cluster-admin` privileges.
Procedure
In the `TektonConfig` custom resource, configure the resync period for the pipelines controller, as shown in the following example.

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      deployments:
        tekton-pipelines-controller:
          spec:
            template:
              spec:
                containers:
                  - name: tekton-pipelines-controller
                    args:
                      - "-resync-period=24h"

This example sets the resync period to 24 hours.
3.7. Disabling the service monitor
You can disable the service monitor in OpenShift Pipelines if you do not need to expose telemetry data or want to reduce resource consumption. To do this, set the enableMetrics parameter to false in the TektonConfig custom resource (CR).
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    params:
      - name: enableMetrics
        value: 'false'
3.8. Configuring pipeline resolvers
You can enable or disable specific pipeline resolvers, such as the git, cluster, bundle, and hub resolvers, to control how your pipelines fetch resources. You manage these settings within the TektonConfig custom resource (CR), where you can also provide resolver-specific configurations.

You can set the following fields to enable or disable the resolvers:

- `enable-bundles-resolver`
- `enable-cluster-resolver`
- `enable-git-resolver`
- `enable-hub-resolver`
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true
You can also provide resolver-specific configurations in the TektonConfig CR. For example, define the following fields in the `map[string]string` format to set configurations for each pipeline resolver:
Example
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    bundles-resolver-config:
      default-service-account: pipelines
    cluster-resolver-config:
      default-namespace: test
    git-resolver-config:
      server-url: localhost.com
    hub-resolver-config:
      default-tekton-hub-catalog: tekton
3.9. Disabling resolver tasks and pipeline templates
You can disable the automatic installation of resolver tasks and pipeline templates to customize your cluster’s initial state. By modifying the TektonConfig custom resource (CR), you can prevent the deployment of these default resources if your environment does not require them.
By default, the TektonAddon custom resource (CR) installs resolverTasks and pipelineTemplates resources along with OpenShift Pipelines on the cluster.
Procedure
- Edit the `TektonConfig` CR by running the following command:

$ oc edit TektonConfig config

- In the `TektonConfig` CR, set the `resolverTasks` and `pipelineTemplates` parameter values in the `.addon.params` spec to `false`:

Example of disabling resolver task and pipeline template resources

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  addon:
    params:
      - name: resolverTasks
        value: 'false'
      - name: pipelineTemplates
        value: 'false'
# ...

Important: You can set the value of the `pipelineTemplates` parameter to `true` only when the value of the `resolverTasks` parameter is `true`.
3.10. Disabling the installation of Tekton Triggers
You can disable the automatic installation of Tekton Triggers during the OpenShift Pipelines deployment to manage triggers separately or exclude them from your environment. To do this, set the disabled parameter to true in the TektonConfig custom resource (CR).
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    disabled: true
# ...
The default setting is false.
3.11. Disabling the integration of Tekton Hub
You can disable the Tekton Hub integration in the OpenShift Container Platform web console Developer perspective to customize the user experience. The enable-devconsole-integration parameter in the TektonConfig custom resource (CR) controls this setting.
Example of disabling Tekton Hub
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    params:
      - name: enable-devconsole-integration
        value: 'false'
3.12. Migrating from Tekton Hub to Artifact Hub
Tekton Hub provided a catalog of prebuilt Tekton tasks and pipelines. This service is deprecated. Use Artifact Hub, a centralized catalog for Tekton resources, instead.
- The hub resolver now defaults to the `artifact` type.
- Tekton Hub (`type: tekton`) requires additional configuration to continue functioning.
3.12.1. Assess migration impact
You must migrate to Artifact Hub to ensure uninterrupted catalog resolution.
You must migrate if:
- Your Tekton resources reference `type: tekton` or `catalog: Tekton`.
- You rely on Tekton Hub as a hosted catalog service.
Use the following scripts to identify which Tekton resources require modification and to verify the hub resolver configuration on your cluster.
# Count resources by type
echo -e "\nResources needing migration:"
find . -type f \( -name "*.yaml" -o -name "*.yml" \) \
-exec grep -l "value: tekton\|value: Tekton" {} \; \
| xargs grep "^kind:" | awk '{print $2}' | sort | uniq -c
# Check cluster hub resolver configuration
echo -e "\nHub resolver configuration:"
kubectl get configmap hubresolver-config -n openshift-pipelines \
-o jsonpath='{.data.default-type}' 2>/dev/null \
|| kubectl get configmap hubresolver-config \
-n tekton-pipelines-resolvers -o jsonpath='{.data.default-type}' 2>/dev/null \
|| echo "Hub resolver config not found (cluster may not be accessible)"
Example output:
Resources needing migration:
1 Pipeline
2 TaskRun
Hub resolver configuration:
tekton
3.12.2. Migrating to Artifact Hub
You can update existing Tekton resources to use Artifact Hub instead of the deprecated Tekton Hub.
Procedure
- Identify any `params` sections in your `PipelineRun`, `TaskRun`, or resolver-based resources that include `type: tekton`, `catalog: Tekton`, or non-semver catalog versions.
- Remove the `type: tekton` parameter.

  Note: Do not add `type: artifact`. The resolver defaults to the `artifact` type automatically.

- Update the catalog name to the appropriate Artifact Hub catalog:
  - For tasks: change `catalog: Tekton` to `catalog: tekton-catalog-tasks`
  - For pipelines: change `catalog: Tekton` to `catalog: tekton-catalog-pipelines`
  - For `StepActions`: change `catalog: Tekton` to `catalog: tekton-catalog-stepactions`
- Update version values to full semantic versioning (semver). For example, change a version such as `0.8` to `0.8.0`.
- Update your resource definitions. The following examples show how to migrate resolver parameters from Tekton Hub to Artifact Hub.

  Before migration:

  params:
    - name: type
      value: tekton # remove this type and value
    - name: catalog
      value: Tekton # change to tekton-catalog-tasks
    - name: name
      value: git-clone

  After migration:

  params:
    # type: artifact is the default and does not need to be specified
    - name: catalog
      value: tekton-catalog-tasks
    - name: name
      value: git-clone

- Save the updated files and reapply them to your cluster as needed.
3.12.3. Configuring a private Artifact Hub instance
For disconnected or private environments, configure a custom Artifact Hub endpoint.
Procedure
Update the `hub-resolver-config` value in the following Tekton config example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  pipeline:
    hub-resolver-config:
      default-artifact-hub-url: "https://artifacthub.io"
# ...
When using a private Artifact Hub:
- Verify network connectivity from the resolver pods.
- Configure TLS certificates for HTTPS endpoints.
- Configure authentication, if required.
- Ensure catalog names match those published in your private hub.
3.13. Disabling the automatic creation of RBAC resources
You can disable the automatic creation of cluster-wide RBAC resources by using the Red Hat OpenShift Pipelines Operator to improve security and control over permissions. To do this, set the createRbacResource parameter to false in the TektonConfig custom resource (CR), preventing the creation of potentially privileged role bindings.
The default installation of the Red Hat OpenShift Pipelines Operator creates many role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege.
Prerequisites
- You have access to the cluster with `cluster-admin` privileges.
- You installed the OpenShift CLI (`oc`).
Procedure
- Edit the `TektonConfig` CR by running the following command:

$ oc edit TektonConfig config

- In the `TektonConfig` CR, set the `createRbacResource` parameter value to `false`:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: createRbacResource
      value: "false"
# ...
3.14. Disabling inline specification of pipelines and tasks
You can disable the inline specification of tasks and pipelines to enforce the use of referenced resources and improve security. By configuring the disable-inline-spec field in the TektonConfig custom resource (CR), you can restrict the use of embedded specs in Pipeline, PipelineRun, and TaskRun resources.
By default, OpenShift Pipelines supports inline specification of pipelines and tasks in the following cases:
- You can create a `Pipeline` CR that includes one or more task specifications, as in the following example:

  Example of an inline specification in a `Pipeline` CR

  apiVersion: tekton.dev/v1
  kind: Pipeline
  metadata:
    name: pipelineInline
  spec:
    tasks:
      - taskSpec:
      # ...

- You can create a `PipelineRun` custom resource (CR) that includes a pipeline specification, as in the following example:

  Example of an inline specification in a `PipelineRun` CR

  apiVersion: tekton.dev/v1
  kind: PipelineRun
  metadata:
    name: pipelineRunInline
  spec:
    pipelineSpec:
      tasks:
      # ...

- You can create a `TaskRun` custom resource (CR) that includes a task specification, as in the following example:

  Example of an inline specification in a `TaskRun` CR

  apiVersion: tekton.dev/v1
  kind: TaskRun
  metadata:
    name: taskRunInline
  spec:
    taskSpec:
      steps:
      # ...
You can disable inline specification in some or all of these cases. To disable the inline specification, set the disable-inline-spec field of the .spec.pipeline specification of the TektonConfig CR, as in the following example:
Example configuration that disables inline specification
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    disable-inline-spec: "pipeline,pipelinerun,taskrun"
# ...
You can set the disable-inline-spec parameter to any single value or to a comma-separated list of many values. The following values for the parameter are valid:
| Value | Description |
|---|---|
| `pipeline` | You cannot use a task specification inline in a `Pipeline` CR. |
| `pipelinerun` | You cannot use a pipeline specification inline in a `PipelineRun` CR. |
| `taskrun` | You cannot use a task specification inline in a `TaskRun` CR. |
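For example, to restrict only one of these cases, a single value can be set; the following sketch disables inline pipeline specifications in `PipelineRun` resources while leaving the other cases enabled:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # single value: only inline pipeline specs in PipelineRun resources are rejected
    disable-inline-spec: "pipelinerun"
```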
3.15. Configuration of RBAC and Trusted CA flags
You can independently control the creation of RBAC resources and Trusted CA bundle config maps to customize your OpenShift Pipelines installation. The TektonConfig custom resource (CR) provides specific flags, createRbacResource and createCABundleConfigMaps, to manage these components separately.
| Parameter | Description | Default value |
|---|---|---|
| `createRbacResource` | Controls the creation of RBAC resources only. This flag does not affect the Trusted CA bundle config map. | `true` |
| `createCABundleConfigMaps` | Controls the creation of the Trusted CA bundle config map and Service CA bundle config map. This flag must be set to `true` for the Operator to create these config maps. | `true` |
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  params:
    - name: createRbacResource
      value: "true"
    - name: createCABundleConfigMaps
      value: "true"
    - name: legacyPipelineRbac
      value: "true"

In this example, the `createRbacResource` parameter specifies RBAC resource creation, and the `createCABundleConfigMaps` parameter specifies Trusted CA bundle config map creation.
3.16. Automatic pruning of task runs and pipeline runs
You can automatically prune stale TaskRun and PipelineRun resources to free up cluster resources. Red Hat OpenShift Pipelines provides a configurable pruner that removes unused objects based on your policies.
You can configure the pruner for your entire cluster by using the TektonConfig custom resource. You can also override configuration for a namespace by using namespace annotations. However, you cannot selectively auto-prune an individual task run or pipeline run.
3.16.1. Configuring the pruner
You can configure the default pruner to automatically remove old TaskRun and PipelineRun resources based on a schedule or resource count. By modifying the TektonConfig custom resource (CR), you can set retention limits and pruning intervals to manage resource usage.
The following example corresponds to the default configuration:
Example of the pruner configuration
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
# ...
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep: 100
    prune-per-resource: false
    schedule: "* 8 * * *"
    startingDeadlineSeconds: 60
# ...
| Parameter | Description |
|---|---|
| `schedule` | The cron schedule for running the pruner process. The default schedule runs the process at 08:00 every day. For more information about the cron schedule syntax, see Cron schedule syntax in the Kubernetes documentation. |
| `resources` | The resource types to which the pruner applies. The available resource types are `taskrun` and `pipelinerun`. |
| `keep` | The number of most recent resources of every type to keep. |
| `prune-per-resource` | If you set this parameter to `true`, the pruner keeps the `keep` number of most recent resources for every pipeline and every task separately. If you set this parameter to `false`, the pruner keeps the `keep` number of most recent resources of each type in the namespace as a whole. For example, if you set `keep` to `100` and `prune-per-resource` to `true`, the pruner keeps the 100 most recent task runs for every task and the 100 most recent pipeline runs for every pipeline. |
| `keep-since` | The maximum time for which to keep resources, in minutes. For example, to retain resources created not more than five days ago, set `keep-since` to `7200`. |
| `startingDeadlineSeconds` | This parameter is optional. If the pruner job does not start at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job does not start within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the next scheduled time. If you do not specify this parameter and the pruner job does not start at the scheduled time, OpenShift Pipelines attempts to start the job at any later time possible. |
The keep and keep-since parameters are mutually exclusive. Use only one of them in your configuration.
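For example, a time-based retention policy can replace the count-based default. This sketch assumes a five-day (7200-minute) retention window and a daily 08:00 schedule:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep-since: 7200      # retain only resources created in the last 5 days (7200 minutes)
    schedule: "0 8 * * *" # run the pruner daily at 08:00
```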
3.16.2. Annotations for automatically pruning task runs and pipeline runs
You can customize the pruning behavior for specific namespaces by applying annotations to the Namespace resource. These annotations allow you to override global pruning settings, such as retention limits and schedules, for individual projects.
The following namespace annotations have the same meanings as the corresponding keys in the TektonConfig custom resource:
- `operator.tekton.dev/prune.schedule`
- `operator.tekton.dev/prune.resources`
- `operator.tekton.dev/prune.keep`
- `operator.tekton.dev/prune.prune-per-resource`
- `operator.tekton.dev/prune.keep-since`
The operator.tekton.dev/prune.resources annotation accepts a comma-separated list. To prune both task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".
The following additional namespace annotations are available:
- `operator.tekton.dev/prune.skip`: When set to `true`, the namespace for which the annotation is configured is not pruned.
- `operator.tekton.dev/prune.strategy`: Set the value of this annotation to either `keep` or `keep-since`.
For example, the following annotations retain all task runs and pipeline runs created in the last five days and delete the older resources:
Example of auto-pruning annotations
kind: Namespace
apiVersion: v1
# ...
metadata:
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.keep-since: "7200"
# ...
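Similarly, to exempt a namespace from pruning entirely, the `operator.tekton.dev/prune.skip` annotation can be applied. The namespace name in this sketch is hypothetical:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: critical-project  # hypothetical namespace name
  annotations:
    # the pruner skips this namespace entirely
    operator.tekton.dev/prune.skip: "true"
```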
3.17. Enabling the event-driven pruner
You can enable the event-based pruner to delete completed PipelineRun and TaskRun resources in near real-time. By configuring the tektonpruner controller in the TektonConfig custom resource (CR), you can replace the default scheduled pruner with an event-driven approach for more immediate resource cleanup.
You must disable the job-based pruner in the TektonConfig CR before you enable the event-driven pruner. If you enable both pruner types, the deployment readiness status changes to False and the output displays the following error message:
Components not in ready state: Invalid Pruner Configuration!! Both pruners, tektonpruner(event based) and pruner(job based) cannot be enabled simultaneously. Please disable one of them.
Procedure
In your `TektonConfig` CR, disable the job-based pruner by setting the `spec.pruner.disabled` field to `true`, and enable the event-driven pruner by setting the `spec.tektonpruner.disabled` field to `false`. For example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  pruner:
    disabled: true
# ...
  tektonpruner:
    disabled: false
    options: {}
# ...

After you apply the updated CR, the Operator deploys the `tekton-pruner-controller` pod in the `openshift-pipelines` namespace.

Ensure that the following config maps are present in the `openshift-pipelines` namespace:

| Config map | Purpose |
|---|---|
| `tekton-pruner-default-spec` | Defines the default pruning behavior |
| `pruner-info` | Stores internal runtime data used by the controller |
| `config-logging-tekton-pruner` | Configures logging settings for the pruner |
| `config-observability-tekton-pruner` | Enables observability features such as metrics and tracing |
Verification
- To verify that the `tekton-pruner-controller` pod is running, run the following command:

$ oc get pods -n openshift-pipelines

- Verify that the output includes the `tekton-pruner-controller` and `tekton-pruner-webhook` pods in the `Running` state.

Example output:

tekton-pruner-controller-<id> Running
tekton-pruner-webhook-<id> Running
3.17.1. Configuration of the event-driven pruner
You can fine-tune the event-based pruner by adjusting settings in the TektonConfig custom resource (CR). This reference details the available configuration options, including history limits, time-to-live (TTL) values, and namespace-specific policies.
The following is an example of the TektonConfig CR with the default configuration that uses global pruning rules:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: null
      historyLimit: 10
      namespaces: null
      successfulHistoryLimit: null
      ttlSecondsAfterFinished: null
    options: {}
# ...
- failedHistoryLimit: The number of failed runs to retain.
- historyLimit: The number of runs to retain. The pruner uses this setting when status-specific limits are not defined.
- namespaces: Per-namespace pruning policies, applied when you set enforcedConfigLevel to namespace.
- successfulHistoryLimit: The number of successful runs to retain.
- ttlSecondsAfterFinished: The time in seconds after a run completes, after which the pruner deletes its resources.
You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, the pruner applies a 60-second time-to-live (TTL) to resources in the dev-project namespace:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
# ...
You can use the following parameters in the tektonpruner section of your TektonConfig CR:
| Parameter | Description |
|---|---|
| ttlSecondsAfterFinished | Delete resources a fixed number of seconds after they complete. |
| successfulHistoryLimit | Retain the specified number of the most recent successful runs. Delete older successful runs. |
| failedHistoryLimit | Retain the specified number of the most recent failed runs. Delete older failed runs. |
| historyLimit | Apply a generic history limit when status-specific limits are not set. |
| enforcedConfigLevel | Specify the level at which the pruner applies the configuration. Accepted values: global, namespace. |
| namespaces | Define per-namespace pruning policies. |
You can use TTL-based pruning to delete resources that exceed their configured expiration time, and history-based pruning to delete resources that exceed the configured historyLimit. TTL and history limits operate independently.
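As an illustration of this independence, the following minimal sketch sets both rule types at once; the specific values are assumptions for the example, not product defaults:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      ttlSecondsAfterFinished: 3600   # TTL rule: delete runs one hour after completion
      successfulHistoryLimit: 5       # history rule: keep at most the 5 most recent successful runs
      failedHistoryLimit: 5           # history rule: keep at most the 5 most recent failed runs
```

With this configuration, a run is deleted as soon as either rule applies to it: one hour after completion, or earlier if newer runs push it past its history limit.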
- Global configuration of the event-driven pruner

The following example shows the default TektonConfig CR configuration, which applies global pruning rules:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: 5
      historyLimit: 10
      successfulHistoryLimit: 5
      ttlSecondsAfterFinished: 3600
    options: {}
# ...

- failedHistoryLimit: The number of failed runs to retain.
- historyLimit: The number of runs to retain. The pruner uses this setting when status-specific limits are not defined.
- successfulHistoryLimit: The number of successful runs to retain.
- ttlSecondsAfterFinished: The time in seconds after a run completes, after which the pruner deletes its resources.
- Namespace-level configuration of the event-driven pruner

You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, the pruner applies a 60-second TTL to resources in the dev-project and staging namespaces:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
        staging:
          ttlSecondsAfterFinished: 60
# ...

- Resource-level configuration of the event-driven pruner
If you have configured namespace-level pruning, you can further configure resource-level pruning rules by creating a tekton-pruner-namespace-spec config map in your namespace. Resource-level rules take precedence over global and namespace-level pruning configuration when defined for a specific resource type. When multiple config maps apply to the same resource, the event-driven pruner applies the most specific rule.

The following example defines a TTL and a history limit for both PipelineRun and TaskRun resources:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-namespace-spec
  namespace: user-specified-namespace
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
data:
  ns-config: |
    ttlSecondsAfterFinished: 300
    historyLimit: 5

The pruner controller requires the tekton-pruner-namespace-spec name and the app.kubernetes.io/part-of: tekton-pruner and pruner.tekton.dev/config-type: namespace labels on all resource-level pruning config maps to process them correctly. The pruner controller ignores config maps that are missing these labels or that use an incorrect name.

- Resource-level configuration of the event-driven pruner with selectors
In the following example, the pruning rule applies only if the resource has both the priority: high label and the compliance: required annotation. Resources that do not match this selector fall back to the namespace default or to other selectors with lower specificity:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-namespace-spec
  namespace: user-specified-namespace
  labels:
    app.kubernetes.io/part-of: tekton-pruner
    pruner.tekton.dev/config-type: namespace
data:
  ns-config: |
    ttlSecondsAfterFinished: 3600
    pipelineRuns:
      - selector:
          - matchLabels:
              priority: high
            matchAnnotations:
              compliance: required
        ttlSecondsAfterFinished: 7776000
        successfulHistoryLimit: 100
        failedHistoryLimit: 100

- Common values
The following tables give recommended values for TTL and history limits for PipelineRun and TaskRun resources. Use these values to help configure pruning policies that balance cluster performance, resource retention, and operational requirements.

| Time period | Seconds | Use case |
|---|---|---|
| 5 minutes | 300 | Development and testing with rapid iteration |
| 30 minutes | 1800 | Short-lived experiments |
| 1 hour | 3600 | CI pipelines |
| 6 hours | 21600 | Daily builds |
| 1 day | 86400 | Staging environments |
| 7 days | 604800 | Production, short retention |
| 30 days | 2592000 | Compliance, auditing |
| 90 days | 7776000 | Regulated industries |

| Environment | successfulHistoryLimit | failedHistoryLimit | Reason |
|---|---|---|---|
| Development | 3-5 | 5-10 | Faster feedback and reduced storage requirements |
| Staging | 5-10 | 10-20 | Balance retention and resources |
| Production | 10-50 | 20-100 | Audit trail and debugging |
| CI/CD | 3-5 | 10-20 | Recent context for failure analysis |
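Combining the recommended values above, the following sketch shows one plausible production profile (illustrative choices from the tables, not product defaults): a 7-day TTL with history limits at the lower end of the production ranges:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      ttlSecondsAfterFinished: 604800   # 7 days: production, short retention
      successfulHistoryLimit: 10        # lower end of the recommended 10-50 production range
      failedHistoryLimit: 20            # lower end of the recommended 20-100 production range
```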
3.17.2. Observability metrics of the event-driven pruner
You can monitor the performance and health of the event-based pruner by using the metrics exposed by the tekton-pruner-controller. These metrics, available in OpenTelemetry format, give insights into resource processing, error rates, and reconciliation times for effective troubleshooting and capacity planning.
Resource-level pruning rules configured by using config maps in individual namespaces also emit metrics by using the same labels, allowing you to track pruning at finer granularity.
The exposed metrics fall into the following categories:
- Resource processing
- Performance timing
- State tracking
- Error monitoring
Most pruner metrics use labels to give additional context. You can use these labels in PromQL queries or dashboards to filter and group the metrics.
| Label | Description |
|---|---|
| namespace | The Kubernetes namespace of the resource. |
| resource_type | The Tekton resource type. |
| status | The outcome of processing a resource. |
| operation | The pruning method that deleted a resource. |
| reason | The specific cause for skipping or error outcomes. |
- Resource processing metrics
The event-driven pruner exposes the following resource processing metrics:
| Name | Type | Description | Labels |
|---|---|---|---|
| tekton_pruner_controller_resources_processed_total | Counter | Total resources processed | namespace, resource_type, status |
| tekton_pruner_controller_resources_deleted_total | Counter | Total resources deleted | namespace, resource_type, operation |
- Performance timing metrics
The event-driven pruner exposes the following performance timing metrics:
| Name | Type | Description | Labels | Bucket |
|---|---|---|---|---|
| tekton_pruner_controller_reconciliation_duration_seconds | Histogram | Time spent in reconciliation | namespace, resource_type | 0.1 to 30 seconds |
| tekton_pruner_controller_ttl_processing_duration_seconds | Histogram | Time spent processing TTL | namespace, resource_type | 0.1 to 30 seconds |
| tekton_pruner_controller_history_processing_duration_seconds | Histogram | Time spent processing history limits | namespace, resource_type | 0.1 to 30 seconds |
- State tracking metrics
The event-driven pruner exposes the following state tracking metrics:
| Name | Type | Description |
|---|---|---|
| kn_workqueue_adds_total | Counter | Total resources queued |
| kn_workqueue_depth | Gauge | Number of items currently in the queue |
- Error monitoring metrics
The event-driven pruner exposes the following error monitoring metrics:
| Name | Type | Description | Labels |
|---|---|---|---|
| tekton_pruner_controller_resources_errors_total | Counter | Total processing errors | namespace, resource_type, reason |
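If you scrape these metrics with the OpenShift monitoring stack, one way to act on the error counter is a PrometheusRule alert. The following is a hedged sketch; the rule name, threshold, and duration are assumptions for illustration, not part of the product configuration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: tekton-pruner-alerts               # hypothetical name
  namespace: openshift-pipelines
spec:
  groups:
    - name: tekton-pruner
      rules:
        - alert: TektonPrunerProcessingErrors   # hypothetical alert name
          # fire when the pruner reports a sustained error rate
          expr: sum(rate(tekton_pruner_controller_resources_errors_total[5m])) > 0.1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: The event-driven pruner is reporting processing errors.
```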
3.18. Setting additional options for webhooks
You can configure advanced webhook options, such as failure policies and timeouts, for OpenShift Pipelines controllers to improve stability and error handling. These settings are applied by using the TektonConfig custom resource (CR) and allow you to customize how admission controllers interact with the Kubernetes API server.
Prerequisites
- You installed the oc command-line utility.
- You have logged in to your OpenShift Container Platform cluster with administrator rights for the namespace in which OpenShift Pipelines is installed, typically the openshift-pipelines namespace.
Procedure
View the list of webhooks that the OpenShift Pipelines controllers created. There are two types of webhooks: mutating webhooks and validating webhooks.
To view the list of mutating webhooks, enter the following command:

$ oc get MutatingWebhookConfiguration

Example output

NAME                             WEBHOOKS   AGE
annotation.operator.tekton.dev   1          4m20s
proxy.operator.tekton.dev        1          4m20s
webhook.operator.tekton.dev      1          4m22s
webhook.pipeline.tekton.dev      1          4m20s
webhook.triggers.tekton.dev      1          3m50s

To view the list of validating webhooks, enter the following command:

$ oc get ValidatingWebhookConfiguration

Example output

NAME                                     WEBHOOKS   AGE
config.webhook.operator.tekton.dev       1          4m24s
config.webhook.pipeline.tekton.dev       1          4m22s
config.webhook.triggers.tekton.dev       1          3m52s
namespace.operator.tekton.dev            1          4m22s
validation.pipelinesascode.tekton.dev    1          2m49s
validation.webhook.operator.tekton.dev   1          4m24s
validation.webhook.pipeline.tekton.dev   1          4m22s
validation.webhook.triggers.tekton.dev   1          3m52s
In the TektonConfig custom resource (CR), add configuration for mutating and validating webhooks under the section for each of the controllers as necessary, as shown in the following examples. Use the validation.webhook.pipeline.tekton.dev spec for the validating webhooks and the webhook.pipeline.tekton.dev spec for the mutating webhooks.

Important

- You cannot set configuration for operator webhooks.
- All settings are optional. For example, you can set the timeoutSeconds parameter and omit the failurePolicy and sideEffects parameters.

Example settings for the Pipelines controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      webhookConfigurationOptions:
        validation.webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.pipeline.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Triggers controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  triggers:
    options:
      webhookConfigurationOptions:
        validation.webhook.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Pipelines as Code controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  pipelinesAsCode:
    options:
      webhookConfigurationOptions:
        validation.pipelinesascode.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        pipelines.triggers.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None

Example settings for the Tekton Hub controller

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    options:
      webhookConfigurationOptions:
        validation.webhook.hub.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None
        webhook.hub.tekton.dev:
          failurePolicy: Fail
          timeoutSeconds: 20
          sideEffects: None