
Chapter 3. Customizing configurations in the TektonConfig custom resource


In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR):

  • Optimizing OpenShift Pipelines performance, including high-availability mode for the OpenShift Pipelines controller
  • Configuring the Red Hat OpenShift Pipelines control plane
  • Changing the default service account
  • Disabling the service monitor
  • Configuring pipeline resolvers
  • Disabling pipeline templates
  • Disabling the integration of Tekton Hub
  • Disabling the automatic creation of RBAC resources
  • Pruning task runs and pipeline runs

3.1. Prerequisites

  • You have installed the Red Hat OpenShift Pipelines Operator.

3.2. Performance tuning using TektonConfig CR

You can modify the fields under the .spec.pipeline.performance parameter in the TektonConfig custom resource (CR) to change high availability (HA) support and performance configuration for the OpenShift Pipelines controller.

Example TektonConfig performance fields

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      disable-ha: false
      buckets: 7
      replicas: 5
      threads-per-controller: 2
      kube-api-qps: 5.0
      kube-api-burst: 10

All fields are optional. If you set them, the Red Hat OpenShift Pipelines Operator includes most of the fields as arguments in the openshift-pipelines-controller deployment under the openshift-pipelines-controller container. The OpenShift Pipelines Operator also updates the buckets field in the config-leader-election configuration map under the openshift-pipelines namespace.

If you do not specify the values, the OpenShift Pipelines Operator does not update those fields and applies the default values for the OpenShift Pipelines controller.

Note

If you modify or remove any of the performance fields, the OpenShift Pipelines Operator updates the openshift-pipelines-controller deployment and the config-leader-election configuration map (if the buckets field changed) and re-creates openshift-pipelines-controller pods.

High-availability (HA) mode applies to the OpenShift Pipelines controller, which creates and starts pods based on pipeline run and task run definitions. Without HA mode, a single pod executes these operations, potentially creating significant delays under a high load.

In HA mode, OpenShift Pipelines uses several pods (replicas) to execute these operations. Initially, OpenShift Pipelines assigns every controller operation into a bucket. Each replica picks operations from one or more buckets. If two replicas could pick the same operation at the same time, the controller internally determines a leader that executes this operation.

HA mode does not affect execution of task runs after the pods are created.

Table 3.1. Modifiable fields for tuning OpenShift Pipelines performance

  • disable-ha (default: false): Enable or disable the high availability (HA) mode. By default, the HA mode is enabled.
  • buckets (default: 1): In HA mode, the number of buckets used to process controller operations. The maximum value is 10.
  • replicas (default: 1): In HA mode, the number of pods created to process controller operations. Set this value to the same or a lower number than the buckets value.
  • threads-per-controller (default: 2): The number of threads (workers) to use when the work queue of the OpenShift Pipelines controller is processed.
  • kube-api-qps (default: 5.0): The maximum queries per second (QPS) to the cluster master from the REST client.
  • kube-api-burst (default: 10): The maximum burst for a throttle.

Note

The OpenShift Pipelines Operator does not control the number of replicas of the OpenShift Pipelines controller. The replicas setting of the deployment determines the number of replicas. For example, to change the number of replicas to 3, enter the following command:

$ oc --namespace openshift-pipelines scale deployment openshift-pipelines-controller --replicas=3
Important

The kube-api-qps and kube-api-burst fields are multiplied by 2 in the OpenShift Pipelines controller. For example, if the kube-api-qps and kube-api-burst values are 10, the actual QPS and burst values become 20.
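For example, to allow a heavier API load, you might raise both values; the numbers below are illustrative assumptions, and the controller doubles them as described in the preceding note:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      kube-api-qps: 50.0   # effective QPS becomes 100
      kube-api-burst: 100  # effective burst becomes 200
```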

3.3. Configuring the Red Hat OpenShift Pipelines control plane

You can customize the OpenShift Pipelines control plane by editing the configuration fields in the TektonConfig custom resource (CR). The Red Hat OpenShift Pipelines Operator automatically adds the configuration fields with their default values so that you can use the OpenShift Pipelines control plane.

Procedure

  1. In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
  2. Use the Search by name box to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD). Click TektonConfig to see the CRD details page.
  3. Click the Instances tab.
  4. Click the config instance to see the TektonConfig CR details.
  5. Click the YAML tab.
  6. Edit the TektonConfig YAML file based on your requirements.

    Example of TektonConfig CR with default values

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        running-in-environment-with-injected-sidecars: true
        metrics.taskrun.duration-type: histogram
        metrics.pipelinerun.duration-type: histogram
        await-sidecar-readiness: true
        params:
          - name: enableMetrics
            value: 'true'
        default-service-account: pipeline
        require-git-ssh-secret-known-hosts: false
        enable-tekton-oci-bundles: false
        metrics.taskrun.level: task
        metrics.pipelinerun.level: pipeline
        enable-api-fields: stable
        enable-provenance-in-status: false
        enable-custom-tasks: true
        disable-creds-init: false
        disable-affinity-assistant: true

3.3.1. Modifiable fields with default values

The following list includes all modifiable fields with their default values in the TektonConfig CR:

  • running-in-environment-with-injected-sidecars (default: true): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start.

    Note

    For clusters that use injected sidecars, setting this field to false can lead to an unexpected behavior.

  • await-sidecar-readiness (default: true): Set this field to false to stop OpenShift Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. This allows tasks to be run in environments that do not support the downwardAPI volume type.
  • default-service-account (default: pipeline): This field contains the default service account name to use for the TaskRun and PipelineRun resources, if none is specified.
  • require-git-ssh-secret-known-hosts (default: false): Setting this field to true requires that any Git SSH secret must include the known_hosts field.

    • For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
  • enable-tekton-oci-bundles (default: false): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle.
  • enable-api-fields (default: stable): Setting this field determines which features are enabled. Acceptable values are stable, beta, and alpha.

    Note

    Red Hat OpenShift Pipelines does not support the alpha value.

  • enable-provenance-in-status (default: false): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field contains metadata about resources used in the task run and pipeline run, such as the source from where a remote task or pipeline definition was fetched.
  • enable-custom-tasks (default: true): Set this field to false to disable the use of custom tasks in pipelines.
  • disable-creds-init (default: false): Set this field to true to prevent OpenShift Pipelines from scanning attached service accounts and injecting any credentials into your steps.
  • disable-affinity-assistant (default: true): Set this field to false to enable affinity assistant for each TaskRun resource sharing a persistent volume claim workspace.
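As an illustration, a minimal sketch that overrides only two of the fields listed above, for a cluster that does not use injected sidecars, could look like this:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # Both overrides assume a cluster without injected sidecars, such as Istio
    running-in-environment-with-injected-sidecars: false
    await-sidecar-readiness: false
```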

Metrics options

You can modify the default values of the following metrics fields in the TektonConfig CR:

  • metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram): Setting these fields determines the duration type for a task or pipeline run. Acceptable values are gauge and histogram.
  • metrics.taskrun.level (default: task): This field determines the level of the task run metrics. Acceptable values are taskrun, task, and namespace.
  • metrics.pipelinerun.level (default: pipeline): This field determines the level of the pipeline run metrics. Acceptable values are pipelinerun, pipeline, and namespace.

3.3.2. Optional configuration fields

The following fields do not have a default value, and are considered only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig custom resource (CR).

  • default-timeout-minutes: This field sets the default timeout for the TaskRun and PipelineRun resources, if none is specified when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the task run or pipeline run is timed out and cancelled. For example, default-timeout-minutes: 60 sets 60 minutes as default.
  • default-managed-by-label-value: This field contains the default value given to the app.kubernetes.io/managed-by label that is applied to all TaskRun pods, if none is specified. For example, default-managed-by-label-value: tekton-pipelines.
  • default-pod-template: This field sets the default TaskRun and PipelineRun pod templates, if none is specified.
  • default-cloud-events-sink: This field sets the default CloudEvents sink that is used for the TaskRun and PipelineRun resources, if none is specified.
  • default-task-run-workspace-binding: This field contains the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare.
  • default-affinity-assistant-pod-template: This field sets the default PipelineRun pod template that is used for affinity assistant pods, if none is specified.
  • default-max-matrix-combinations-count: This field contains the default maximum number of combinations generated from a matrix, if none is specified.
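A minimal sketch combining some of these optional fields might look as follows; the values are illustrative assumptions, not product defaults:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-timeout-minutes: 60                   # task runs and pipeline runs time out after 60 minutes
    default-managed-by-label-value: tekton-pipelines
    default-max-matrix-combinations-count: 256    # illustrative limit for matrix fan-out
```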

3.4. Changing the default service account

You can change the default service account for OpenShift Pipelines by editing the default-service-account field in the .spec.pipeline and .spec.trigger specifications. The default service account name is pipeline.

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-service-account: pipeline
  trigger:
    default-service-account: pipeline
    enable-api-fields: stable

3.5. Setting labels and annotations for the openshift-pipelines namespace

You can set labels and annotations for the openshift-pipelines namespace in which the Operator installs OpenShift Pipelines.

Note

Changing the name of the openshift-pipelines namespace is not supported.

Specify the labels and annotations by adding them to the spec.targetNamespaceMetadata specification in the TektonConfig custom resource (CR).

Example of setting labels and annotations for the openshift-pipelines namespace

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespaceMetadata:
    labels: {"example-label":"example-value"}
    annotations: {"example-annotation":"example-value"}

3.6. Setting the resync period for the pipelines controller

You can configure the resync period for the pipelines controller. Once every resync period, the controller reconciles all pipeline runs and task runs, regardless of events.

The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.

Prerequisites

  • You are logged in to your OpenShift Container Platform cluster with cluster-admin privileges.

Procedure

  • In the TektonConfig custom resource, configure the resync period for the pipelines controller, as shown in the following example.

    Example

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        options:
          deployments:
            tekton-pipelines-controller:
              spec:
                template:
                  spec:
                    containers:
                    - name: tekton-pipelines-controller
                      args:
                        - "-resync-period=24h" 1

    1 This example sets the resync period to 24 hours.

3.7. Disabling the service monitor

You can disable the service monitor, which is part of OpenShift Pipelines and exposes telemetry data. To disable the service monitor, set the enableMetrics parameter to false in the .spec.pipeline specification of the TektonConfig custom resource (CR):

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    params:
       - name: enableMetrics
         value: 'false'

3.8. Configuring pipeline resolvers

You can configure pipeline resolvers in the TektonConfig custom resource (CR). You can enable or disable these pipeline resolvers:

  • enable-bundles-resolver
  • enable-cluster-resolver
  • enable-git-resolver
  • enable-hub-resolver

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true

You can also provide resolver-specific configurations in the TektonConfig CR. For example, define the following fields in the map[string]string format to set configurations for each pipeline resolver:

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    bundles-resolver-config:
      default-service-account: pipelines
    cluster-resolver-config:
      default-namespace: test
    git-resolver-config:
      server-url: localhost.com
    hub-resolver-config:
      default-tekton-hub-catalog: tekton
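For example, with the Git resolver enabled, a pipeline run can reference a remote pipeline definition. The following sketch uses the standard Git resolver parameters; the repository URL, revision, and path are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: git-resolver-example
spec:
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: https://example.com/org/repo.git   # hypothetical repository
      - name: revision
        value: main
      - name: pathInRepo
        value: pipeline.yaml                      # hypothetical path to the pipeline definition
```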

3.9. Disabling resolver tasks and pipeline templates

By default, the TektonAddon custom resource (CR) installs resolverTasks and pipelineTemplates resources along with OpenShift Pipelines on the cluster.

You can disable the installation of the resolver tasks and pipeline templates by setting the parameter value to false in the .spec.addon specification.

Procedure

  1. Edit the TektonConfig CR by running the following command:

    $ oc edit TektonConfig config
  2. In the TektonConfig CR, set the resolverTasks and pipelineTemplates parameter value in .addon.params spec to false:

    Example of disabling resolver task and pipeline template resources

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
    # ...
      addon:
        params:
          - name: resolverTasks
            value: 'false'
          - name: pipelineTemplates
            value: 'false'
    # ...

    Important

    You can set the value of the pipelineTemplates parameter to true only when the value of the resolverTasks parameter is true.

3.10. Disabling the installation of Tekton Triggers

You can disable the automatic installation of Tekton Triggers when deploying OpenShift Pipelines through the Operator, to provide more flexibility for environments where triggers are managed separately. To disable the installation of Tekton Triggers, set the disabled parameter to true in the spec.trigger specification of your TektonConfig custom resource (CR):

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    disabled: true
#...

The default setting is false.

3.11. Disabling the integration of Tekton Hub

You can disable the integration of Tekton Hub in the web console Developer perspective by setting the enable-devconsole-integration parameter to false in the TektonConfig custom resource (CR).

Example of disabling Tekton Hub

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    params:
      - name: enable-devconsole-integration
        value: 'false'

3.12. Disabling the automatic creation of RBAC resources

The default installation of the Red Hat OpenShift Pipelines Operator creates multiple role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege.

To disable the automatic creation of cluster-wide RBAC resources after the Red Hat OpenShift Pipelines Operator is installed, cluster administrators can set the createRbacResource parameter to false in the cluster-level TektonConfig custom resource (CR).

Procedure

  1. Edit the TektonConfig CR by running the following command:

    $ oc edit TektonConfig config
  2. In the TektonConfig CR, set the createRbacResource param value to false:

Example TektonConfig CR

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
  - name: createRbacResource
    value: "false"
# ...

3.13. Disabling inline specification of pipelines and tasks

By default, OpenShift Pipelines supports inline specification of pipelines and tasks in the following cases:

  • You can create a Pipeline CR that includes one or more task specifications, as in the following example:

    Example of an inline specification in a Pipeline CR

    apiVersion: tekton.dev/v1
    kind: Pipeline
    metadata:
      name: pipelineInline
    spec:
      tasks:
        - taskSpec:
    # ...

  • You can create a PipelineRun custom resource (CR) that includes a pipeline specification, as in the following example:

    Example of an inline specification in a PipelineRun CR

    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      name: pipelineRunInline
    spec:
      pipelineSpec:
        tasks:
    # ...

  • You can create a TaskRun custom resource (CR) that includes a task specification, as in the following example:

    Example of an inline specification in a TaskRun CR

    apiVersion: tekton.dev/v1
    kind: TaskRun
    metadata:
      name: taskRunInline
    spec:
      taskSpec:
        steps:
    # ...

You can disable inline specification in some or all of these cases. To disable the inline specification, set the disable-inline-spec field of the .spec.pipeline specification of the TektonConfig CR, as in the following example:

Example configuration that disables inline specification

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    disable-inline-spec: "pipeline,pipelinerun,taskrun"
# ...

You can set the disable-inline-spec parameter to any single value or to a comma-separated list of multiple values. The following values for the parameter are valid:

Table 3.2. Supported values for the disable-inline-spec parameter

  • pipeline: You cannot use a taskSpec: spec to define a task inside a Pipeline CR. Instead, you must use a taskRef: spec to incorporate a task from a Task CR or to specify a task using a resolver.
  • pipelinerun: You cannot use a pipelineSpec: spec to define a pipeline inside a PipelineRun CR. Instead, you must use a pipelineRef: spec to incorporate a pipeline from a Pipeline CR or to specify a pipeline using a resolver.
  • taskrun: You cannot use a taskSpec: spec to define a task inside a TaskRun CR. Instead, you must use a taskRef: spec to incorporate a task from a Task CR or to specify a task using a resolver.
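For example, when inline task specifications are disabled for the pipeline value, a Pipeline CR must reference tasks by name instead of embedding them. The task name in this sketch is a hypothetical Task CR:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pipeline-with-refs
spec:
  tasks:
    - name: build
      taskRef:
        name: my-build-task   # hypothetical Task CR installed on the cluster
```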

3.14. Configuration of RBAC and Trusted CA flags

The Red Hat OpenShift Pipelines Operator provides independent control over RBAC resource creation and Trusted CA bundle config map through two separate flags, createRbacResource and createCABundleConfigMaps.

  • createRbacResource (default: true): Controls the creation of RBAC resources only. This flag does not affect the Trusted CA bundle config map.
  • createCABundleConfigMaps (default: true): Controls the creation of the Trusted CA bundle config map and the Service CA bundle config map. This flag must be set to false to disable config map creation.

Example TektonConfig CR

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  params:
    - name: createRbacResource 1
      value: "true"
    - name: createCABundleConfigMaps 2
      value: "true"
    - name: legacyPipelineRbac
      value: "true"

1 Specifies RBAC resource creation.
2 Specifies Trusted CA bundle config map creation.

3.15. Automatic pruning of task runs and pipeline runs

Stale TaskRun and PipelineRun objects and their executed instances occupy physical resources that can be used for active runs. For optimal utilization of these resources, Red Hat OpenShift Pipelines provides a pruner component that automatically removes unused objects and their instances in various namespaces.

Note

You can configure the pruner for your entire installation by using the TektonConfig custom resource and modify configuration for a namespace by using namespace annotations. However, you cannot selectively auto-prune an individual task run or pipeline run in a namespace.

3.15.1. Configuring the pruner

You can use the TektonConfig custom resource to configure periodic pruning of resources associated with pipeline runs and task runs.

The following example corresponds to the default configuration:

Example of the pruner configuration

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
# ...
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep: 100
    prune-per-resource: false
    schedule: "* 8 * * *"
    startingDeadlineSeconds: 60
# ...

Table 3.3. Supported parameters for pruner configuration

  • schedule: The Cron schedule for running the pruner process. The default schedule runs the process at 08:00 every day. For more information about the Cron schedule syntax, see Cron schedule syntax in the Kubernetes documentation.
  • resources: The resource types to which the pruner applies. The available resource types are taskrun and pipelinerun.
  • keep: The number of most recent resources of every type to keep.
  • prune-per-resource: If set to false, the value of the keep parameter denotes the total number of task runs or pipeline runs. For example, if keep is set to 100, the pruner keeps the 100 most recent task runs and the 100 most recent pipeline runs and removes all other resources. If set to true, the value of the keep parameter is calculated separately for pipeline runs referencing each pipeline and for task runs referencing each task. For example, if keep is set to 100, the pruner keeps the 100 most recent pipeline runs for Pipeline1, the 100 most recent pipeline runs for Pipeline2, the 100 most recent task runs for Task1, and so on, and removes all other resources.
  • keep-since: The maximum time for which to keep resources, in minutes. For example, to retain resources that were created not more than five days ago, set keep-since to 7200.
  • startingDeadlineSeconds: Optional. If the pruner job does not start at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job does not start within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the next scheduled time. If you do not specify this parameter and the pruner job does not start at the scheduled time, OpenShift Pipelines attempts to start the job at any later time possible.

Note

The keep and keep-since parameters are mutually exclusive. Use only one of them in your configuration.
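For example, a time-based configuration uses keep-since in place of keep. This sketch, assuming the same schedule as the default configuration, retains resources created in the last five days (7200 minutes):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep-since: 7200        # minutes; do not combine with the keep parameter
    schedule: "* 8 * * *"
```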

To modify the configuration for automatic pruning of task runs and pipeline runs in a namespace, you can set annotations in the namespace.

The following namespace annotations have the same meanings as the corresponding keys in the TektonConfig custom resource:

  • operator.tekton.dev/prune.schedule
  • operator.tekton.dev/prune.resources
  • operator.tekton.dev/prune.keep
  • operator.tekton.dev/prune.prune-per-resource
  • operator.tekton.dev/prune.keep-since
Note

The operator.tekton.dev/prune.resources annotation accepts a comma-separated list. To prune both task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".

The following additional namespace annotations are available:

  • operator.tekton.dev/prune.skip: When set to true, the namespace for which the annotation is configured is not pruned.
  • operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since.

For example, the following annotations retain all task runs and pipeline runs created in the last five days and delete the older resources:

Example of auto-pruning annotations

kind: Namespace
apiVersion: v1
# ...
metadata:
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.keep-since: "7200"
# ...

3.16. Enabling the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can use the event-based tektonpruner controller to automatically delete completed resources, such as PipelineRuns and TaskRuns, based on configurable policies. Unlike the default job-based pruner, the event-based pruner listens for resource events and prunes resources in near real time.

Important

You must disable the default pruner in the TektonConfig custom resource (CR) before you enable the event-based pruner. If both pruner types are enabled, the deployment readiness status changes to False and the following error message is displayed on the output:

Components not in ready state: Invalid Pruner Configuration!! Both pruners, tektonpruner(event based) and pruner(job based) cannot be enabled simultaneously. Please disable one of them.

Procedure

  1. In your TektonConfig CR, disable the default pruner by setting the spec.pruner.disabled field to true and enable the event-based pruner by setting the spec.tektonpruner.disabled field to false. For example:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
     name: config
    spec:
      # ...
      pruner:
        disabled: true
      # ...
      tektonpruner:
        disabled: false
        options: {}
      # ...

    After you apply the updated CR, the Operator deploys the tekton-pruner-controller pod in the openshift-pipelines namespace.

  2. Ensure that the following config maps are present in the openshift-pipelines namespace:

      • tekton-pruner-default-spec: Defines the default pruning behavior.
      • pruner-info: Stores internal runtime data used by the controller.
      • config-logging-tekton-pruner: Configures logging settings for the pruner.
      • config-observability-tekton-pruner: Enables observability features such as metrics and tracing.

Verification

  1. To verify that the tekton-pruner-controller pod is running, run the following command:

    $ oc get pods -n openshift-pipelines
  2. Verify that the output includes a tekton-pruner-controller pod in the Running state. Example output:

     tekton-pruner-controller-<id>       Running

3.16.1. Configuration of the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can configure the pruning behavior of the event-based pruner by modifying your TektonConfig custom resource (CR).

The following is an example of the TektonConfig CR with the default configuration that uses global pruning rules:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
 name: config
spec:
  # ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: null
      historyLimit: 10
      namespaces: null
      successfulHistoryLimit: null
      ttlSecondsAfterFinished: null
    options: {}
  # ...
  • failedHistoryLimit: The number of retained failed runs.
  • historyLimit: The number of runs to retain. The pruner uses this setting if status-specific limits are not defined.
  • namespaces: Definition of per-namespace pruning policies, used when you set enforcedConfigLevel to namespace.
  • successfulHistoryLimit: The number of retained successful runs.
  • ttlSecondsAfterFinished: The time in seconds after completion, after which the pruner deletes resources.

You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, a 60 second time to live (TTL) is applied to resources in the dev-project namespace:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
 name: config
spec:
  # ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
  # ...

You can use the following parameters in the tektonpruner section of your TektonConfig CR:

  • ttlSecondsAfterFinished: Delete resources a fixed number of seconds after they complete.
  • successfulHistoryLimit: Retain the specified number of the most recent successful runs. Delete older successful runs.
  • failedHistoryLimit: Retain the specified number of the most recent failed runs. Delete older failed runs.
  • historyLimit: Apply a generic history limit when failedHistoryLimit and successfulHistoryLimit are not defined.
  • enforcedConfigLevel: Specify the level at which the pruner applies the configuration. Accepted values: global or namespace.
  • namespaces: Define per-namespace pruning policies.

Note

Use TTL-based pruning to delete resources that exceed a set expiration time. Use history-based pruning to delete resources that exceed the configured history limits.
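For example, the following TektonConfig fragment is an illustrative sketch of history-based pruning; the limit values are arbitrary examples, not recommended defaults:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      # Example values only: keep the 3 newest successful runs
      # and the 5 newest failed runs; delete older ones.
      successfulHistoryLimit: 3
      failedHistoryLimit: 5
```

Because ttlSecondsAfterFinished is not set, only the history limits drive pruning in this sketch.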

3.16.2. Observability metrics of the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The event-based pruner exposes detailed metrics in OpenTelemetry format on port 9090 of the tekton-pruner-controller service. You can use these metrics for monitoring, troubleshooting, and capacity planning.

The exposed metrics fall into the following categories:

  • Resource processing
  • Performance timing
  • State tracking
  • Error monitoring

Most pruner metrics use labels to provide additional context. You can use these labels in PromQL queries or dashboards to filter and group the metrics.
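For example, assuming the metrics are scraped into Prometheus, PromQL queries such as the following could use the labels to filter and group. These are illustrative sketches; the label values shown (operation="ttl", status="skipped", resource_type="pipelinerun") are assumptions, because the excerpt does not document the exact values:

```
# Deletion rate per namespace over the last hour,
# restricted to TTL-based deletions (label value assumed)
sum by (namespace) (
  rate(tekton_pruner_controller_resources_deleted_total{operation="ttl"}[1h])
)

# Rate of skipped PipelineRun processing (label values assumed)
sum(rate(tekton_pruner_controller_resources_processed_total{resource_type="pipelinerun", status="skipped"}[1h]))
```

Check the actual label values emitted by your pruner deployment before relying on these filters.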

  • namespace: The Kubernetes namespace of the PipelineRun or TaskRun.
  • resource_type: The Tekton resource type.
  • status: The outcome of processing a resource.
  • operation: The pruning method that deleted a resource.
  • reason: The specific cause of a skipped or error outcome.

Resource processing metrics

The following resource processing metrics are exposed by the event-based pruner:

  • tekton_pruner_controller_resources_processed_total (Counter): Total resources processed. Labels: namespace, resource_type, status.
  • tekton_pruner_controller_resources_deleted_total (Counter): Total resources deleted. Labels: namespace, resource_type, operation.

Performance timing metrics

The following performance timing metrics are exposed by the event-based pruner:

  • tekton_pruner_controller_reconciliation_duration_seconds (Histogram): Time spent in reconciliation. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.
  • tekton_pruner_controller_ttl_processing_duration_seconds (Histogram): Time spent processing TTL. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.
  • tekton_pruner_controller_history_processing_duration_seconds (Histogram): Time spent processing history limits. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.
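Because the timing metrics are histograms, you can estimate latency percentiles with the PromQL histogram_quantile function. A hedged sketch, assuming Prometheus scrapes the pruner service:

```
# Approximate 95th-percentile reconciliation latency per resource type
histogram_quantile(
  0.95,
  sum by (resource_type, le) (
    rate(tekton_pruner_controller_reconciliation_duration_seconds_bucket[5m])
  )
)
```

Values near the 30-second upper bucket boundary suggest reconciliation is approaching the histogram's measurable range.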

State tracking metrics

The following state tracking metrics are exposed by the event-based pruner:

  • kn_workqueue_adds_total (Counter): Total resources queued.
  • kn_workqueue_depth (Gauge): Number of items currently in the queue.

Error monitoring metrics

The following error monitoring metrics are exposed by the event-based pruner:

  • tekton_pruner_controller_resources_errors_total (Counter): Total processing errors. Labels: namespace, resource_type, reason.
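The error counter lends itself to alerting. The following PrometheusRule is an illustrative sketch, assuming you run the Prometheus Operator; the alert name, threshold, and severity are arbitrary assumptions to adapt for your cluster:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: tekton-pruner-alerts  # hypothetical name
spec:
  groups:
  - name: tekton-pruner
    rules:
    - alert: TektonPrunerErrors  # hypothetical alert name
      # Fire if the pruner reports any processing errors over 15 minutes;
      # the zero threshold is an example, tune it for your workload.
      expr: sum(increase(tekton_pruner_controller_resources_errors_total[15m])) > 0
      for: 15m
      labels:
        severity: warning
```

Grouping the expression by reason instead of summing it can help distinguish transient API errors from configuration problems.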

3.17. Setting additional options for webhooks

Optionally, you can set the failurePolicy, timeoutSeconds, or sideEffects options for the webhooks created by several controllers in OpenShift Pipelines. For more information about these options, see the Kubernetes documentation.

Prerequisites

  • You installed the oc command-line utility.
  • You are logged into your OpenShift Container Platform cluster with administrator rights for the namespace in which OpenShift Pipelines is installed, typically the openshift-pipelines namespace.

Procedure

  1. View the list of webhooks that the OpenShift Pipelines controllers created. There are two types of webhooks: mutating webhooks and validating webhooks.

    1. To view the list of mutating webhooks, enter the following command:

      $ oc get MutatingWebhookConfiguration

      Example output

      NAME                             WEBHOOKS   AGE
      annotation.operator.tekton.dev   1          4m20s
      proxy.operator.tekton.dev        1          4m20s
      webhook.operator.tekton.dev      1          4m22s
      webhook.pipeline.tekton.dev      1          4m20s
      webhook.triggers.tekton.dev      1          3m50s

    2. To view the list of validating webhooks, enter the following command:

      $ oc get ValidatingWebhookConfiguration

      Example output

      NAME                                                 WEBHOOKS   AGE
      config.webhook.operator.tekton.dev                   1          4m24s
      config.webhook.pipeline.tekton.dev                   1          4m22s
      config.webhook.triggers.tekton.dev                   1          3m52s
      namespace.operator.tekton.dev                        1          4m22s
      validation.pipelinesascode.tekton.dev                1          2m49s
      validation.webhook.operator.tekton.dev               1          4m24s
      validation.webhook.pipeline.tekton.dev               1          4m22s
      validation.webhook.triggers.tekton.dev               1          3m52s

  2. In the TektonConfig custom resource (CR), add configuration for mutating and validating webhooks under the section for each of the controllers as necessary, as shown in the following examples. Use the validation.webhook.pipeline.tekton.dev spec for the validating webhooks and the webhook.pipeline.tekton.dev spec for the mutating webhooks.

    Important
    • You cannot set configuration for operator webhooks.
    • All settings are optional. For example, you can set the timeoutSeconds parameter and omit the failurePolicy and sideEffects parameters.

    Example settings for the Pipelines controller

    apiVersion: operator.tekton.dev/v1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        options:
          webhookConfigurationOptions:
            validation.webhook.pipeline.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.pipeline.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Triggers controller

    apiVersion: operator.tekton.dev/v1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      triggers:
        options:
          webhookConfigurationOptions:
            validation.webhook.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Pipelines as Code controller

    apiVersion: operator.tekton.dev/v1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipelinesAsCode:
        options:
          webhookConfigurationOptions:
            validation.pipelinesascode.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            pipelines.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Tekton Hub controller

    apiVersion: operator.tekton.dev/v1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      hub:
        options:
          webhookConfigurationOptions:
            validation.webhook.hub.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.hub.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
