Installing and configuring OpenShift Pipelines
Chapter 1. Installing OpenShift Pipelines
This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster.
Prerequisites

- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the oc CLI.
- You have installed the OpenShift Pipelines (tkn) CLI on your local system.
- Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually.
In a cluster with both Windows and Linux nodes, Red Hat OpenShift Pipelines can run only on Linux nodes.
1.1. Installing the Red Hat OpenShift Pipelines Operator in the web console
You can install the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to automatically configure the necessary custom resources (CRs) for your pipelines. This method provides a graphical interface to manage the installation and seamless upgrades of the Operator and its components.
The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev. In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev, tektontriggers.operator.tekton.dev, and tektonaddons.operator.tekton.dev.
If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary.
If you manually changed your existing installation, such as changing the target namespace in the config.operator.tekton.dev CRD instance (the resource named cluster), then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your existing installation and reinstall the Red Hat OpenShift Pipelines Operator.
The Red Hat OpenShift Pipelines Operator now provides the option to select the components that you want to install by specifying profiles as part of the TektonConfig custom resource (CR). The Operator automatically installs the TektonConfig CR when you install the Operator. The supported profiles are:
- Lite: This profile installs only Tekton Pipelines.
- Basic: This profile installs Tekton Pipelines, Tekton Triggers, Tekton Chains, and Tekton Results.
- All: This is the default profile used when you install the TektonConfig CR. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, Tekton Results, Pipelines as Code, and Tekton add-ons. Tekton add-ons include the ClusterTriggerBindings, ConsoleCLIDownload, ConsoleQuickStart, and ConsoleYAMLSample resources, and the tasks and step action definitions available by using the cluster resolver from the openshift-pipelines namespace.
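As a minimal sketch, a TektonConfig CR that selects one of these profiles might look like the following (the lite value is shown only as an illustration; all is the default):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  # Selects which components the Operator installs;
  # valid values are lite, basic, and all (the default).
  profile: lite
  targetNamespace: openshift-pipelines
```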
Procedure
- In the Administrator perspective of the web console, navigate to Operators → OperatorHub.
- Use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile.
- Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install.
On the Install Operator page:
- Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
- Select Automatic for the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) automatically handles future upgrades to the Operator. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Select an Update Channel.
  - The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Currently, it is the default channel for installing the Red Hat OpenShift Pipelines Operator.
  - To install a specific version of the Red Hat OpenShift Pipelines Operator, cluster administrators can use the corresponding pipelines-<version> channel. For example, to install the Red Hat OpenShift Pipelines Operator version 1.8.x, you can use the pipelines-1.8 channel.

  Note: Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are not available. However, in OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator.
- Click Install. The Operator is then listed on the Installed Operators page.
Note: The Operator installs automatically into the openshift-operators namespace.

Verify that the Status displays Succeeded Up to date to confirm successful installation of the Red Hat OpenShift Pipelines Operator.

Warning: The success status might show as Succeeded Up to date even if the installation of other components is still in progress. Therefore, it is important to verify the installation manually in the terminal.
Verify that the Red Hat OpenShift Pipelines Operator installed all components successfully. Log in to the cluster in the terminal, and run the following command:
$ oc get tektonconfig config

Example output

NAME     VERSION   READY   REASON
config   1.20.0    True

If the READY condition is True, the Operator and its components installed successfully.
Additionally, check the components' versions by running the following command:
$ oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac
1.2. Installing the OpenShift Pipelines Operator by using the CLI
You can install the Red Hat OpenShift Pipelines Operator from the OperatorHub by using the command-line interface (CLI) to manage your installation programmatically. You create a Subscription object that subscribes a namespace to the Operator and automates the deployment process.
Procedure
Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml. The file uses the following fields:

- spec.channel: Name of the channel that you want to subscribe to. The pipelines-<version> channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7. The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator.
- spec.name: Name of the Operator to subscribe to.
- spec.source: Name of the CatalogSource object that provides the Operator.
- spec.sourceNamespace: Namespace of the CatalogSource object. Use openshift-marketplace for the default OperatorHub catalog sources.
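A minimal sketch of such a sub.yaml file, assuming the Operator package name openshift-pipelines-operator-rh and the default redhat-operators catalog source (verify these values against the catalogs available on your cluster):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest                        # channel to subscribe to
  name: openshift-pipelines-operator-rh  # Operator package name (assumed)
  source: redhat-operators               # CatalogSource providing the Operator (assumed)
  sourceNamespace: openshift-marketplace
```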
Create the Subscription object by running the following command:

$ oc apply -f sub.yaml

The subscription installs the Red Hat OpenShift Pipelines Operator into the openshift-operators namespace. The Operator automatically installs OpenShift Pipelines into the default openshift-pipelines target namespace.
1.3. Red Hat OpenShift Pipelines Operator in a restricted environment
You can use the Red Hat OpenShift Pipelines Operator to support the installation of pipelines in a restricted network environment. The Operator automatically configures proxy settings for your pipeline containers and resources, ensuring they can operate securely within your network constraints.
The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pods created by the Tekton controllers, based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines, TektonTriggers, Controllers, Webhooks, and Operator Proxy Webhook resources.
By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object.
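For example, a namespace object with this label applied might look like the following sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace   # illustrative namespace name
  labels:
    # Disables the Operator proxy webhook for this namespace
    operator.tekton.dev/disable-proxy: "true"
```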
Chapter 2. Uninstalling OpenShift Pipelines
Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps:
- Delete the custom resources (CRs) for the optional components, TektonHub and TektonResult, if these CRs exist, and then delete the TektonConfig CR.

  Important: If you uninstall the Operator without removing the CRs of optional components, you cannot remove the components later.
- Uninstall the Red Hat OpenShift Pipelines Operator.
- Delete the custom resource definitions (CRDs) of the operator.tekton.dev group.
Uninstalling only the Operator does not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed.
2.1. Deleting the OpenShift Pipelines custom resources
You can remove the OpenShift Pipelines custom resources (CRs) to clean up the configuration before uninstalling the Operator. This involves deleting optional components such as TektonHub and TektonResult, followed by the main TektonConfig CR.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Type TektonHub in the Filter by name field to search for the TektonHub custom resource definition (CRD).
- Click the name of the TektonHub CRD to display the details page for the CRD.
- Click the Instances tab.
- If an instance is displayed, click the Options menu for the displayed instance.
- Select Delete TektonHub.
- Click Delete to confirm the deletion of the CR.
- Repeat these steps, searching for TektonResult and then TektonConfig in the Filter by name box. If any instances are found for these CRDs, delete these instances.
Deleting the CRs also deletes the Red Hat OpenShift Pipelines components and all the tasks and pipelines on the cluster.
If you uninstall the Operator without removing the TektonHub and TektonResult CRs, you cannot remove the Tekton Hub and Tekton Results components later.
2.2. Uninstalling the Red Hat OpenShift Pipelines Operator
You can uninstall the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to remove the OpenShift Pipelines service from your cluster. This process involves deleting the Operator subscription and its associated operand instances.
Procedure
- From the Operators → OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator.
- Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed.
- In the Red Hat OpenShift Pipelines Operator description page, click Uninstall.
- In the Uninstall Operator? window, select Delete all operand instances for this operator, and then click Uninstall.
When you uninstall the OpenShift Pipelines Operator, the uninstallation process deletes all resources within the openshift-pipelines target namespace where OpenShift Pipelines is installed, including the secrets you configured.
2.3. Deleting the custom resource definitions of the operator.tekton.dev group
You can delete the operator.tekton.dev custom resource definitions (CRDs) to fully remove all OpenShift Pipelines traces from your cluster. This step ensures that no residual definitions remain after the Operator uninstallation.
Delete the CustomResourceDefinitions of the operator.tekton.dev group. The Red Hat OpenShift Pipelines Operator creates these CRDs by default during installation.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Type operator.tekton.dev in the Filter by name box to search for the CRDs in the operator.tekton.dev group.
- To delete each of the displayed CRDs, complete the following steps:
  - Click the Options menu.
  - Select Delete CustomResourceDefinition.
  - Click Delete to confirm the deletion of the CRD.
Chapter 3. Customizing configurations in the TektonConfig custom resource
In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR):
- Optimizing OpenShift Pipelines performance, including high-availability mode for the OpenShift Pipelines controller
- Configuring the Red Hat OpenShift Pipelines control plane
- Changing the default service account
- Disabling the service monitor
- Configuring pipeline resolvers
- Disabling pipeline templates
- Disabling the integration of Tekton Hub
- Disabling the automatic creation of RBAC resources
- Pruning of task runs and pipeline runs
3.1. Prerequisites
- You have installed the Red Hat OpenShift Pipelines Operator.
3.2. Performance tuning using the TektonConfig custom resource
You can tune the performance and high availability (HA) of the OpenShift Pipelines controller by editing the TektonConfig custom resource (CR). You can adjust parameters such as replica counts, buckets, and API query limits to optimize the controller for your specific workload requirements.
All fields are optional. If you set them, the Red Hat OpenShift Pipelines Operator includes most of the fields as arguments in the openshift-pipelines-controller deployment under the openshift-pipelines-controller container. The OpenShift Pipelines Operator also updates the buckets field in the config-leader-election config map under the openshift-pipelines namespace.
If you do not specify the values, the OpenShift Pipelines Operator does not update those fields and applies the default values for the OpenShift Pipelines controller.
If you change or remove any of the performance fields, the OpenShift Pipelines Operator updates the openshift-pipelines-controller deployment and the config-leader-election configuration map (if the buckets field changed) and re-creates openshift-pipelines-controller pods.
High-availability (HA) mode applies to the OpenShift Pipelines controller, which creates and starts pods based on pipeline run and task run definitions. Without HA mode, a single pod executes these operations, potentially creating significant delays under a high load.
In HA mode, OpenShift Pipelines uses several pods (replicas) to run these operations. Initially, OpenShift Pipelines assigns every controller operation into a bucket. Each replica picks operations from one or more buckets. If two replicas could pick the same operation at the same time, the controller internally determines a leader that executes this operation.
HA mode does not affect execution of task runs after the pods are created.
| Name | Description | Default value for the OpenShift Pipelines controller |
|---|---|---|
| disable-ha | Enable or disable the high availability (HA) mode. By default, the HA mode is enabled. | false |
| buckets | In HA mode, the number of buckets used to process controller operations. The maximum value is 10. | 1 |
| replicas | In HA mode, the number of pods created to process controller operations. Set this value to the same or a lower number than the buckets value. | 1 |
| threads-per-controller | The number of threads (workers) to use when the work queue of the OpenShift Pipelines controller is processed. | 2 |
| kube-api-qps | The maximum queries per second (QPS) to the cluster control plane from the REST client. | 5.0 |
| kube-api-burst | The maximum burst for a throttle. | 10 |
The OpenShift Pipelines Operator does not control the number of replicas of the OpenShift Pipelines controller. The replicas setting of the deployment determines the number of replicas. For example, to change the number of replicas to 3, enter the following command:
$ oc --namespace openshift-pipelines scale deployment openshift-pipelines-controller --replicas=3
The kube-api-qps and kube-api-burst fields are multiplied by 2 in the OpenShift Pipelines controller. For example, if the kube-api-qps and kube-api-burst values are 10, the actual QPS and burst values become 20.
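As a sketch, these tuning fields are set under the performance section of the pipeline spec in the TektonConfig CR (the field placement is assumed from the descriptions above; verify it against your Operator version):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      disable-ha: false           # HA mode stays enabled
      buckets: 1                  # maximum supported value is 10
      threads-per-controller: 2
      kube-api-qps: 5.0           # doubled internally, so effectively 10
      kube-api-burst: 10          # doubled internally, so effectively 20
```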
3.3. Configuring the Red Hat OpenShift Pipelines control plane
You can configure the OpenShift Pipelines control plane to suit your operational needs by editing the TektonConfig custom resource (CR). Customize settings such as metrics collection, sidecar injection, and service account defaults directly through the OpenShift Container Platform web console as needed.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
- Use the Search by name box to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD). Click TektonConfig to see the CRD details page.
- Click the Instances tab.
- Click the config instance to see the TektonConfig CR details.
- Click the YAML tab.
- Edit the TektonConfig YAML file based on your requirements.
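A minimal sketch of such a TektonConfig CR, combining fields that this chapter discusses (the values shown are illustrative defaults):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  pipeline:
    running-in-environment-with-injected-sidecars: true
    default-service-account: pipeline
    enable-api-fields: stable
```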
3.3.1. Modifiable fields with default values
You can change various default configuration fields in the TektonConfig custom resource (CR) to tailor the behavior of your pipelines. This reference lists the available fields, such as sidecar injection and metric levels, along with their default values and descriptions.
The following list includes all modifiable fields with their default values in the TektonConfig CR:
- running-in-environment-with-injected-sidecars (default: true): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start.

  Note: For clusters that use injected sidecars, setting this field to false can lead to unexpected behavior.
- await-sidecar-readiness (default: true): Set this field to false to stop OpenShift Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. When set to false, tasks run in environments that do not support the downwardAPI volume type.
- default-service-account (default: pipeline): This field contains the default service account name to use for the TaskRun and PipelineRun resources, if none is specified.
- require-git-ssh-secret-known-hosts (default: false): Setting this field to true requires that any Git SSH secret must include the known_hosts field. For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
- enable-tekton-oci-bundles (default: false): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle.
- enable-api-fields (default: stable): You can enable or disable API fields. Acceptable values are stable, beta, or alpha.

  Note: Red Hat OpenShift Pipelines does not support the alpha value.
- enable-provenance-in-status (default: false): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field contains metadata about resources used in the task run and pipeline run, such as the source for fetching a remote task or pipeline definition.
- enable-custom-tasks (default: true): Set this field to false to disable the use of custom tasks in pipelines.
- disable-creds-init (default: false): Set this field to true to prevent OpenShift Pipelines from scanning attached service accounts and injecting any credentials into your steps.
- disable-affinity-assistant (default: true): Set this field to false to enable the affinity assistant for each TaskRun resource sharing a persistent volume claim workspace.
You can modify the default values of the following metrics fields in the TektonConfig CR:
- metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram): Setting these fields determines the duration type for a task or pipeline run. Acceptable values are gauge or histogram.
- metrics.taskrun.level (default: task): This field determines the level of the task run metrics. Acceptable values are taskrun, task, or namespace.
- metrics.pipelinerun.level (default: pipeline): This field determines the level of the pipeline run metrics. Acceptable values are pipelinerun, pipeline, or namespace.
3.3.2. Optional configuration fields
You can configure optional fields in the TektonConfig custom resource (CR) to enable advanced features or override specific defaults. These fields, such as default timeouts and pod templates, are not set by default and allow for fine-grained control over your pipeline execution environment.
The following fields do not have a default value, and are considered only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig custom resource (CR).
- default-timeout-minutes: This field sets the default timeout for the TaskRun and PipelineRun resources, if none is specified when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the task run or pipeline run is timed out and canceled. For example, default-timeout-minutes: 60 sets 60 minutes as the default.
- default-managed-by-label-value: This field contains the default value given to the app.kubernetes.io/managed-by label that is applied to all TaskRun pods, if none is specified. For example, default-managed-by-label-value: tekton-pipelines.
- default-pod-template: This field sets the default TaskRun and PipelineRun pod templates, if none is specified.
- default-cloud-events-sink: This field sets the default CloudEvents sink that is used for the TaskRun and PipelineRun resources, if none is specified.
- default-task-run-workspace-binding: This field contains the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare.
- default-affinity-assistant-pod-template: This field sets the default PipelineRun pod template that is used for affinity assistant pods, if none is specified.
- default-max-matrix-combinations-count: This field contains the default maximum number of combinations generated from a matrix, if none is specified.
3.4. Changing the default service account for OpenShift Pipelines
You can change the default service account used by OpenShift Pipelines for task and pipeline runs to meet your security or operational requirements. By editing the TektonConfig custom resource (CR), you can specify a different service account for pipelines and triggers.
Example TektonConfig CR
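A sketch of the relevant fields, assuming the default-service-account field applies under both the pipeline and trigger specs (replace the placeholder with your service account name):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-service-account: <service_account_name>
  trigger:
    default-service-account: <service_account_name>
```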
3.5. Setting labels and annotations for the OpenShift Pipelines installation namespace
You can apply custom labels and annotations to the openshift-pipelines namespace to integrate with your organization’s metadata standards or tools. You can configure these metadata fields in the TektonConfig custom resource (CR) and apply them.
Changing the name of the openshift-pipelines namespace is not supported.
Specify the labels and annotations by adding them to the spec.targetNamespaceMetadata specification in the TektonConfig custom resource (CR).
Example of setting labels and annotations for the openshift-pipelines namespace
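A sketch of the targetNamespaceMetadata specification with illustrative label and annotation keys:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespace: openshift-pipelines
  targetNamespaceMetadata:
    labels:
      example.com/team: ci          # illustrative label
    annotations:
      example.com/owner: admin      # illustrative annotation
```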
3.6. Setting the resync period for the pipelines controller
You can configure the resync period for the pipelines controller to optimize resource usage in clusters with a large number of pipeline and task runs. By adjusting this interval in the TektonConfig custom resource, you control how often the controller reconciles all resources regardless of events.
The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster with cluster-admin privileges.
Procedure
In the TektonConfig custom resource, configure the resync period for the pipelines controller. For example, you can set the resync period to 24 hours.
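A sketch of such a configuration, assuming the resync period is passed as a controller argument through the options.deployments override in the TektonConfig CR (this structure is an assumption; verify it against your Operator version):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      deployments:
        tekton-pipelines-controller:
          spec:
            template:
              spec:
                containers:
                  - name: tekton-pipelines-controller
                    args:
                      - "-resync-period=24h"   # reconcile all resources every 24 hours
```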
3.7. Disabling the service monitor
You can disable the service monitor in OpenShift Pipelines if you do not need to expose telemetry data or want to reduce resource consumption. This configuration is managed by setting the enableMetrics parameter to false in the TektonConfig custom resource (CR).
Example
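A sketch of disabling the service monitor by using the enableMetrics parameter (its placement under the pipeline params is an assumption; verify it for your version):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    params:
      - name: enableMetrics
        value: "false"   # disables the service monitor
```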
3.8. Configuring pipeline resolvers
You can enable or disable specific pipeline resolvers, such as git, cluster, bundle, and hub resolvers, to control how your pipelines fetch resources. These settings are managed within the TektonConfig custom resource (CR), where you can also provide resolver-specific configurations.
- enable-bundles-resolver
- enable-cluster-resolver
- enable-git-resolver
- enable-hub-resolver
Example
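A sketch that sets all four resolver fields explicitly in the pipeline spec (treating true as the assumed default for each):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true
```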
You can also provide resolver-specific configurations in the TektonConfig CR by defining fields in the map[string]string format for each pipeline resolver:
Example
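For instance, a sketch of git resolver settings in the map[string]string format (the keys shown are illustrative of git-resolver-config options; verify the supported keys for your version):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    git-resolver-config:
      default-revision: main   # revision used when a request does not specify one
      fetch-timeout: 1m        # maximum time allowed for a single Git fetch
```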
3.9. Disabling resolver tasks and pipeline templates
You can disable the automatic installation of resolver tasks and pipeline templates to customize your cluster’s initial state. By modifying the TektonConfig custom resource (CR), you can prevent these default resources from being deployed if they are not required for your environment.
By default, the TektonAddon custom resource (CR) installs resolverTasks and pipelineTemplates resources along with OpenShift Pipelines on the cluster.
Procedure
Edit the TektonConfig CR by running the following command:

$ oc edit TektonConfig config

In the TektonConfig CR, set the resolverTasks and pipelineTemplates parameter values in the .addon.params spec to false.

Important: You can set the value of the pipelineTemplates parameter to true only when the value of the resolverTasks parameter is true.
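A sketch of disabling the resolver task and pipeline template resources (the addon params structure is assumed from the description above):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  addon:
    params:
      - name: resolverTasks
        value: "false"
      - name: pipelineTemplates
        value: "false"
```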
3.10. Disabling the installation of Tekton Triggers
You can disable the automatic installation of Tekton Triggers during the OpenShift Pipelines deployment to manage triggers separately or exclude them from your environment. This is achieved by setting the disabled parameter to true in the TektonConfig custom resource (CR).
The default setting is false.
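A sketch, assuming the trigger component exposes a disabled field in the TektonConfig CR:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    disabled: true   # skips the installation of Tekton Triggers
```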
3.11. Disabling the integration of Tekton Hub
You can disable the Tekton Hub integration in the OpenShift Container Platform web console Developer perspective to customize the user experience. This setting is controlled by the enable-devconsole-integration parameter in the TektonConfig custom resource (CR).
Example of disabling Tekton Hub
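A sketch of disabling the Tekton Hub integration (the placement of the parameter under the hub params is an assumption; verify it for your version):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    params:
      - name: enable-devconsole-integration
        value: "false"   # hides Tekton Hub in the Developer perspective
```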
3.12. Disabling the automatic creation of RBAC resources
You can disable the automatic creation of cluster-wide RBAC resources by using the Red Hat OpenShift Pipelines Operator to improve security and control over permissions. This is done by setting the createRbacResource parameter to false in the TektonConfig custom resource (CR), preventing the creation of potentially privileged role bindings.
The default installation of the Red Hat OpenShift Pipelines Operator creates multiple role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the TektonConfig CR by running the following command:

$ oc edit TektonConfig config

In the TektonConfig CR, set the createRbacResource parameter value to false.
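A sketch, assuming createRbacResource is a top-level param in the TektonConfig spec:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: createRbacResource
      value: "false"   # stops automatic creation of cluster-wide RBAC resources
```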
3.13. Disabling inline specification of pipelines and tasks
You can disable the inline specification of tasks and pipelines to enforce the use of referenced resources and improve security. By configuring the disable-inline-spec field in the TektonConfig custom resource (CR), you can restrict the use of embedded specs in Pipeline, PipelineRun, and TaskRun resources.
By default, OpenShift Pipelines supports inline specification of pipelines and tasks in the following cases:
- You can create a Pipeline CR that includes one or more task specifications.
- You can create a PipelineRun custom resource (CR) that includes a pipeline specification.
- You can create a TaskRun custom resource (CR) that includes a task specification.
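For instance, a minimal sketch of an inline task specification in a TaskRun CR (the resource name and image are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: example-inline-taskrun   # illustrative name
spec:
  taskSpec:                      # inline task specification instead of a taskRef
    steps:
      - name: echo
        image: registry.access.redhat.com/ubi9/ubi-minimal  # illustrative image
        script: |
          echo "hello from an inline task"
```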
You can disable inline specification in some or all of these cases. To disable the inline specification, set the disable-inline-spec field of the .spec.pipeline specification of the TektonConfig CR, as in the following example:
Example configuration that disables inline specification
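A sketch of such a configuration that disables inline specification for all three resource types:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # Any single value or a comma-separated list of values
    disable-inline-spec: "pipeline,pipelinerun,taskrun"
```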
You can set the disable-inline-spec parameter to any single value or to a comma-separated list of multiple values. The following values for the parameter are valid:
| Value | Description |
|---|---|
| pipeline | Disables inline specification in the Pipeline resource. You cannot use a task specification (taskSpec) in a Pipeline CR. |
| pipelinerun | Disables inline specification in the PipelineRun resource. You cannot use a pipeline specification (pipelineSpec) in a PipelineRun CR. |
| taskrun | Disables inline specification in the TaskRun resource. You cannot use a task specification (taskSpec) in a TaskRun CR. |
3.14. Configuration of RBAC and Trusted CA flags
You can independently control the creation of RBAC resources and Trusted CA bundle config maps to customize your OpenShift Pipelines installation. The TektonConfig custom resource (CR) provides specific flags, createRbacResource and createCABundleConfigMaps, to manage these components separately.
| Parameter | Description | Default value |
|---|---|---|
| createRbacResource | Controls the creation of RBAC resources only. This flag does not affect the Trusted CA bundle config map. | true |
| createCABundleConfigMaps | Controls the creation of the Trusted CA bundle config map and Service CA bundle config map. This flag must be set to true for these config maps to be created. | true |
- params[0].name: Specifies RBAC resource creation.
- params[1].name: Specifies Trusted CA bundle config map creation.
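A sketch of both flags set explicitly as TektonConfig params (params[0] controls RBAC resource creation, params[1] controls Trusted CA bundle config map creation):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: createRbacResource         # params[0]
      value: "true"
    - name: createCABundleConfigMaps   # params[1]
      value: "true"
```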
3.15. Automatic pruning of task runs and pipeline runs
You can automatically prune stale TaskRun and PipelineRun resources to free up cluster resources and maintain optimal performance. Red Hat OpenShift Pipelines provides a configurable pruner component that removes unused objects based on your defined policies.
You can configure the pruner for your entire installation by using the TektonConfig custom resource and modify configuration for a namespace by using namespace annotations. However, you cannot selectively auto-prune an individual task run or pipeline run in a namespace.
3.15.1. Configuring the pruner
You can configure the default pruner to automatically remove old TaskRun and PipelineRun resources based on a schedule or resource count. By modifying the TektonConfig custom resource (CR), you can set retention limits and pruning intervals to manage resource usage.
The following example corresponds to the default configuration:
Example of the pruner configuration
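The example block was lost in extraction; the following is a hedged sketch of the `spec.pruner` section. The schedule matches the 08:00 daily default described in the table below, and the `keep` value is illustrative:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    disabled: false
    schedule: "0 8 * * *"   # run the pruner at 08:00 every day
    resources:
      - taskrun
      - pipelinerun
    keep: 100               # illustrative retention count
```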
| Parameter | Description |
|---|---|
| `schedule` | The Cron schedule for running the pruner process. The default schedule runs the process at 08:00 every day. For more information about the Cron schedule syntax, see Cron schedule syntax in the Kubernetes documentation. |
| `resources` | The resource types to which the pruner applies. The available resource types are `taskrun` and `pipelinerun`. |
| `keep` | The number of most recent resources of every type to keep. |
| `prune-per-resource` | If set to `true`, the pruner applies the `keep` limit separately to the runs of every pipeline or task. If set to `false`, the pruner applies the limit to all task runs and pipeline runs together. |
| `keep-since` | The maximum time for which to keep resources, in minutes. For example, to retain resources which were created not more than five days ago, set `keep-since` to `7200`. |
| `startingDeadlineSeconds` | This parameter is optional. If the pruner job is not started at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job is not started within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the next scheduled time. If you do not specify this parameter and the pruner job does not start at the scheduled time, OpenShift Pipelines attempts to start the job at any later time possible. |
The keep and keep-since parameters are mutually exclusive. Use only one of them in your configuration.
3.15.2. Annotations for automatically pruning task runs and pipeline runs
You can customize the pruning behavior for specific namespaces by applying annotations to the Namespace resource. These annotations allow you to override global pruning settings, such as retention limits and schedules, for individual projects.
The following namespace annotations have the same meanings as the corresponding keys in the TektonConfig custom resource:
- `operator.tekton.dev/prune.schedule`
- `operator.tekton.dev/prune.resources`
- `operator.tekton.dev/prune.keep`
- `operator.tekton.dev/prune.prune-per-resource`
- `operator.tekton.dev/prune.keep-since`
The operator.tekton.dev/prune.resources annotation accepts a comma-separated list. To prune both task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".
The following additional namespace annotations are available:
- `operator.tekton.dev/prune.skip`: When set to `true`, the namespace for which the annotation is configured is not pruned.
- `operator.tekton.dev/prune.strategy`: Set the value of this annotation to either `keep` or `keep-since`.
For example, the following annotations retain all task runs and pipeline runs created in the last five days and delete the older resources:
Example of auto-pruning annotations
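The example block was lost in extraction; the following sketch applies the annotations described above to a namespace. The values come from the surrounding text (five days equals 7200 minutes); the namespace name is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>   # replace with your namespace
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.strategy: "keep-since"
    operator.tekton.dev/prune.keep-since: "7200"   # retain resources created in the last 5 days
```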
3.16. Enabling the event-based pruner
The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can enable the event-based pruner to delete completed PipelineRun and TaskRun resources in near real-time. By configuring the tektonpruner controller in the TektonConfig custom resource (CR), you can replace the default scheduled pruner with an event-driven approach for more immediate resource cleanup.
You must disable the default pruner in the TektonConfig custom resource (CR) before you enable the event-based pruner. If both pruner types are enabled, the deployment readiness status changes to False and the following error message is displayed:

Components not in ready state: Invalid Pruner Configuration!! Both pruners, tektonpruner(event based) and pruner(job based) cannot be enabled simultaneously. Please disable one of them.
Procedure
In your `TektonConfig` CR, disable the default pruner by setting the `spec.pruner.disabled` field to `true` and enable the event-based pruner by setting the `spec.tektonpruner.disabled` field to `false`.

After you apply the updated CR, the Operator deploys the `tekton-pruner-controller` pod in the `openshift-pipelines` namespace.

Ensure that the following config maps are present in the `openshift-pipelines` namespace:

| Config map | Purpose |
|---|---|
| `tekton-pruner-default-spec` | Defines the default pruning behavior |
| `pruner-info` | Stores internal runtime data used by the controller |
| `config-logging-tekton-pruner` | Configures logging settings for the pruner |
| `config-observability-tekton-pruner` | Enables observability features such as metrics and tracing |
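The procedure step above can be sketched as the following `TektonConfig` fragment; the field paths come directly from the step:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    disabled: true      # disable the default job-based pruner
  tektonpruner:
    disabled: false     # enable the event-based pruner
```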
Verification
To verify that the `tekton-pruner-controller` pod is running, enter the following command:

$ oc get pods -n openshift-pipelines

Verify that the output includes a `tekton-pruner-controller` pod in the `Running` state. Example output:

tekton-pruner-controller-<id> Running
3.16.1. Configuration of the event-based pruner
The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can fine-tune the event-based pruner by adjusting settings in the TektonConfig custom resource (CR). This reference details the available configuration options, including history limits, time-to-live (TTL) values, and namespace-specific policies.
The following is an example of the TektonConfig CR with the default configuration that uses global pruning rules:
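The example block was lost in extraction; the following is a hedged sketch using the parameters listed below. The values are illustrative rather than the verified defaults, and the exact nesting under `tektonpruner` may differ by Operator version:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:                   # nesting assumed
      enforcedConfigLevel: global
      ttlSecondsAfterFinished: 600   # illustrative value
      successfulHistoryLimit: 3      # illustrative value
      failedHistoryLimit: 3          # illustrative value
      historyLimit: 5                # illustrative value
```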
- `failedHistoryLimit`: The number of failed runs to retain.
- `historyLimit`: The number of runs to retain. The pruner uses this setting if status-specific limits are not defined.
- `namespaces`: Definition of per-namespace pruning policies, when you set `enforcedConfigLevel` to `namespace`.
- `successfulHistoryLimit`: The number of successful runs to retain.
- `ttlSecondsAfterFinished`: Time in seconds after completion, after which the pruner deletes resources.
You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, a 60 second time to live (TTL) is applied to resources in the dev-project namespace:
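The example block was lost in extraction; the following sketch applies the 60 second TTL to the dev-project namespace described above. The nesting of the `namespaces` section is assumed and may differ by Operator version:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:                        # nesting assumed
      enforcedConfigLevel: namespace
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60     # delete runs 60 seconds after completion
```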
You can use the following parameters in the tektonpruner section of your TektonConfig CR:
| Parameter | Description |
|---|---|
| `ttlSecondsAfterFinished` | Delete resources a fixed number of seconds after they complete. |
| `successfulHistoryLimit` | Retain the specified number of the most recent successful runs. Delete older successful runs. |
| `failedHistoryLimit` | Retain the specified number of the most recent failed runs. Delete older failed runs. |
| `historyLimit` | Apply a generic history limit when `successfulHistoryLimit` and `failedHistoryLimit` are not set. |
| `enforcedConfigLevel` | Specify the level at which the pruner applies the configuration. Accepted values include `global` and `namespace`. |
| `namespaces` | Define per-namespace pruning policies. |
You can use TTL-based pruning to remove resources that exceed the set expiration time. Use history-based pruning to remove resources that exceed the configured `historyLimit`.
3.16.2. Observability metrics of the event-based pruner
The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can monitor the performance and health of the event-based pruner by using the metrics that the `tekton-pruner-controller` controller exposes. These metrics, available in OpenTelemetry format on port 9090 of the `tekton-pruner-controller` Service, provide insights into resource processing, error rates, and reconciliation times for monitoring, troubleshooting, and capacity planning.
The exposed metrics fall into the following categories:
- Resource processing
- Performance timing
- State tracking
- Error monitoring
Most pruner metrics use labels to provide additional context. You can use these labels in PromQL queries or dashboards to filter and group the metrics.
| Label | Description |
|---|---|
| `namespace` | The Kubernetes namespace of the processed resource. |
| `resource_type` | The Tekton resource type. |
| `status` | The outcome of processing a resource. |
| `operation` | The pruning method that deleted a resource. |
| `reason` | The specific cause for skipping or error outcomes. |
- Resource processing metrics
The following resource processing metrics are exposed by the event-based pruner:

| Name | Type | Description | Labels |
|---|---|---|---|
| `tekton_pruner_controller_resources_processed_total` | Counter | Total resources processed | `namespace`, `resource_type`, `status` |
| `tekton_pruner_controller_resources_deleted_total` | Counter | Total resources deleted | `namespace`, `resource_type`, `operation` |
- Performance timing metrics
The following performance timing metrics are exposed by the event-based pruner:

| Name | Type | Description | Labels | Buckets |
|---|---|---|---|---|
| `tekton_pruner_controller_reconciliation_duration_seconds` | Histogram | Time spent in reconciliation | `namespace`, `resource_type` | 0.1 to 30 seconds |
| `tekton_pruner_controller_ttl_processing_duration_seconds` | Histogram | Time spent processing TTL | `namespace`, `resource_type` | 0.1 to 30 seconds |
| `tekton_pruner_controller_history_processing_duration_seconds` | Histogram | Time spent processing history limits | `namespace`, `resource_type` | 0.1 to 30 seconds |
- State tracking metrics
The following state tracking metrics are exposed by the event-based pruner:

| Name | Type | Description |
|---|---|---|
| `kn_workqueue_adds_total` | Counter | Total resources queued |
| `kn_workqueue_depth` | Gauge | Number of items currently in the queue |
- Error monitoring metrics
The following error monitoring metrics are exposed by the event-based pruner:

| Name | Type | Description | Labels |
|---|---|---|---|
| `tekton_pruner_controller_resources_errors_total` | Counter | Total processing errors | `namespace`, `resource_type`, `reason` |
3.17. Setting additional options for webhooks
You can configure advanced webhook options, such as failure policies and timeouts, for OpenShift Pipelines controllers to improve stability and error handling. These settings are applied by using the TektonConfig custom resource (CR) and allow you to customize how admission controllers interact with the Kubernetes API server.
Prerequisites
- You installed the `oc` command-line utility.
- You have logged in to your OpenShift Container Platform cluster with administrator rights for the namespace in which OpenShift Pipelines is installed, typically the `openshift-pipelines` namespace.
Procedure
View the list of webhooks that the OpenShift Pipelines controllers created. There are two types of webhooks: mutating webhooks and validating webhooks.
To view the list of mutating webhooks, enter the following command:
$ oc get MutatingWebhookConfiguration

To view the list of validating webhooks, enter the following command:
$ oc get ValidatingWebhookConfiguration
In the `TektonConfig` custom resource (CR), add configuration for mutating and validating webhooks under the section for each of the controllers as necessary, as shown in the following examples. Use the `validation.webhook.pipeline.tekton.dev` spec for the validating webhooks and the `webhook.pipeline.tekton.dev` spec for the mutating webhooks.

Important

- You cannot set configuration for `operator` webhooks.
- All settings are optional. For example, you can set the `timeoutSeconds` parameter and omit the `failurePolicy` and `sideEffects` parameters.
Example settings for the Pipelines controller
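The example blocks for the controllers were lost in extraction; the following is a hedged sketch for the Pipelines controller. The webhook names come from the step above, while the `options.webhookConfigurationOptions` placement and the parameter values are assumptions:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      webhookConfigurationOptions:          # placement assumed
        validation.webhook.pipeline.tekton.dev:   # validating webhook
          failurePolicy: Fail
          timeoutSeconds: 10
          sideEffects: None
        webhook.pipeline.tekton.dev:              # mutating webhook
          failurePolicy: Fail
          timeoutSeconds: 10
          sideEffects: None
```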
Example settings for the Triggers controller

Example settings for the Pipelines as Code controller

Example settings for the Tekton Hub controller