Chapter 1. Using Tekton Results for OpenShift Pipelines observability
Tekton Results is a service that archives the complete information for every pipeline run and task run. You can prune the PipelineRun and TaskRun resources as necessary and use the Tekton Results API or the opc command line utility to access their YAML manifests and logging information.
1.1. Tekton Results concepts
Tekton Results archives pipeline runs and task runs in the form of results and records.
For every PipelineRun and TaskRun custom resource (CR) that completes running, Tekton Results creates a record.
A result can contain one or several records. A record is always a part of exactly one result.
A result corresponds to a pipeline run, and includes the records for the PipelineRun CR itself and for all the TaskRun CRs that the pipeline run started.
If you start a task run directly, without the use of a pipeline run, Tekton Results creates a result for this task run. This result has the record for the same task run.
Each result has a name that includes the namespace where you created the PipelineRun or TaskRun CR and the UUID of the CR. The format for the result name is <namespace_name>/results/<parent_run_uuid>. In this format, <parent_run_uuid> is the universally unique identifier (UUID) of a pipeline run, or of a task run that you started directly.
Example result name
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed
Each record has a name that includes the name of the result that contains the record and the UUID of the PipelineRun or TaskRun CR to which the record corresponds. The format for the record name is <namespace_name>/results/<parent_run_uuid>/records/<run_uuid>.
Example record name
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621
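As an illustrative sketch (not part of any Tekton tooling), the naming scheme is simple string composition; the UUID values below are taken from the examples above:

```python
# Sketch of the Tekton Results naming scheme; the UUID values are taken
# from the examples in this chapter.
namespace = "results-testing"
parent_run_uuid = "04e2fbf2-8653-405f-bc42-a262bcf02bed"  # UUID of the pipeline run
run_uuid = "e9c736db-5665-441f-922f-7c1d65c9d621"         # UUID of one task run it started

result_name = f"{namespace}/results/{parent_run_uuid}"
record_name = f"{result_name}/records/{run_uuid}"

print(result_name)
print(record_name)
```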
The record includes the full YAML manifest of the TaskRun or PipelineRun CR as it existed after the run completed. This manifest has the specification of the run and any annotations that you specified for it. It also includes information about the outcome of the run, such as the completion time and whether the run was successful.
While the TaskRun or PipelineRun CR exists, you can view the YAML manifest by using the following command:
$ oc get pipelinerun <cr_name> -o yaml
Tekton Results preserves this manifest after you delete the TaskRun or PipelineRun CR and makes it available for viewing and searching.
Example YAML manifest of a pipeline run after its completion
kind: PipelineRun
spec:
  params:
    - name: message
      value: five
  timeouts:
    pipeline: 1h0m0s
  pipelineRef:
    name: echo-pipeline
  taskRunTemplate:
    serviceAccountName: pipeline
status:
  startTime: "2023-08-07T11:41:40Z"
  conditions:
    - type: Succeeded
      reason: Succeeded
      status: "True"
      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
      lastTransitionTime: "2023-08-07T11:41:49Z"
  pipelineSpec:
    tasks:
      - name: echo-task
        params:
          - name: message
            value: five
        taskRef:
          kind: Task
          name: echo-task-pipeline
    params:
      - name: message
        type: string
  completionTime: "2023-08-07T11:41:49Z"
  childReferences:
    - kind: TaskRun
      name: echo-pipeline-run-gmzrx-echo-task
      apiVersion: tekton.dev/v1
      pipelineTaskName: echo-task
metadata:
  uid: 62c3b02e-f12b-416c-9771-c02af518f6d4
  name: echo-pipeline-run-gmzrx
  labels:
    tekton.dev/pipeline: echo-pipeline
  namespace: releasetest-js5tt
  finalizers:
    - chains.tekton.dev/pipelinerun
  generation: 2
  annotations:
    results.tekton.dev/log: releasetest-js5tt/results/62c3b02e-f12b-416c-9771-c02af518f6d4/logs/c1e49dd8-d641-383e-b708-e3a02b6a4378
    chains.tekton.dev/signed: "true"
    results.tekton.dev/record: releasetest-js5tt/results/62c3b02e-f12b-416c-9771-c02af518f6d4/records/62c3b02e-f12b-416c-9771-c02af518f6d4
    results.tekton.dev/result: releasetest-js5tt/results/62c3b02e-f12b-416c-9771-c02af518f6d4
  generateName: echo-pipeline-run-
  managedFields:
    - time: "2023-08-07T11:41:39Z"
      manager: kubectl-create
      fieldsV1:
        f:spec:
          .: {}
          f:params: {}
          f:pipelineRef:
            .: {}
            f:name: {}
        f:metadata:
          f:generateName: {}
      operation: Update
      apiVersion: tekton.dev/v1
      fieldsType: FieldsV1
    - time: "2023-08-07T11:41:40Z"
      manager: openshift-pipelines-controller
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:tekton.dev/pipeline: {}
      operation: Update
      apiVersion: tekton.dev/v1
      fieldsType: FieldsV1
    - time: "2023-08-07T11:41:49Z"
      manager: openshift-pipelines-chains-controller
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"chains.tekton.dev/pipelinerun": {}
          f:annotations:
            .: {}
            f:chains.tekton.dev/signed: {}
      operation: Update
      apiVersion: tekton.dev/v1
      fieldsType: FieldsV1
    - time: "2023-08-07T11:41:49Z"
      manager: openshift-pipelines-controller
      fieldsV1:
        f:status:
          f:startTime: {}
          f:conditions: {}
          f:pipelineSpec:
            .: {}
            f:tasks: {}
            f:params: {}
          f:completionTime: {}
          f:childReferences: {}
      operation: Update
      apiVersion: tekton.dev/v1
      fieldsType: FieldsV1
      subresource: status
    - time: "2023-08-07T11:42:15Z"
      manager: openshift-pipelines-results-watcher
      fieldsV1:
        f:metadata:
          f:annotations:
            f:results.tekton.dev/log: {}
            f:results.tekton.dev/record: {}
            f:results.tekton.dev/result: {}
      operation: Update
      apiVersion: tekton.dev/v1
      fieldsType: FieldsV1
  resourceVersion: "126429"
  creationTimestamp: "2023-08-07T11:41:39Z"
  deletionTimestamp: "2023-08-07T11:42:23Z"
  deletionGracePeriodSeconds: 0
apiVersion: tekton.dev/v1
You can access every result and record by its name. You can also use Common Expression Language (CEL) queries to search for results and records by the information they contain, including the YAML manifest.
You can configure Tekton Results to forward the logging information of all the tools that ran as a part of a pipeline or task to LokiStack. You can then query Tekton Results for the logging information of the task run associated with a Tekton Results record.
You can also query results and logs by the names of pipeline runs and task runs.
1.2. Configuring Tekton Results
Installing OpenShift Pipelines enables Tekton Results by default.
However, if you want to store and access logging information for your pipeline runs and task runs, you must configure forwarding of this information to LokiStack.
You can optionally complete additional configuration for Tekton Results.
1.2.1. Configuring LokiStack forwarding for logging information
If you want to use Tekton Results to query logging information for task runs, you must install LokiStack and OpenShift Logging on your OpenShift Container Platform cluster and configure forwarding of the logging information to LokiStack.
If you do not configure LokiStack forwarding for logging information, Tekton Results does not store this information or provide it through the command-line interface or the API.
Prerequisites
- You installed the OpenShift CLI (oc) utility.
- You logged in to your OpenShift Container Platform cluster as a cluster administrator.
Procedure
1. On your OpenShift Container Platform cluster, install LokiStack by using the Loki Operator, and install the OpenShift Logging Operator.

2. Create a ClusterLogForwarder.yaml manifest file for the ClusterLogForwarder custom resource (CR) with one of the following YAML manifests, depending on whether you installed OpenShift Logging version 6 or version 5:

   YAML manifest for the ClusterLogForwarder CR if you installed OpenShift Logging version 6

       apiVersion: observability.openshift.io/v1
       kind: ClusterLogForwarder
       metadata:
         name: collector
         namespace: openshift-logging
       spec:
         inputs:
         - application:
             selector:
               matchExpressions:
               - key: app.kubernetes.io/managed-by
                 operator: In
                 values: ["tekton-pipelines", "pipelinesascode.tekton.dev"]
           name: only-tekton
           type: application
         managementState: Managed
         outputs:
         - lokiStack:
             labelKeys:
               application:
                 ignoreGlobal: true
                 labelKeys:
                 - log_type
                 - kubernetes.namespace_name
                 - openshift_cluster_id
             authentication:
               token:
                 from: serviceAccount
             target:
               name: logging-loki
               namespace: openshift-logging
           name: default-lokistack
           tls:
             ca:
               configMapName: openshift-service-ca.crt
               key: service-ca.crt
           type: lokiStack
         pipelines:
         - inputRefs:
           - only-tekton
           name: default-logstore
           outputRefs:
           - default-lokistack
         serviceAccount:
           name: collector

   YAML manifest for the ClusterLogForwarder CR if you installed OpenShift Logging version 5

       apiVersion: "logging.openshift.io/v1"
       kind: ClusterLogForwarder
       metadata:
         name: instance
         namespace: openshift-logging
       spec:
         inputs:
         - name: only-tekton
           application:
             selector:
               matchLabels:
                 app.kubernetes.io/managed-by: tekton-pipelines
         pipelines:
         - name: enable-default-log-store
           inputRefs: [ only-tekton ]
           outputRefs: [ default ]

3. Create the ClusterLogForwarder CR in the openshift-logging namespace by entering the following command:

       $ oc apply -n openshift-logging -f ClusterLogForwarder.yaml

4. Edit the TektonConfig CR by entering the following command:

       $ oc edit TektonConfig config

5. Make the following changes in the result spec:

       apiVersion: operator.tekton.dev/v1alpha1
       kind: TektonConfig
       metadata:
         name: config
       spec:
         result:
           loki_stack_name: logging-loki
           loki_stack_namespace: openshift-logging

   loki_stack_name: The name of the LokiStack CR, typically logging-loki.

   loki_stack_namespace: The name of the namespace where you deployed LokiStack, typically openshift-logging.
1.2.2. Configuring an external database server
Configure Tekton Results to connect to an external PostgreSQL-compatible database for production environments instead of the default internal instance.
Tekton Results stores data in a PostgreSQL database. By default, the installation includes an internal PostgreSQL instance intended for testing and nonproduction use. The default instance does not provide production-grade capabilities, such as automated backups, performance tuning, storage lifecycle management, or support for database-level modifications through the OpenShift Pipelines Operator.
For production environments or business-critical workloads, you can configure Tekton Results to connect to an external PostgreSQL-compatible database that is present in your environment.
Procedure
1. Create a secret with the credentials for connecting to your PostgreSQL server by entering the following command:

       $ oc create secret generic tekton-results-postgres \
         --namespace=openshift-pipelines \
         --from-literal=POSTGRES_USER=<user> \
         --from-literal=POSTGRES_PASSWORD=<password>

2. Edit the TektonConfig custom resource (CR) by entering the following command:

       $ oc edit TektonConfig config

3. Make the following changes in the result spec:

       apiVersion: operator.tekton.dev/v1alpha1
       kind: TektonConfig
       metadata:
         name: config
       spec:
         result:
           is_external_db: true
           db_host: database.example.com
           db_port: 5432

   db_host: Specify the hostname of your PostgreSQL server.

   db_port: Specify the port number of your PostgreSQL server.
1.2.3. Configuring the retention policy for Tekton Results
By default, Tekton Results stores pipeline runs, task runs, events, and logs indefinitely. Indefinite storage uses storage resources unnecessarily and can affect your database performance.
You can configure the retention policy for Tekton Results at the cluster level to remove older results and their associated records and logs.
Procedure
1. Edit the TektonConfig custom resource (CR) by entering the following command:

       $ oc edit TektonConfig config

2. Make the following changes in the result spec:

       apiVersion: operator.tekton.dev/v1alpha1
       kind: TektonConfig
       metadata:
         name: config
       spec:
         result:
           options:
             configMaps:
               config-results-retention-policy:
                 data:
                   runAt: "3 5 * * 0"
                   maxRetention: "30"

   runAt: Specify, in cron format, when to run the pruning job in the database. This example runs the job at 5:03 AM every Sunday.

   maxRetention: Specify how many days to keep the data in the database. This example retains the data for 30 days.
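The runAt value uses standard five-field cron syntax (minute, hour, day of month, month, day of week). As a hedged sketch, the following stdlib-only Python computes the next run time for the "3 5 * * 0" schedule; it handles only this simple minute/hour/day-of-week form, not general cron expressions:

```python
from datetime import datetime, timedelta

# "3 5 * * 0": minute=3, hour=5, any day of month, any month, day of week 0 (Sunday)
minute, hour, _, _, dow = "3 5 * * 0".split()

def next_run(after: datetime) -> datetime:
    """Next occurrence of the schedule strictly after `after` (simple cases only)."""
    t = after.replace(hour=int(hour), minute=int(minute), second=0, microsecond=0)
    # cron counts Sunday as 0; datetime.weekday() counts Monday as 0, so Sunday is 6
    t += timedelta(days=(6 - t.weekday()) % 7)
    if t <= after:
        t += timedelta(days=7)
    return t

print(next_run(datetime(2023, 8, 7)))  # Monday -> the following Sunday at 05:03
```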
1.2.4. Observability metrics for Tekton Results
You can monitor the health and performance of Tekton Results storage and deletion operations by using the metrics exposed by the tekton-results-watcher component. These metrics provide insight into storage latency and deletion timing, and help you identify runs that were deleted before they were successfully stored.
The tekton-results-watcher service exposes the metrics on port 9090 at the /metrics endpoint. If you have configured ServiceMonitor resources for Tekton Results, Prometheus Operator automatically discovers these metrics.
The following are the metric categories:
- Storage performance
- Storage failures
- Deletion duration
- Deletion count
Most Tekton Results metrics use labels to provide additional context, which you can use in PromQL queries or dashboards to filter and group the metrics.
| Label | Description |
|---|---|
| kind | The Tekton resource type: PipelineRun or TaskRun. |
| namespace | The Kubernetes namespace of the run. |
| pipeline | The name of the pipeline. This label is optional. |
| status | The completion status of the run. |
| task | The name of the task. This label is optional and applies only to TaskRuns. |
| taskrun | The name of the TaskRun. This label is optional and applies only to TaskRuns. |
- Storage performance metrics
Tekton Results exposes the following storage performance metrics:
| Name | Type | Description | Labels | Buckets |
|---|---|---|---|---|
| watcher_run_storage_latency_seconds | Histogram | The duration between run completion and successful storage | kind, namespace | 0.1, 0.5, 1, 2, 5, 10, 30, 60, 120, 300, 600, and 1800 seconds |

Note: This metric tracks latency only when the watcher stores a run after completion. When you set DisableStoringIncompleteRuns to false, the watcher stores runs before completion, but records storage latency only when the run completes and the watcher stores it again.
- Storage failure metrics
Tekton Results exposes the following storage failure metrics:
| Name | Type | Description | Labels |
|---|---|---|---|
| runs_not_stored_count | Counter | Total number of runs that the system deletes without successful storage | kind, namespace |

Note: This metric might show inflated values in rare cases, because the system can increment it multiple times during reconciliation when the controller attempts to store the run again or when the system does not complete the deletion immediately.
- Deletion duration metrics
Tekton Results exposes the following deletion duration metrics:
| Name | Type | Description | Labels |
|---|---|---|---|
| watcher_pipelinerun_delete_duration_seconds | Histogram | The duration that the watcher takes to delete the PipelineRun after completion | pipeline, status, namespace |
| watcher_taskrun_delete_duration_seconds | Histogram | The duration that the watcher takes to delete the TaskRun after completion | pipeline, status, task, taskrun, namespace |
- Deletion count metrics
Tekton Results exposes the following deletion count metrics:
| Name | Type | Description | Labels |
|---|---|---|---|
| watcher_pipelinerun_delete_count | Counter | The total count of deleted PipelineRuns | status, namespace |
| watcher_taskrun_delete_count | Counter | The total count of deleted TaskRuns | status, namespace |
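To interpret the histogram buckets listed for watcher_run_storage_latency_seconds earlier in this section: each observation increments the cumulative count of every bucket whose upper bound is at least the observed value. A small Python sketch of the bucket lookup:

```python
import bisect

# Upper bounds (seconds) of the watcher_run_storage_latency_seconds buckets
buckets = [0.1, 0.5, 1, 2, 5, 10, 30, 60, 120, 300, 600, 1800]

def smallest_bucket(latency_seconds: float) -> float:
    """Return the smallest bucket bound that the observation falls under."""
    i = bisect.bisect_left(buckets, latency_seconds)
    return buckets[i] if i < len(buckets) else float("inf")  # implicit +Inf bucket

print(smallest_bucket(7.2))   # 10
print(smallest_bucket(0.05))  # 0.1
```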
1.3. Querying Tekton Results for results and records
You can use the opc command line utility to query Tekton Results for results and records. To install the opc command line utility, install the package for the tkn command line utility. For instructions about installing this package, see Installing tkn.
You can use the names of records and results to retrieve the data in them.
You can search for results and records by using Common Expression Language (CEL) queries. These searches display the universally unique identifiers (UUIDs) of the results or records. You can use the provided examples to create queries for common search types. You can also use reference information to create other queries.
1.3.1. Preparing the opc utility environment for querying Tekton Results
Before you can query Tekton Results, you must prepare the environment for the opc utility.
Prerequisites
- You installed the opc utility.
- You logged in to the OpenShift Container Platform cluster by using the OpenShift CLI (oc).
Procedure
1. Set the RESULTS_API environment variable to the route to the Tekton Results API by entering the following command:

       $ export RESULTS_API=$(oc get route tekton-results-api-service -n openshift-pipelines --no-headers -o custom-columns=":spec.host"):443

2. Create an authentication token for the Tekton Results API by entering the following command:

       $ oc create token <service_account>

   Replace <service_account> with the name of an OpenShift Container Platform service account that has read access to the namespaces where OpenShift Pipelines ran the pipeline runs and task runs. Save the string that this command outputs.

3. Optional: Create the ~/.config/tkn/results.yaml file for automatic authentication with the Tekton Results API. The file must have the following contents:

       address: <tekton_results_route>
       token: <authentication_token>
       ssl:
         roots_file_path: /home/example/cert.pem
         server_name_override: tekton-results-api-service.openshift-pipelines.svc.cluster.local
       service_account:
         namespace: service_acc_1
         name: service_acc_1

   address: The route to the Tekton Results API. Use the same value that you set for RESULTS_API.

   token: The authentication token that the oc create token command created. If you provide this token, it overrides the service_account setting and opc uses this token to authenticate.

   ssl.roots_file_path: The location of the file with the SSL/TLS certificate that you configured for the API endpoint.

   ssl.server_name_override: If you configured a custom target namespace for OpenShift Pipelines, replace openshift-pipelines with the name of this namespace.

   service_account: The name of a service account for authenticating with the Tekton Results API. If you provided the authentication token, you do not need to set the service_account parameters.

   Alternatively, if you do not create the ~/.config/tkn/results.yaml file, you can pass the token to each opc command by using the --authtoken option.
1.3.2. Querying for results and records by name
You can list and query results and records by using their names.
Prerequisites
- You installed the opc utility and prepared its environment to query Tekton Results.
- You installed the jq package.
- If you want to query logging information, you configured log forwarding to LokiStack.
Procedure
1. List the names of all results that correspond to pipeline runs and task runs created in a namespace by entering the following command:

       $ opc results result list --addr ${RESULTS_API} <namespace_name>

   Example command

       $ opc results result list --addr ${RESULTS_API} results-testing

   Example output

       Name                                                          Start                          Update
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed  2023-06-29 02:49:53 +0530 IST  2023-06-29 02:50:05 +0530 IST
       results-testing/results/ad7eb937-90cc-4510-8380-defe51ad793f  2023-06-29 02:49:38 +0530 IST  2023-06-29 02:50:06 +0530 IST
       results-testing/results/d064ce6e-d851-4b4e-8db4-7605a23671e4  2023-06-29 02:49:45 +0530 IST  2023-06-29 02:49:56 +0530 IST

2. List the names of all records in a result by entering the following command:

       $ opc results records list --addr ${RESULTS_API} <result_name>

   Example command

       $ opc results records list --addr ${RESULTS_API} results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed

   Example output

       Name                                                                                                Type                             Start                          Update
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621  tekton.dev/v1.TaskRun            2023-06-29 02:49:53 +0530 IST  2023-06-29 02:49:57 +0530 IST
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/5de23a76-a12b-3a72-8a6a-4f15a3110a3e  results.tekton.dev/v1alpha2.Log  2023-06-29 02:49:57 +0530 IST  2023-06-29 02:49:57 +0530 IST
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/57ce92f9-9bf8-3a0a-aefb-dc20c3e2862d  results.tekton.dev/v1alpha2.Log  2023-06-29 02:50:05 +0530 IST  2023-06-29 02:50:05 +0530 IST
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9a0c21a-f826-42ab-a9d7-a03bcefed4fd  tekton.dev/v1.TaskRun            2023-06-29 02:49:57 +0530 IST  2023-06-29 02:50:05 +0530 IST
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/04e2fbf2-8653-405f-bc42-a262bcf02bed  tekton.dev/v1.PipelineRun        2023-06-29 02:49:53 +0530 IST  2023-06-29 02:50:05 +0530 IST
       results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e6eea2f9-ec80-388c-9982-74a018a548e4  results.tekton.dev/v1alpha2.Log  2023-06-29 02:50:05 +0530 IST  2023-06-29 02:50:05 +0530 IST

3. Retrieve the YAML manifest for a pipeline run or task run from a record by entering the following command:

       $ opc results records get --addr ${RESULTS_API} <record_name> | \
         jq -r .data.value | base64 -d | \
         xargs -0 python3 -c 'import sys, yaml, json; j=json.loads(sys.argv[1]); print(yaml.safe_dump(j))'

   Example command

       $ opc results records get --addr ${RESULTS_API} \
         results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621 | \
         jq -r .data.value | base64 -d | \
         xargs -0 python3 -c 'import sys, yaml, json; j=json.loads(sys.argv[1]); print(yaml.safe_dump(j))'

4. Optional: Retrieve the logging information for a task run from a record by using the log record name. To get the log record name, replace records with logs in the record name. Enter the following command:

       $ opc results logs get --addr ${RESULTS_API} <log_record_name> | jq -r .data | base64 -d

   Example command

       $ opc results logs get --addr ${RESULTS_API} \
         results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/logs/e9c736db-5665-441f-922f-7c1d65c9d621 | \
         jq -r .data | base64 -d
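The jq and base64 stages in the commands above decode the record's data.value field, which holds the run manifest as base64-encoded JSON. A self-contained sketch of that decode step, using a mock record instead of a live API response:

```python
import base64
import json

# Mock of a record returned by the Results API; a real record's data.value
# holds the full PipelineRun or TaskRun manifest as base64-encoded JSON.
manifest_json = json.dumps({"kind": "TaskRun", "metadata": {"name": "echo-task"}})
record = {"data": {"value": base64.b64encode(manifest_json.encode()).decode()}}

# Equivalent of the shell stages: jq -r .data.value | base64 -d
decoded = json.loads(base64.b64decode(record["data"]["value"]))
print(decoded["kind"])  # TaskRun
```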
1.3.3. Searching for results
You can search for results by using Common Expression Language (CEL) queries. For example, you can find results for pipeline runs that did not succeed. However, most of the relevant information is not contained in result objects; to search by names, completion times, and other data, search for records instead.
Prerequisites
- You installed the opc utility and prepared its environment to query Tekton Results.
Procedure
1. Search for results by using a CEL query. Enter the following command:

       $ opc results result list --addr ${RESULTS_API} --filter="<cel_query>" <namespace_name>

   Replace <namespace_name> with the namespace in which you created the pipeline runs or task runs.

Table 1.1. Example CEL queries for results

| Purpose | CEL query |
|---|---|
| The results of all runs that failed | !(summary.status == SUCCESS) |
| The results of all pipeline runs that contained the annotations ann1 and ann2 | summary.annotations.contains('ann1') && summary.annotations.contains('ann2') && summary.type=='PIPELINE_RUN' |
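To illustrate what the first filter above selects, the following sketch evaluates the equivalent logic in Python; it is not the CEL engine, and the result summaries are hypothetical:

```python
# Hypothetical result objects, shaped like the summaries the Results API returns
results = [
    {"uid": "04e2fbf2", "summary": {"status": "SUCCESS", "type": "PIPELINE_RUN"}},
    {"uid": "ad7eb937", "summary": {"status": "FAILURE", "type": "PIPELINE_RUN"}},
    {"uid": "d064ce6e", "summary": {"status": "TIMEOUT", "type": "PIPELINE_RUN"}},
]

# Python equivalent of the CEL filter: !(summary.status == SUCCESS)
failed = [r["uid"] for r in results if not r["summary"]["status"] == "SUCCESS"]
print(failed)  # ['ad7eb937', 'd064ce6e']
```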
1.3.4. Searching for records
You can search for records by using Common Expression Language (CEL) queries. As each record has full YAML information for a pipeline run or task run, you can find records by many different criteria.
Prerequisites
- You installed the opc utility and prepared its environment to query Tekton Results.
Procedure
1. Search for records by using a CEL query. Enter the following command:

       $ opc results records list --addr ${RESULTS_API} --filter="<cel_query>" <namespace_name>/results/-

   Replace <namespace_name> with the namespace in which you created the pipeline runs or task runs.

   Alternatively, search for records within a single result by entering the following command:

       $ opc results records list --addr ${RESULTS_API} --filter="<cel_query>" <result_name>

   Replace <result_name> with the full name of the result.

Table 1.2. Example CEL queries for records

| Purpose | CEL query |
|---|---|
| Records of all task runs or pipeline runs that failed | !(data.status.conditions[0].status == 'True') |
| Records where the name of the TaskRun or PipelineRun custom resource (CR) was run1 | data.metadata.name == 'run1' |
| Records for all task runs that the PipelineRun CR named run1 started | data_type == 'TASK_RUN' && data.metadata.labels['tekton.dev/pipelineRun'] == 'run1' |
| Records of all pipeline runs and task runs associated with a Pipeline CR named pipeline1 | data.metadata.labels['tekton.dev/pipeline'] == 'pipeline1' |
| Records of all pipeline runs associated with a Pipeline CR named pipeline1 | data.metadata.labels['tekton.dev/pipeline'] == 'pipeline1' && data_type == 'PIPELINE_RUN' |
| Records of all task runs where the TaskRun CR name started with hello | data.metadata.name.startsWith('hello') && data_type=='TASK_RUN' |
| Records of all pipeline runs that took more than five minutes to complete | data.status.completionTime - data.status.startTime > duration('5m') && data_type == 'PIPELINE_RUN' |
| Records of all pipeline runs and task runs that completed on October 7, 2023 | data.status.completionTime.getDate() == 7 && data.status.completionTime.getMonth() == 10 && data.status.completionTime.getFullYear() == 2023 |
| Records of all pipeline runs that included three or more tasks | size(data.status.pipelineSpec.tasks) >= 3 && data_type == 'PIPELINE_RUN' |
| Records of all pipeline runs that had annotations containing ann1 | data.metadata.annotations.contains('ann1') && data_type == 'PIPELINE_RUN' |
| Records of all pipeline runs that had annotations containing ann1 where the PipelineRun CR name started with hello | data.metadata.annotations.contains('ann1') && data.metadata.name.startsWith('hello') && data_type == 'PIPELINE_RUN' |
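The duration query above compares timestamp differences. The same comparison in Python, using the start and completion times from the example PipelineRun manifest earlier in this chapter:

```python
from datetime import datetime, timedelta

# Timestamps from the example PipelineRun manifest in this chapter
start = datetime.fromisoformat("2023-08-07T11:41:40")
completion = datetime.fromisoformat("2023-08-07T11:41:49")

# Python equivalent of the CEL filter:
# data.status.completionTime - data.status.startTime > duration('5m')
print(completion - start > timedelta(minutes=5))  # False: this run took 9 seconds
```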
1.3.5. Reference information for searching results
You can use the following fields in Common Expression Language (CEL) queries for results:
| CEL field | Description |
|---|---|
| parent | The namespace in which you created the PipelineRun or TaskRun custom resource (CR). |
| uid | Unique identifier for the result. |
| annotations | Annotations added to the PipelineRun or TaskRun CR. |
| summary | The summary of the result. |
| create_time | The creation time of the result. |
| update_time | The last update time of the result. |
You can use the summary.status field to determine whether the pipeline run was successful. This field can have the following values:
- UNKNOWN
- SUCCESS
- FAILURE
- TIMEOUT
- CANCELLED
Do not enclose the value of this field in quote characters such as " or '.
1.3.6. Reference information for searching records
You can use the following fields in Common Expression Language (CEL) queries for records:
| CEL field | Description | Values |
|---|---|---|
| name | Record name | |
| data_type | Record type identifier | tekton.dev/v1.TaskRun, tekton.dev/v1.PipelineRun, or results.tekton.dev/v1alpha2.Log. You can also use the TASK_RUN and PIPELINE_RUN aliases. |
| data | The YAML data for the task run or pipeline run. In log records, this field has the logging output. | |
Because the data field has the entire YAML data for the task run or pipeline run, you can use all elements of this data in your CEL query. For example, data.status.completionTime has the completion time of the task run or pipeline run.
1.4. Querying results and logs by the names of pipeline runs and task runs
You can use the opc command line utility to query Tekton Results for lists of pipeline runs and task runs. You can then retrieve manifest and log information by using the names of specific runs.
This approach requires different configuration of the opc command line utility, compared to queries for results and records.
Querying results and logs by the names of pipeline runs and task runs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.4.1. Configuring the opc utility for querying results by pipeline run and task run names
Before you can query results from Tekton Results by pipeline run and task run names, you must configure the opc utility.
Prerequisites
- You installed the opc utility.
- You logged in to the OpenShift Container Platform cluster by using the OpenShift CLI (oc).
Procedure
1. Create an authentication token for the Tekton Results API by entering the following command:

       $ oc create token <service_account>

   Replace <service_account> with the name of an OpenShift Container Platform service account that has read access to the namespaces where OpenShift Pipelines ran the pipeline runs and task runs. Save the string that this command outputs.

2. Complete one of the following steps:

   - Configure the opc utility interactively by entering the following command:

         $ opc results config set

     Reply to the prompts that the utility displays. For Token, enter the authentication token that you created.

   - Configure the opc utility from a command by entering the following command:

         $ opc results config set --host="https://tekton-results.example.com" --token="<token>"

     Replace the hostname with the fully qualified domain name of your Tekton Results route. Replace <token> with the authentication token that you generated.
Verification
You can view the configuration that you set for the opc utility by entering the following command:

    $ opc results config view

Example output

    api-path: ""
    apiVersion: results.tekton.dev/v1alpha2
    host: https://tekton-results.openshiftapps.com
    insecure-skip-tls-verify: "true"
    kind: Client
    token: sha256~xyz
1.4.2. Viewing a list of pipeline run names and identifiers
You can use the opc utility to view a list of names and identifiers of pipeline runs in a namespace.
Prerequisites
- You installed the opc utility.
- You configured the opc utility to query results from Tekton Results by pipeline run and task run names.
Procedure
Use any of the following commands to view pipeline runs:

- To view all pipeline runs in a specified namespace, enter the following command:

      $ opc results pipelinerun list -n <namespace_name>

  Optionally, specify the --limit command line option, for example, --limit=10. With this setting, the opc command displays the specified number of lines containing pipeline run names and then exits. If you add the --single-page=false command line option, the command displays the specified number of lines and then prompts you to continue or quit.

  Optionally, specify the --labels command line option, for example, --labels="app.kubernetes.io/name=test-app,app.kubernetes.io/component=database". With this setting, the list includes only the pipeline runs that have the specified labels or annotations.

  Example output of the opc results pipelinerun list command

      NAME                                           UID                                   STARTED     DURATION  STATUS
      openshift-pipelines-main-release-tests-zscq8   78515e3a-8e20-43e8-a064-d2442c2ae845  1 week ago  5s        Failed(CouldntGetPipeline)
      openshift-pipelines-main-release-tests-zrgv6   14226144-2d08-440d-a600-d602ca46cdf6  1 week ago  26m13s    Failed
      openshift-pipelines-main-release-tests-jdc24   e34daea2-66fb-4c7d-9d4b-d9d82a07b6cd  1 week ago  5s        Failed(CouldntGetPipeline)
      openshift-pipelines-main-release-tests-6zj7f   9b3e5d68-70ab-4c23-8872-e7ad7121e60b  1 week ago  5s        Failed(CouldntGetPipeline)
      openshift-pipelines-main-release-tests-kkk9t   2fd28c48-388b-4e6a-9ec3-2bcd9dedebc3  1 week ago  5s        Failed(CouldntGetPipeline)

- To view pipeline runs related to pipelines with specified names, enter the following command:

      $ opc results pipelinerun list <pipeline_name> -n <namespace_name>

  The command lists all pipeline runs for pipelines that have names containing <pipeline_name>. For example, if you specify build, the command displays all pipeline runs related to pipelines named build, build_123, or enhancedbuild.

  Optionally, specify the --limit command line option, for example, --limit=10. With this setting, the opc command displays the specified number of lines containing pipeline run names and then exits. If you add the --single-page=false command line option, the command displays the specified number of lines and then prompts you to continue or quit.
1.4.3. Viewing a list of task run names and identifiers
You can use the opc utility to view a list of the names and identifiers of task runs in a namespace, or of task runs associated with a pipeline run.
Prerequisites
- You installed the opc utility.
- You configured the opc utility to query results from Tekton Results by pipeline run and task run names.
Procedure
To view a list of all task runs in a namespace, enter the following command:
$ opc results taskrun list -n <namespace_name>
Optionally, specify the --limit command line option, for example, --limit=10. With this setting, the opc command displays the specified number of lines containing task run names and then exits. If you add the --single-page=false command line option, the command displays the specified number of lines and then prompts you to continue or quit.
Optionally, specify the --labels command line option, for example, --labels="app.kubernetes.io/name=test-app, app.kubernetes.io/component=database". With this setting, the list includes only the task runs that have the specified labels or annotations.
Example output of the opc results taskrun list command for a namespace
NAME                                                              UID                                    STARTED  DURATION  STATUS
openshift-pipelines-main-release-tests-zrgv6-e2e-test             10d6952f-b926-4e4b-a976-519867969ce7   16d ago  12m41s    Failed
openshift-pipelines-main-release-tests-zrgv6-deploy-operator      ab41b63b-16ec-4a32-8b95-f2678eb5c945   16d ago  22s       Succeeded
openshift-pipelines-main-release-tests-zrgv6-provision-cluster    b374df00-5132-4633-91df-3259670756b3   16d ago  12m30s    Succeeded
operator-main-index-4-18-on-pull-request-ml4ww-show-sbom          c5b77784-cd87-4be8-bc12-28957762f382   16d ago  16s       Succeeded
openshift-c4ae3a5a28e19ffc930e7c2aa758d85c-provision-eaas-space   22535d8e-d360-4143-9c0c-4bd0414a22b0   16d ago  17s       Succeeded
To view a list of task runs associated with a pipeline run, enter the following command:
$ opc results taskrun list --pipelinerun <pipelinerun_name> -n <namespace_name>
Optionally, specify the --limit command line option, for example, --limit=10. With this setting, the opc command displays the specified number of lines containing task run names and then exits. If you add the --single-page=false command line option, the command displays the specified number of lines and then prompts you to continue or quit.
Example output of the opc results taskrun list command for a pipeline run
NAME UID STARTED DURATION STATUS
operator-main-index-4-18-on-pull-request-g95fk-show-sbom 5b405941-0d3e-4f8c-a68a-9ffcc481abf1 16d ago 13s Succeeded
operator-main-index-4-18-on-pul2b222db723593a186d12f1b82f1a1fd9 89588ae7-aa36-4b62-97d1-5634ee201850 16d ago 36s Succeeded
operator-fb80434867bc15d89fea82506058f664-fbc-fips-check-oci-ta 7598d44a-4370-459b-8ef0-ae4165c58ba5 16d ago 5m52s Succeeded
operator-main-index-4-18-on-pull-request-g95fk-validate-fbc fb80d962-807b-4b63-80cb-6a57d383755a 16d ago 1m26s Succeeded
operator-main-index-4-18-on-pull-request-g95fk-apply-tags 8a34b46d-74a9-4f20-9e99-a285f7b258d6 16d ago 13s Succeeded
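Task run names in listings like the one above can be long, and one entry in the sample output even appears truncated, so the UID column is often the more convenient handle for follow-up queries with the --uid flag. The sketch below extracts NAME and UID pairs from sample list output in the same column layout; the task run names are hypothetical, and on a cluster you would pipe real opc results taskrun list output into the same filter.

```shell
# Sample `opc results taskrun list` output (hypothetical task runs).
sample='NAME UID STARTED DURATION STATUS
demo-pr-show-sbom 5b405941-0d3e-4f8c-a68a-9ffcc481abf1 16d ago 13s Succeeded
demo-pr-apply-tags 8a34b46d-74a9-4f20-9e99-a285f7b258d6 16d ago 13s Succeeded'

# NAME is field 1 and UID is field 2. Each UID can then be passed to
# `opc results taskrun describe --uid <uuid>` or `opc results taskrun logs --uid <uuid>`.
pairs=$(printf '%s\n' "$sample" | awk 'NR > 1 {print $1 "=" $2}')
echo "$pairs"
```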
1.4.4. Viewing result information for a pipeline run
You can use the opc utility to view a description of when and how a pipeline run completed, a full manifest for the pipeline run, and any logs that the pipeline run produced.
Prerequisites
- You installed the opc utility.
- You configured the opc utility to query results from Tekton Results by pipeline run and task run names.
- You have the name or universally unique identifier (UUID) of the pipeline run. You can use the opc results pipelinerun list command to view the names and UUIDs of pipeline runs for which results are available.
Procedure
Use any of the following commands to view the result information for a pipeline run:
To view a description of when and how the pipeline run completed, enter the following command:
$ opc results pipelinerun describe -n <namespace_name> <pipelinerun_name>
Or, you can use the pipeline run UUID instead of the name:
$ opc results pipelinerun describe -n <namespace_name> --uid <pipelinerun_uuid>
Example output of the opc results pipelinerun describe command
Name:              operator-main-index-4-18-on-pull-request-7kssl
Namespace:         tekton-ecosystem-tenant
Service Account:   appstudio-pipeline
Labels:
 app.kubernetes.io/managed-by=pipelinesascode.tekton.dev
 app.kubernetes.io/version=v0.33.0
Annotations:
 appstudio.openshift.io/snapshot=openshift-pipelines-main-b7jj6
 build.appstudio.openshift.io/repo=https://github.com/openshift-pipelines/operator?rev=ba5e62e51af0c88bc6c3fd4201e789bdfc093daa

📌 Status

STARTED   DURATION   STATUS
27d ago   9m54s      Succeeded

⏱ Timeouts

 Pipeline:   2h0m0s

⚓ Params

 NAME         VALUE
 • git-url    https://github.com/pramodbindal/operator
 • revision   ba5e62e51af0c88bc6c3fd4201e789bdfc093daa

🗂 Workspaces

 NAME          SUB PATH   WORKSPACE BINDING
 • workspace   ---        VolumeClaimTemplate
 • git-auth    ---        Secret (secret=pac-gitauth-ceqzjt)

📦 Taskruns

 NAME                                                               TASK NAME
 • operator-main-index-4-18-on-pull-request-7kssl-init               init
 • operator-main-index-4-18-on-pull-request-7kssl-clone-repository   clone-repository
To view the full YAML manifest of the pipeline run, enter the following command:
$ opc results pipelinerun describe -n <namespace_name> --output yaml <pipelinerun_name>
Or, you can use the pipeline run UUID instead of the name:
$ opc results pipelinerun describe -n <namespace_name> --output yaml --uid <pipelinerun_uuid>
To view the logs associated with the pipeline run, enter the following command:
$ opc results pipelinerun logs -n <namespace_name> <pipelinerun_name>
Or, you can use the pipeline run UUID instead of the name:
$ opc results pipelinerun logs -n <namespace_name> --uid <pipelinerun_uuid>
The logs that the opc results pipelinerun logs command displays do not include the logs of task runs that completed within this pipeline run. To view those logs, find the names of the task runs in this pipeline run by using the opc results taskrun list --pipelinerun command with the name of the pipeline run. Then use the opc results taskrun logs command to view the logs for each task run.
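The two-step lookup described in the note above can be scripted as a loop. In this sketch, the opc calls that require a configured cluster appear only in comments, and a hypothetical list of task run names stands in for the real list output so that the plumbing itself is visible:

```shell
# On a configured cluster, you would first collect the task run names:
#   taskruns=$(opc results taskrun list --pipelinerun <pipelinerun_name> \
#       -n <namespace_name> | awk 'NR > 1 {print $1}')
# Hypothetical task run names stand in for that output here:
taskruns='demo-pr-init
demo-pr-clone-repository'

collected=''
for tr in $taskruns; do
  # On a cluster, replace this bookkeeping line with:
  #   opc results taskrun logs -n <namespace_name> "$tr"
  collected="$collected fetched-logs-for:$tr"
done
echo "$collected"
```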
1.4.5. Viewing result information for a task run
You can use the opc utility to view a description of when and how a task run completed, a full manifest for the task run, and any logs that the task run produced.
Prerequisites
- You installed the opc utility.
- You configured the opc utility to query results from Tekton Results by pipeline run and task run names.
- You have the name or universally unique identifier (UUID) of the task run. You can use the opc results taskrun list command to view the names and UUIDs of task runs for which results are available.
- If you want to retrieve logs, you configured log forwarding to LokiStack.
Procedure
Use any of the following commands to view the result information for a task run:
To view a description of when and how the task run completed, enter the following command:
$ opc results taskrun describe -n <namespace_name> <taskrun_name>
Or, you can use the task run UUID instead of the name:
$ opc results taskrun describe -n <namespace_name> --uid <taskrun_uuid>
Example output of the opc results taskrun describe command
Name:              operator-main-index-4-18-on-push-gc699-build-images-0
Namespace:         tekton-ecosystem-tenant
Service Account:   appstudio-pipeline
Labels:
 tekton.dev/pipelineTask=build-images
 tekton.dev/task=buildah-remote-oci-ta
Annotations:
 pipelinesascode.tekton.dev/branch=main
 pipelinesascode.tekton.dev/check-run-id=40080193061

📌 Status

STARTED   DURATION   STATUS
28d ago   3m22s      Failed

⚓ Params

 NAME         VALUE
 • PLATFORM   linux-m2xlarge/arm64
 • IMAGE      quay.io/redhat-user-workloads/tekton-ecosystem
To view the full YAML manifest of the task run, enter the following command:
$ opc results taskrun describe -n <namespace_name> --output yaml <taskrun_name>
Or, you can use the task run UUID instead of the name:
$ opc results taskrun describe -n <namespace_name> --output yaml --uid <taskrun_uuid>
To view the logs associated with the task run, enter the following command:
$ opc results taskrun logs -n <namespace_name> <taskrun_name>
Or, you can use the task run UUID instead of the name:
$ opc results taskrun logs -n <namespace_name> --uid <taskrun_uuid>
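Because Tekton Results lets you prune the TaskRun resources themselves, a common follow-up is saving the archived YAML manifests to files. The sketch below builds one describe command per task run; the namespace and task run names are hypothetical, and the generated commands are recorded rather than executed so the block does not require a cluster:

```shell
# Hypothetical namespace and task runs whose archived manifests you want to save.
namespace='results-testing'
taskruns='demo-pr-init
demo-pr-clone-repository'

cmds=''
for tr in $taskruns; do
  # On a configured cluster you would run the command instead of recording it:
  #   opc results taskrun describe -n "$namespace" --output yaml "$tr" > "$tr.yaml"
  cmds="$cmds
opc results taskrun describe -n $namespace --output yaml $tr > $tr.yaml"
done
echo "$cmds"
```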
1.4.6. Short names for command-line arguments
When you use the opc utility to query Tekton Results, you can use short names for pipeline run and task run arguments.
| Full parameter name | Short parameter name |
|---|---|
| pipelinerun | pr |
| taskrun | tr |