This documentation is for a release that is no longer maintained
See documentation for the latest supported version 3 or the latest supported version 4.
Pipelines
Configuring and using Pipelines in OpenShift Container Platform
Abstract
Chapter 1. Understanding OpenShift Pipelines
OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard Custom Resource Definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
1.1. Key features
- OpenShift Pipelines is a serverless CI/CD system that runs Pipelines with all the required dependencies in isolated containers.
- OpenShift Pipelines is designed for decentralized teams that work on a microservice-based architecture.
- OpenShift Pipelines uses standard CI/CD pipeline definitions that are easy to extend and integrate with existing Kubernetes tools, enabling you to scale on demand.
- You can use OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
- You can use the OpenShift Container Platform Developer Console to create Tekton resources, view logs of Pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces.
1.2. OpenShift Pipelines concepts
OpenShift Pipelines provides a set of standard Custom Resource Definitions (CRDs) that act as the building blocks from which you can assemble a CI/CD pipeline for your application.
- Task
- A Task is the smallest configurable unit in a Pipeline. It is essentially a function of inputs and outputs that form the Pipeline build. It can run individually or as a part of a Pipeline. A Pipeline includes one or more Tasks, where each Task consists of one or more steps. Steps are a series of commands that are sequentially executed by the Task.
- Pipeline
- A Pipeline consists of a series of Tasks that are executed to construct complex workflows that automate the build, deployment, and delivery of applications. It is a collection of PipelineResources, parameters, and one or more Tasks. A Pipeline interacts with the outside world by using PipelineResources, which are added to Tasks as inputs and outputs.
- PipelineRun
- A PipelineRun is the running instance of a Pipeline. A PipelineRun initiates a Pipeline and manages the creation of a TaskRun for each Task being executed in the Pipeline.
- TaskRun
- A TaskRun is automatically created by a PipelineRun for each Task in a Pipeline. It is the result of running an instance of a Task in a Pipeline. It can also be manually created if a Task runs outside of a Pipeline.
- PipelineResource
- A PipelineResource is an object that is used as an input and output for Pipeline Tasks. For example, if an input is a Git repository and an output is a container image built from that Git repository, these are both classified as PipelineResources. PipelineResources currently support Git, Image, Cluster, Storage, and CloudEvent resources.
- Trigger
- A Trigger captures an external event, such as a Git pull request, and processes the event payload to extract key pieces of information. This extracted information is then mapped to a set of predefined parameters, which trigger a series of tasks that may involve creation and deployment of Kubernetes resources. You can use Triggers along with Pipelines to create full-fledged CI/CD systems where the execution is defined entirely through Kubernetes resources.
- Condition
-
A Condition refers to a validation or check that is executed before a Task is run in your Pipeline. Conditions are like if statements that perform logical tests, with a return value of True or False. A Task is executed if all Conditions return True; if any Condition fails, the Task and all subsequent Tasks are skipped. You can use Conditions in your Pipeline to create complex workflows covering multiple scenarios.
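As an illustration, a Condition that checks whether a file exists in the workspace could be sketched as follows. The name, image, and check command here are hypothetical, and the syntax assumes the Tekton v1alpha1 API used by this release:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: file-exists            # hypothetical name, for illustration only
spec:
  params:
    - name: path               # file path to test, relative to the workspace
  resources:
    - name: workspace
      type: git
  check:
    image: alpine              # any small image with a shell works here
    command: ["/bin/sh"]
    args: ["-c", "test -f $(resources.workspace.path)/$(params.path)"]
```

The exit status of the check container determines the Condition result: zero is treated as True, non-zero as False, and the guarded Task is skipped on failure.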
Additional resources
- For information on installing Pipelines, see Installing OpenShift Pipelines.
- For more details on creating custom CI/CD solutions, see Creating applications with CI/CD Pipelines.
Chapter 2. Installing OpenShift Pipelines
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the oc CLI.
- You have installed the OpenShift Pipelines (tkn) CLI on your local system.
2.1. Installing the OpenShift Pipelines Operator in the web console
You can install OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the OpenShift Pipelines Operator, the Custom Resources (CRs) required for the Pipelines configuration are automatically installed along with the Operator.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → OperatorHub.
- Use the Filter by keyword box to search for OpenShift Pipelines Operator in the catalog. Click the OpenShift Pipelines Operator tile.
  Note: Ensure that you do not select the Community version of the OpenShift Pipelines Operator.
- Read the brief description about the Operator on the OpenShift Pipelines Operator page. Click Install.
On the Create Operator Subscription page:
- Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster.
- Select Automatic for the Approval Strategy. This ensures that future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
Select an Update Channel.
- The ocp-<4.x> channel enables installation of the latest stable release of the OpenShift Pipelines Operator.
- The preview channel enables installation of the latest preview version of the OpenShift Pipelines Operator, which may contain features that are not yet available from the 4.x update channel.
- Click Subscribe. You will see the Operator listed on the Installed Operators page.
  Note: The Operator is installed automatically into the openshift-operators namespace.
- Verify that the Status is set to Succeeded Up to date to confirm successful installation of the OpenShift Pipelines Operator.
2.2. Installing the OpenShift Pipelines Operator using the CLI
You can install the OpenShift Pipelines Operator from the OperatorHub using the CLI.
Procedure
Create a Subscription object YAML file to subscribe a namespace to the OpenShift Pipelines Operator, for example, sub.yaml:
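A minimal Subscription could be sketched as follows. The package name, channel, and catalog source shown here are assumptions and may differ in your cluster's OperatorHub catalog:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: <channel-name>                 # for example, an ocp-<4.x> or preview channel
  name: openshift-pipelines-operator-rh   # assumed package name; verify with: oc get packagemanifests
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```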
Create the Subscription object:

$ oc apply -f sub.yaml

The OpenShift Pipelines Operator is now installed in the default target namespace openshift-operators.
Additional resources
- You can learn more about installing Operators on OpenShift Container Platform in the adding Operators to a cluster section.
Chapter 3. Uninstalling OpenShift Pipelines
Uninstalling the OpenShift Pipelines Operator is a two-step process:
- Delete the Custom Resources (CRs) that were added by default when you installed the OpenShift Pipelines Operator.
- Uninstall the OpenShift Pipelines Operator.
Uninstalling only the Operator will not remove the OpenShift Pipelines components created by default when the Operator is installed.
3.1. Deleting the OpenShift Pipelines components and Custom Resources
Delete the Custom Resources (CRs) created by default during installation of the OpenShift Pipelines Operator.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → Custom Resource Definitions.
- Type config.operator.tekton.dev in the Filter by name box to search for the OpenShift Pipelines Operator CRs.
- Click CRD Config to see the Custom Resource Definition Details page.
- Click the Actions drop-down menu and select Delete Custom Resource Definition.
  Note: Deleting the CRs deletes the OpenShift Pipelines components, and all the Tasks and Pipelines on the cluster will be lost.
- Click Delete to confirm the deletion of the CRs.
3.2. Uninstalling the OpenShift Pipelines Operator
Procedure
- From the Operators → OperatorHub page, use the Filter by keyword box to search for OpenShift Pipelines Operator.
- Click the OpenShift Pipelines Operator tile. The Operator tile indicates that it is installed.
- In the OpenShift Pipelines Operator descriptor page, click Uninstall.
Additional resources
- You can learn more about uninstalling Operators on OpenShift Container Platform in the deleting Operators from a cluster section.
Chapter 4. Creating applications with OpenShift Pipelines
With OpenShift Pipelines, you can create a customized CI/CD solution to build, test, and deploy your application.
To create a full-fledged, self-serving CI/CD Pipeline for an application, you must perform the following tasks:
- Create custom Tasks, or install existing reusable Tasks.
- Create a Pipeline and PipelineResources to define the delivery Pipeline for your application.
- Create a PipelineRun to instantiate and invoke the Pipeline.
- Add Triggers to capture any events in the source repository.
This section uses the pipelines-tutorial example to demonstrate the preceding tasks. The example uses a simple application which consists of:
- A front-end interface vote-ui, with the source code in the ui-repo Git repository.
- A back-end interface vote-api, with the source code in the api-repo Git repository.
- The apply-manifests and update-deployment Tasks in the pipelines-tutorial Git repository.
Prerequisites
- You have access to an OpenShift Container Platform cluster.
- You have installed OpenShift Pipelines using the OpenShift Pipelines Operator listed in the OpenShift OperatorHub. Once installed, it is applicable to the entire cluster.
- You have installed the OpenShift Pipelines CLI.
- You have forked the front-end ui-repo and back-end api-repo Git repositories using your GitHub ID.
- You have Administrator access to your repositories.
4.1. Creating a project and checking your Pipeline ServiceAccount
Procedure
Log in to your OpenShift Container Platform cluster:
$ oc login -u <login> -p <password> https://openshift.example.com:6443

Create a project for the sample application. For this example workflow, create the pipelines-tutorial project:

$ oc new-project pipelines-tutorial

Note: If you create a project with a different name, be sure to update the resource URLs used in the example with your project name.

View the pipeline ServiceAccount:

The OpenShift Pipelines Operator adds and configures a ServiceAccount named pipeline that has sufficient permissions to build and push an image. This ServiceAccount is used by the PipelineRun.

$ oc get serviceaccount pipeline
4.2. About Tasks
Tasks are the building blocks of a Pipeline and consist of sequentially executed Steps. Steps are a series of commands that achieve a specific goal, such as building an image.
Every Task runs as a pod and each Step runs in its own container within the same pod. Because Steps run within the same pod, they have access to the same volumes for caching files, configmaps, and secrets.
A Task uses input parameters, such as a Git resource, and output parameters, such as an image in a registry, to interact with other Tasks. Tasks are reusable and can be used in multiple Pipelines.
Here is an example of a Maven Task with a single Step to build a Maven-based Java application.
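A sketch of such a Task is shown below; the Task name and the exact Maven goal are illustrative, and the resource syntax follows the Tekton v1alpha1 API used by this release:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: maven-build            # illustrative name
spec:
  inputs:
    resources:
      - name: workspace-git    # input directory containing the application source
        targetPath: /
        type: git
  steps:
    - name: build
      image: maven:3.6.0-jdk-8-slim
      command:
        - /usr/bin/mvn
      args:
        - install              # assumed Maven goal for the build
```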
This Task starts the pod and runs a container inside that pod using the maven:3.6.0-jdk-8-slim image to run the specified commands. It receives an input directory called workspace-git that contains the source code of the application.
The Task only declares the placeholder for the Git repository; it does not specify which Git repository to use. This allows Tasks to be reused across multiple Pipelines and purposes.
4.3. Creating Pipeline Tasks
Procedure
Install the apply-manifests and update-deployment Tasks from the pipelines-tutorial repository, which contains a list of reusable Tasks for Pipelines:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/01_pipeline/01_apply_manifest_task.yaml
$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/01_pipeline/02_update_deployment_task.yaml

Use the tkn task list command to list the Tasks you created:

$ tkn task list

The output verifies that the apply-manifests and update-deployment Tasks were created:

NAME                DESCRIPTION   AGE
apply-manifests                   1 minute ago
update-deployment                 48 seconds ago

Use the tkn clustertasks list command to list the additional ClusterTasks installed by the Operator, for example, buildah and s2i-python-3:

Note: You must use a privileged Pod container to run the buildah ClusterTask because it requires a privileged security context. To learn more about security context constraints (SCC) for pods, see the Additional resources section.

$ tkn clustertasks list

The output lists the Operator-installed ClusterTasks.
4.4. Defining and creating PipelineResources
PipelineResources are artifacts that are used as inputs or outputs of a Task.
After you create Tasks, create PipelineResources that contain the specifics of the Git repository and the image registry to be used in the Pipeline during execution:
If you are not in the pipelines-tutorial namespace and are using another namespace, ensure that you update the front-end and back-end image resources to the correct URL with your namespace in the steps below. For example:

image-registry.openshift-image-registry.svc:5000/<namespace-name>/vote-api:latest
Procedure
Create a PipelineResource that defines the Git repository for the front-end application:
$ tkn resource create
? Enter a name for a pipeline resource : ui-repo
? Select a resource type to create : git
? Enter a value for url : http://github.com/openshift-pipelines/vote-ui.git
? Enter a value for revision : release-tech-preview-1

The output verifies that the ui-repo PipelineResource was created:

New git resource "ui-repo" has been created

Create a PipelineResource that defines the OpenShift Container Platform internal image registry to which you want to push the front-end image:

$ tkn resource create
? Enter a name for a pipeline resource : ui-image
? Select a resource type to create : image
? Enter a value for url : image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/ui:latest
? Enter a value for digest :

The output verifies that the ui-image PipelineResource was created:

New image resource "ui-image" has been created

Create a PipelineResource that defines the Git repository for the back-end application:

$ tkn resource create
? Enter a name for a pipeline resource : api-repo
? Select a resource type to create : git
? Enter a value for url : http://github.com/openshift-pipelines/vote-api.git
? Enter a value for revision : release-tech-preview-1

The output verifies that the api-repo PipelineResource was created:

New git resource "api-repo" has been created

Create a PipelineResource that defines the OpenShift Container Platform internal image registry to which you want to push the back-end image:

$ tkn resource create
? Enter a name for a pipeline resource : api-image
? Select a resource type to create : image
? Enter a value for url : image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/api:latest
? Enter a value for digest :

The output verifies that the api-image PipelineResource was created:

New image resource "api-image" has been created

View the list of resources created:

$ tkn resource list

The output lists all the PipelineResources that were created:

NAME        TYPE    DETAILS
api-repo    git     url: http://github.com/openshift-pipelines/vote-api.git
ui-repo     git     url: http://github.com/openshift-pipelines/vote-ui.git
api-image   image   url: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/api:latest
ui-image    image   url: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/ui:latest
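Equivalently, a PipelineResource can be defined declaratively in a YAML file and created with oc create -f. For example, a sketch of the ui-repo Git resource created above:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: ui-repo
spec:
  type: git
  params:
    - name: url
      value: http://github.com/openshift-pipelines/vote-ui.git
    - name: revision
      value: release-tech-preview-1
```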
4.5. Assembling a Pipeline
A Pipeline represents a CI/CD flow and is defined by the Tasks to be executed. It specifies how the Tasks interact with each other and their order of execution using the inputs, outputs, and runAfter parameters. It is designed to be generic and reusable across multiple applications and environments.
In this section, you will create a Pipeline that takes the source code of the application from GitHub and then builds and deploys it on OpenShift Container Platform.
The Pipeline performs the following tasks for the back-end application vote-api and front-end application vote-ui:
- Clones the source code of the application from the Git repositories api-repo and ui-repo.
- Builds the container images api-image and ui-image using the buildah ClusterTask.
- Pushes the api-image and ui-image images to the internal image registry.
- Deploys the new images on OpenShift Container Platform using the apply-manifests and update-deployment Tasks.
Procedure
Copy the contents of the following sample Pipeline YAML file and save it:
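For example, a condensed sketch of such a build-and-deploy Pipeline. This is an illustration based on the Tasks created earlier and the buildah ClusterTask, not necessarily the exact file from the tutorial; the resource names, parameter names, and v1alpha1 field layout are assumptions:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  resources:
    - name: git-repo           # placeholder for the application Git repository
      type: git
    - name: image              # placeholder for the output container image
      type: image
  params:
    - name: deployment-name
      type: string
      description: name of the deployment to be patched
  tasks:
    - name: build-image
      taskRef:
        name: buildah
        kind: ClusterTask
      resources:
        inputs:
          - name: source
            resource: git-repo
        outputs:
          - name: image
            resource: image
    - name: apply-manifests
      taskRef:
        name: apply-manifests
      resources:
        inputs:
          - name: source
            resource: git-repo
      runAfter:
        - build-image
    - name: update-deployment
      taskRef:
        name: update-deployment
      resources:
        inputs:
          - name: image
            resource: image
      params:
        - name: deployment          # assumed parameter name of the update-deployment Task
          value: $(params.deployment-name)
      runAfter:
        - apply-manifests
```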
Notice that the Pipeline definition abstracts away the specifics of the Git source repository and image registries to be used during the Pipeline execution.
Create the Pipeline:
$ oc create -f <pipeline-yaml-file-name.yaml>

Alternatively, you can also execute the YAML file directly from the Git repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/01_pipeline/04_pipeline.yaml

Use the tkn pipeline list command to verify that the Pipeline is added to the application:

$ tkn pipeline list

The output verifies that the build-and-deploy Pipeline was created:

NAME               AGE            LAST RUN   STARTED   DURATION   STATUS
build-and-deploy   1 minute ago   ---        ---       ---        ---
4.6. Running a Pipeline
A PipelineRun starts a Pipeline and ties it to the Git and image resources that should be used for the specific invocation. It automatically creates and starts the TaskRuns for each Task in the Pipeline.
Procedure
Start the Pipeline for the back-end application:
$ tkn pipeline start build-and-deploy -r git-repo=api-repo -r image=api-image -p deployment-name=vote-api

Note the PipelineRun ID returned in the command output.

Track the PipelineRun progress:

$ tkn pipelinerun logs <pipelinerun ID> -f

Start the Pipeline for the front-end application:

$ tkn pipeline start build-and-deploy -r git-repo=ui-repo -r image=ui-image -p deployment-name=vote-ui

Note the PipelineRun ID returned in the command output.

Track the PipelineRun progress:

$ tkn pipelinerun logs <pipelinerun ID> -f

After a few minutes, use the tkn pipelinerun list command to verify that the Pipeline ran successfully by listing all the PipelineRuns:

$ tkn pipelinerun list

The output lists the PipelineRuns:

NAME                         STARTED      DURATION     STATUS
build-and-deploy-run-xy7rw   1 hour ago   2 minutes    Succeeded
build-and-deploy-run-z2rz8   1 hour ago   19 minutes   Succeeded

Get the application route:

$ oc get route vote-ui --template='http://{{.spec.host}}'

Note the output of the previous command. You can access the application using this route.

To rerun the last PipelineRun, using the PipelineResources and ServiceAccount of the previous Pipeline, run:

$ tkn pipeline start build-and-deploy --last
4.7. About Triggers
Use Triggers in conjunction with Pipelines to create a full-fledged CI/CD system where the Kubernetes resources define the entire CI/CD execution. Pipeline Triggers capture the external events and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources.
For example, you define a CI/CD workflow using OpenShift Pipelines for your application. The PipelineRun must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change events, and by triggering a PipelineRun that deploys the new image with the latest changes.
Triggers consist of the following main components that work together to form a reusable, decoupled, and self-sustaining CI/CD system:
- EventListeners provide endpoints, or an event sink, that listen for incoming HTTP-based events with a JSON payload. The EventListener performs lightweight event processing on the payload using Event Interceptors, which identify the type of payload and optionally modify it. Currently, Pipeline Triggers support four types of Interceptors: Webhook Interceptors, GitHub Interceptors, GitLab Interceptors, and Common Expression Language (CEL) Interceptors.
- TriggerBindings extract the fields from an event payload and store them as parameters.
- TriggerTemplates specify how to use the parameterized data from the TriggerBindings. A TriggerTemplate defines a resource template that receives input from the TriggerBindings and then performs a series of actions that result in the creation of new PipelineResources and the initiation of a new PipelineRun.
EventListeners tie the concepts of TriggerBindings and TriggerTemplates together. The EventListener listens for the incoming event, handles basic filtering using Interceptors, extracts data using TriggerBindings, and then processes this data to create Kubernetes resources using TriggerTemplates.
4.8. Adding Triggers to a Pipeline
After you have assembled and started the Pipeline for the application, add TriggerBindings, TriggerTemplates, and an EventListener to capture GitHub events.
Procedure
Copy the content of the following sample TriggerBinding YAML file and save it:
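For example, a TriggerBinding that extracts the repository URL, repository name, and commit ID from a GitHub push payload could be sketched as follows. The parameter names are illustrative and must match what the TriggerTemplate expects:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: vote-app
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.url)
    - name: git-repo-name
      value: $(body.repository.name)
    - name: git-revision
      value: $(body.head_commit.id)
```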
Create the TriggerBinding:

$ oc create -f <triggerbinding-yaml-file-name.yaml>

Alternatively, you can create the TriggerBinding directly from the pipelines-tutorial Git repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/03_triggers/01_binding.yaml

Copy the content of the following sample TriggerTemplate YAML file and save it:
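A condensed sketch of such a TriggerTemplate is shown below. It consumes the parameters extracted by the TriggerBinding and creates the PipelineResources and a PipelineRun for the build-and-deploy Pipeline; the field names follow the Tekton Triggers v1alpha1 API and are illustrative:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: vote-app
spec:
  params:
    - name: git-repo-url
      description: the Git repository URL
    - name: git-revision
      description: the Git revision
      default: release-tech-preview-1
    - name: git-repo-name
      description: the name of the deployment to be created or patched
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineResource
      metadata:
        name: $(params.git-repo-name)-git-repo-$(uid)
      spec:
        type: git
        params:
          - name: revision
            value: $(params.git-revision)
          - name: url
            value: $(params.git-repo-url)
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineResource
      metadata:
        name: $(params.git-repo-name)-image-$(uid)
      spec:
        type: image
        params:
          - name: url
            value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(params.git-repo-name):latest
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        name: build-deploy-$(params.git-repo-name)-$(uid)
      spec:
        serviceAccountName: pipeline     # the ServiceAccount created by the Operator
        pipelineRef:
          name: build-and-deploy
        resources:
          - name: git-repo
            resourceRef:
              name: $(params.git-repo-name)-git-repo-$(uid)
          - name: image
            resourceRef:
              name: $(params.git-repo-name)-image-$(uid)
        params:
          - name: deployment-name
            value: $(params.git-repo-name)
```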
Create the TriggerTemplate:

$ oc create -f <triggertemplate-yaml-file-name.yaml>

Alternatively, you can create the TriggerTemplate directly from the pipelines-tutorial Git repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/03_triggers/02_template.yaml

Copy the contents of the following sample EventListener YAML file and save it:
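A sketch of an EventListener that ties the binding and template together is shown below. The name vote-app would produce a service named el-vote-app, matching the route exposed in the next step; the binding and template names and the pipeline ServiceAccount are assumptions:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: vote-app               # results in a service named el-vote-app
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - name: vote-app       # the TriggerBinding created earlier
      template:
        name: vote-app         # the TriggerTemplate created earlier
```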
Create the EventListener:

$ oc create -f <eventlistener-yaml-file-name.yaml>

Alternatively, you can create the EventListener directly from the pipelines-tutorial Git repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/release-tech-preview-1/03_triggers/03_event_listener.yaml

Expose the EventListener service as an OpenShift Container Platform route to make it publicly accessible:

$ oc expose svc el-vote-app
4.9. Creating Webhooks
Webhooks are HTTP POST messages that are received by the EventListeners whenever a configured event occurs in your repository. The event payload is then mapped to TriggerBindings, and processed by TriggerTemplates. The TriggerTemplates eventually start one or more PipelineRuns, leading to the creation and deployment of Kubernetes resources.
In this section, you will configure a Webhook URL on your forked Git repositories vote-ui and vote-api. This URL points to the publicly accessible EventListener service route.
Adding Webhooks requires administrative privileges to the repository. If you do not have administrative access to your repository, contact your system administrator to add Webhooks.
Procedure
Get the Webhook URL:
$ echo "URL: $(oc get route el-vote-app --template='http://{{.spec.host}}')"

Note the URL obtained in the output.
Configure Webhooks manually on the front-end repository:
- Open the front-end Git repository vote-ui in your browser.
- Click Settings → Webhooks → Add Webhook.
On the Webhooks/Add Webhook page:
- Enter the Webhook URL from step 1 in the Payload URL field.
- Select application/json for the Content type.
- Specify the secret in the Secret field.
- Ensure that Just the push event is selected.
- Select Active
- Click Add Webhook
- Repeat step 2 for the back-end repository vote-api.
4.10. Triggering a PipelineRun
Whenever a push event occurs in the Git repository, the configured Webhook sends an event payload to the publicly exposed EventListener service route. The EventListener service of the application processes the payload, and passes it to the relevant TriggerBindings and TriggerTemplates pair. The TriggerBinding extracts the parameters and the TriggerTemplate uses these parameters to create resources. This may rebuild and redeploy the application.
In this section, you will push an empty commit to the back-end vote-api repository, which will trigger the PipelineRun.
Procedure
From the terminal, clone your forked Git repository vote-api:

$ git clone git@github.com:<your GitHub ID>/vote-api.git -b release-tech-preview-1

Push an empty commit:

$ git commit -m "empty-commit" --allow-empty && git push origin release-tech-preview-1

Check if the PipelineRun was triggered:

$ tkn pipelinerun list

Notice that a new PipelineRun was initiated.
Additional resources
- For more details on pipelines in the Developer perspective, see the working with Pipelines in the Developer perspective section.
- To learn more about Security Context Constraints (SCCs), see the Managing Security Context Constraints section.
- For more examples of reusable Tasks, see the OpenShift Catalog repository. Additionally, you can also see the Tekton Catalog in the Tekton project.
Chapter 5. Working with OpenShift Pipelines using the Developer perspective
You can use the Developer perspective of the OpenShift Container Platform web console to create CI/CD Pipelines for your software delivery process while creating an application on OpenShift Container Platform.
After you create Pipelines, you can view and visually interact with your deployed Pipelines.
Prerequisites

- You have access to an OpenShift Container Platform cluster and have logged in to the web console.
- You have cluster administrator privileges to install Operators and have installed the OpenShift Pipelines Operator.
- You are in the Developer perspective.
- You have created a project.
5.1. Interacting with Pipelines using the Developer perspective
The Pipelines view in the Developer perspective lists all the Pipelines in a project along with details, such as the namespace in which the Pipeline was created, the last PipelineRun, the status of the Tasks in the PipelineRun, the status of the PipelineRun, and the time taken for the run.
Procedure
- In the Pipelines view of the Developer perspective, select a project from the Project drop-down list to see the Pipelines in that project.
- Click on the required Pipeline to see the Pipeline Details page. This provides a visual representation of all the serial and parallel Tasks in the Pipeline. The Tasks are also listed at the lower right of the page. You can click the listed Tasks to view Task details.
Optionally, in the Pipeline Details page:

- Click the Pipeline Runs tab to see the completed, running, or failed runs for the Pipeline. You can use the Options menu to stop a running Pipeline, to rerun a Pipeline using the same parameters and resources as the previous Pipeline execution, or to delete a PipelineRun.
- Click the Parameters tab to see the parameters defined in the Pipeline. You can also add or edit additional parameters as required.
- Click the Resources tab to see the resources defined in the Pipeline. You can also add or edit additional resources as required.
Chapter 6. OpenShift Pipelines release notes
OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project which provides:
- Standard Kubernetes-native pipeline definitions (CRDs).
- Serverless pipelines with no CI server management overhead.
- Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
- Portability across any Kubernetes distribution.
- Powerful CLI for interacting with pipelines.
- Integrated user experience with the Developer perspective of the OpenShift Container Platform web console.
For an overview of OpenShift Pipelines, see Understanding OpenShift Pipelines.
6.1. Getting support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal to learn more about Red Hat Technology Preview features support scope.
For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
6.2. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.0
6.2.1. New features
OpenShift Pipelines Technology Preview (TP) 1.0 is now available on OpenShift Container Platform 4.4. OpenShift Pipelines TP 1.0 is updated to support:
- Tekton Pipelines 0.11.3
- Tekton tkn CLI 0.9.0
- Tekton Triggers 0.4.0
- ClusterTasks based on Tekton Catalog 0.11
In addition to fixes and stability improvements, the following sections highlight what is new in OpenShift Pipelines 1.0.
6.2.1.1. Pipelines
- Support for the v1beta1 API version.
- Support for an improved LimitRange. Previously, a LimitRange had to be specified explicitly for each TaskRun and PipelineRun. Now you no longer need to specify the LimitRange explicitly; the minimum LimitRange across the namespace is used.
- Support for sharing data between Tasks using TaskResults and TaskParams.
- Pipelines can now be configured to not overwrite the HOME environment variable and the workingDir of Steps.
- Similar to Task Steps, sidecars now support script mode.
- You can now specify a different scheduler name in the TaskRun podTemplate.
- Support for variable substitution using Star Array Notation.
- The Tekton Controller can now be configured to monitor an individual namespace.
- A new description field is now added to the specification of Pipeline, Task, ClusterTask, Resource, and Condition.
- Addition of proxy parameters to Git PipelineResources.
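As a minimal illustration of the TaskResults feature mentioned above, a v1beta1 Task can declare a result and write to its path; a consuming Pipeline task then references it as $(tasks.generate-id.results.build-id). This is a sketch: the Task name, result name, and image are illustrative, not part of this release.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-id        # illustrative name
spec:
  results:
    - name: build-id
      description: An identifier produced by this Task for later Tasks to consume
  steps:
    - name: gen
      image: registry.access.redhat.com/ubi8/ubi-minimal   # any shell-capable image works
      script: |
        # Writing to the result path publishes the value as a TaskResult.
        printf '42' > $(results.build-id.path)
```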
6.2.1.2. Pipelines CLI
- The describe subcommand is now added for the following tkn resources: eventlistener, condition, triggertemplate, clustertask, and triggerbinding.
- Support added for v1beta1 to the following commands along with backward compatibility for v1alpha1: clustertask, task, pipeline, pipelinerun, and taskrun.
- The following commands can now list output from all namespaces using the --all-namespaces flag option:
  - tkn task list
  - tkn pipeline list
  - tkn taskrun list
  - tkn pipelinerun list

  The output of these commands is also enhanced to display information without headers using the --no-headers flag option.
- You can now start a Pipeline using default parameter values by specifying the --use-param-defaults flag in the tkn pipeline start command.
- Support for Workspaces is now added to the tkn pipeline start and tkn task start commands.
- A new clustertriggerbinding command is now added with the following subcommands: describe, delete, and list.
- You can now directly start a PipelineRun using a local or remote YAML file.
- The describe subcommand now displays an enhanced and detailed output. With the addition of new fields, such as description, timeout, param description, and sidecar status, the command output now provides more detailed information about a specific tkn resource.
- The tkn task logs command now displays logs directly if only one Task is present in the namespace.
6.2.1.3. Triggers
- Triggers can now create both v1alpha1 and v1beta1 Pipeline resources.
- Support for a new Common Expression Language (CEL) interceptor function, compareSecret. This function securely compares strings to secrets in CEL expressions.
- Support for authentication and authorization at the EventListener Trigger level.
6.2.2. Deprecated features
The following items are deprecated in this release:
- The environment variable HOME and the variable workingDir in the Steps specification are deprecated and might be changed in a future release. Currently, in a Step container, HOME and workingDir are overwritten to /tekton/home and /workspace respectively.

  In a later release, these two fields will not be modified, and will be set to the values defined in the container image and the Task YAML. For this release, use the disable-home-env-overwrite and disable-working-directory-overwrite flags to disable overwriting of the HOME and workingDir variables.
- The following commands are deprecated and might be removed in a future release:
  - tkn pipeline create
  - tkn task create
- The -f flag with the tkn resource create command is now deprecated. It might be removed in a future release.
- The -t flag and the --timeout flag (with seconds format) for the tkn clustertask create command are now deprecated. Only the duration timeout format is now supported, for example 1h30s. These deprecated flags might be removed in a future release.
6.2.3. Known issues
- If you are upgrading from an older version of OpenShift Pipelines, you must delete your existing deployments before upgrading to OpenShift Pipelines version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the OpenShift Pipelines Operator. For more details, see the uninstalling OpenShift Pipelines section.
- Submitting the same v1alpha1 Task more than once results in an error. Use oc replace instead of oc apply when re-submitting a v1alpha1 Task.
- The buildah ClusterTask does not work when a new user is added to a container.

  When the Operator is installed, the --storage-driver flag for the buildah ClusterTask is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage driver results in the failure of the buildah ClusterTask with the following error:
  useradd: /etc/passwd.8: lock file already used
  useradd: cannot lock /etc/passwd; try again later.
--storage-driverflag value tooverlayin thebuildah-task.yamlfile:Login to your cluster as a
cluster-admin:oc login -u <login> -p <password> https://openshift.example.com:6443
$ oc login -u <login> -p <password> https://openshift.example.com:6443Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
oc editcommand to editbuildahClusterTask:oc edit clustertask buildah
$ oc edit clustertask buildahCopy to Clipboard Copied! Toggle word wrap Toggle overflow The current version of the
buildahclustertask YAML file opens in the editor set by yourEDITORenvironment variable.Under the
stepsfield, locate the followingcommandfield:command: ['buildah', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--layers', '-f', '$(params.DOCKERFILE)', '-t', '$(resources.outputs.image.url)', '$(params.CONTEXT)']
command: ['buildah', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--layers', '-f', '$(params.DOCKERFILE)', '-t', '$(resources.outputs.image.url)', '$(params.CONTEXT)']Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the
commandfield with the following:command: ['buildah', '--storage-driver=overlay', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--no-cache', '-f', '$(params.DOCKERFILE)', '-t', '$(params.IMAGE)', '$(params.CONTEXT)']
command: ['buildah', '--storage-driver=overlay', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--no-cache', '-f', '$(params.DOCKERFILE)', '-t', '$(params.IMAGE)', '$(params.CONTEXT)']Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file and exit.
Alternatively, you can also modify the
buildahClusterTask YAML file directly on the web console by navigating to Pipelines → Cluster Tasks → buildah. Select Edit Cluster Task from the Actions menu and replace thecommandfield as shown in the previous procedure.
6.2.4. Fixed issues
- Previously, the DeploymentConfig Task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the Pipeline to fail. With this fix, the deploy task command is now replaced with the oc rollout status command, which waits for the in-progress deployment to finish.
- Support for the APP_NAME parameter is now added in Pipeline templates.
- Previously, the Pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image PipelineResources instead of the user-provided IMAGE_NAME parameter.
- All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI).
- Previously, when the Pipeline was installed in a namespace other than tekton-pipelines, the tkn version command displayed the Pipeline version as unknown. With this fix, the tkn version command now displays the correct Pipeline version in any namespace.
- The -c flag is no longer supported for the tkn version command.
- Non-admin users can now list the ClusterTriggerBindings.
- The EventListener CompareSecret function is now fixed for the CEL Interceptor.
- The list, describe, and start subcommands for task and clustertask now correctly display the output when a Task and a ClusterTask have the same name.
- Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
- In the tekton-pipelines namespace, the timeouts of all TaskRuns and PipelineRuns are now set to the value of the default-timeout-minutes field using the ConfigMap.
- Previously, the Pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.