Chapter 1. Creating CI/CD solutions for applications using OpenShift Pipelines


With Red Hat OpenShift Pipelines, you can create a customized CI/CD solution to build, test, and deploy your application.

To create a full-fledged, self-serving CI/CD pipeline for an application, perform the following tasks:

  • Create custom tasks, or install existing reusable tasks.
  • Create and define the delivery pipeline for your application.
  • Provide a storage volume or filesystem that is attached to a workspace for the pipeline execution, using one of the following approaches:

    • Specify a volume claim template that creates a persistent volume claim
    • Specify a persistent volume claim
  • Create a PipelineRun object to instantiate and invoke the pipeline.
  • Add triggers to capture events in the source repository.

This section uses the pipelines-tutorial example to demonstrate the preceding tasks. The example uses a simple application that consists of:

  • A front-end interface, pipelines-vote-ui, with the source code in the pipelines-vote-ui Git repository.
  • A back-end interface, pipelines-vote-api, with the source code in the pipelines-vote-api Git repository.
  • The apply-manifests and update-deployment tasks in the pipelines-tutorial Git repository.

1.1. Prerequisites

  • You have access to an OpenShift Container Platform cluster.
  • You have installed OpenShift Pipelines using the Red Hat OpenShift Pipelines Operator listed in the OpenShift OperatorHub. After installation, the Operator is available across the entire cluster.
  • You have installed the OpenShift Pipelines CLI (tkn).
  • You have forked the front-end pipelines-vote-ui and back-end pipelines-vote-api Git repositories using your GitHub ID, and have administrator access to these repositories.
  • Optional: You have cloned the pipelines-tutorial Git repository.

1.2. Creating a project and checking your pipeline service account

Procedure

  1. Log in to your OpenShift Container Platform cluster:

    $ oc login -u <login> -p <password> https://openshift.example.com:6443
  2. Create a project for the sample application. For this example workflow, create the pipelines-tutorial project:

    $ oc new-project pipelines-tutorial
    Note

    If you create a project with a different name, be sure to update the resource URLs used in the example with your project name.

  3. View the pipeline service account:

    The Red Hat OpenShift Pipelines Operator adds and configures a service account named pipeline that has sufficient permissions to build and push an image. This service account is used by the PipelineRun object.

    $ oc get serviceaccount pipeline
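
    The output confirms that the pipeline service account exists, similar to the following; the secret count and age vary by cluster:

    NAME       SECRETS   AGE
    pipeline   2         10s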

1.3. Creating pipeline tasks

Procedure

  1. Install the apply-manifests and update-deployment task resources from the pipelines-tutorial repository, which contains a list of reusable tasks for pipelines:

    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/01_pipeline/01_apply_manifest_task.yaml
    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/01_pipeline/02_update_deployment_task.yaml
  2. Use the tkn task list command to list the tasks you created:

    $ tkn task list

    The output verifies that the apply-manifests and update-deployment task resources were created:

    NAME                DESCRIPTION   AGE
    apply-manifests                   1 minute ago
    update-deployment                 48 seconds ago
  3. Use the tkn clustertasks list command to list the additional cluster tasks that the Operator installed, such as buildah and s2i-python:

    Note

    To use the buildah cluster task in a restricted environment, you must ensure that the Dockerfile uses an internal image stream as the base image.

    $ tkn clustertasks list

    The output lists the Operator-installed ClusterTask resources:

    NAME                       DESCRIPTION   AGE
    buildah                                  1 day ago
    git-clone                                1 day ago
    s2i-python                               1 day ago
    tkn                                      1 day ago
Important

In Red Hat OpenShift Pipelines 1.10, ClusterTask functionality is deprecated and is planned to be removed in a future release.
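
Because ClusterTask is deprecated, recent versions of the pipelines-tutorial reference the Operator-provided tasks through the cluster resolver in the openshift-pipelines namespace instead. For example, the pipeline in the next section refers to the buildah task as follows:

    taskRef:
      resolver: cluster
      params:
      - name: kind
        value: task
      - name: name
        value: buildah
      - name: namespace
        value: openshift-pipelines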

1.4. Assembling a pipeline

A pipeline represents a CI/CD flow and is defined by the tasks to be executed. It is designed to be generic and reusable in multiple applications and environments.

A pipeline specifies how the tasks interact with each other and their order of execution using the from and runAfter parameters. It uses the workspaces field to specify one or more volumes that each task in the pipeline requires during execution.

In this section, you will create a pipeline that takes the source code of the application from GitHub, and then builds and deploys it on OpenShift Container Platform.

The pipeline performs the following tasks for the back-end application pipelines-vote-api and front-end application pipelines-vote-ui:

  • Clones the source code of the application from the Git repository by referring to the git-url and git-revision parameters.
  • Builds the container image using the buildah task provided in the openshift-pipelines namespace.
  • Pushes the image to the OpenShift image registry by referring to the image parameter.
  • Deploys the new image on OpenShift Container Platform by using the apply-manifests and update-deployment tasks.

Procedure

  1. Copy the contents of the following sample pipeline YAML file and save it:

    apiVersion: tekton.dev/v1
    kind: Pipeline
    metadata:
      name: build-and-deploy
    spec:
      workspaces:
      - name: shared-workspace
      params:
      - name: deployment-name
        type: string
        description: name of the deployment to be patched
      - name: git-url
        type: string
        description: url of the git repo for the code of deployment
      - name: git-revision
        type: string
        description: revision to be used from repo of the code for deployment
        default: "pipelines-1.16"
      - name: IMAGE
        type: string
        description: image to be built from the code
      tasks:
      - name: fetch-repository
        taskRef:
          resolver: cluster
          params:
          - name: kind
            value: task
          - name: name
            value: git-clone
          - name: namespace
            value: openshift-pipelines
        workspaces:
        - name: output
          workspace: shared-workspace
        params:
        - name: URL
          value: $(params.git-url)
        - name: SUBDIRECTORY
          value: ""
        - name: DELETE_EXISTING
          value: "true"
        - name: REVISION
          value: $(params.git-revision)
      - name: build-image
        taskRef:
          resolver: cluster
          params:
          - name: kind
            value: task
          - name: name
            value: buildah
          - name: namespace
            value: openshift-pipelines
        workspaces:
        - name: source
          workspace: shared-workspace
        params:
        - name: IMAGE
          value: $(params.IMAGE)
        runAfter:
        - fetch-repository
      - name: apply-manifests
        taskRef:
          name: apply-manifests
        workspaces:
        - name: source
          workspace: shared-workspace
        runAfter:
        - build-image
      - name: update-deployment
        taskRef:
          name: update-deployment
        params:
        - name: deployment
          value: $(params.deployment-name)
        - name: IMAGE
          value: $(params.IMAGE)
        runAfter:
        - apply-manifests

    The pipeline definition abstracts away the specifics of the Git source repository and image registries. These details are added as params when a pipeline is triggered and executed.

  2. Create the pipeline:

    $ oc create -f <pipeline-yaml-file-name.yaml>

    Alternatively, you can also execute the YAML file directly from the Git repository:

    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/01_pipeline/04_pipeline.yaml
  3. Use the tkn pipeline list command to verify that the pipeline is added to the application:

    $ tkn pipeline list

    The output verifies that the build-and-deploy pipeline was created:

    NAME               AGE            LAST RUN   STARTED   DURATION   STATUS
    build-and-deploy   1 minute ago   ---        ---       ---        ---
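
    Optionally, you can inspect the pipeline before running it, for example with the tkn pipeline describe command:

    $ tkn pipeline describe build-and-deploy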

1.5. Mirroring images to run pipelines in a restricted environment

To run OpenShift Pipelines in a disconnected cluster or a cluster provisioned in a restricted environment, ensure that either the Samples Operator is configured for a restricted network, or a cluster administrator has created a cluster with a mirrored registry.

The following procedure uses the pipelines-tutorial example to create a pipeline for an application in a restricted environment using a cluster with a mirrored registry. To ensure that the pipelines-tutorial example works in a restricted environment, you must mirror the respective builder images to the mirror registry for the front-end interface, pipelines-vote-ui; the back-end interface, pipelines-vote-api; and the cli.

Procedure

  1. Mirror the builder image for the front-end interface, pipelines-vote-ui, to the mirror registry.

    1. Verify that the required image tag is not imported:

      $ oc describe imagestream python -n openshift

      Example output

      Name:			python
      Namespace:		openshift
      [...]
      
      3.8-ubi9 (latest)
        tagged from registry.redhat.io/ubi9/python-38:latest
          prefer registry pullthrough when referencing this tag
      
        Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md.
        Tags: builder, python
        Supports: python:3.8, python
        Example Repo: https://github.com/sclorg/django-ex.git
      
      [...]

    2. Mirror the supported image tag to the private registry:

      $ oc image mirror registry.redhat.io/ubi9/python-39:latest <mirror-registry>:<port>/ubi9/python-39
    3. Import the image:

      $ oc tag <mirror-registry>:<port>/ubi9/python-39 python:latest --scheduled -n openshift

      You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image.

    4. Verify that the images with the given tag have been imported:

      $ oc describe imagestream python -n openshift

      Example output

      Name:			python
      Namespace:		openshift
      [...]
      
      latest
        updates automatically from registry  <mirror-registry>:<port>/ubi9/python-39
      
        *  <mirror-registry>:<port>/ubi9/python-39@sha256:3ee...
      
      [...]

  2. Mirror the builder image for the back-end interface, pipelines-vote-api, to the mirror registry.

    1. Verify that the required image tag is not imported:

      $ oc describe imagestream golang -n openshift

      Example output

      Name:			golang
      Namespace:		openshift
      [...]
      
      1.14.7-ubi8 (latest)
        tagged from registry.redhat.io/ubi8/go-toolset:1.14.7
          prefer registry pullthrough when referencing this tag
      
        Build and run Go applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/golang-container/blob/master/README.md.
        Tags: builder, golang, go
        Supports: golang
        Example Repo: https://github.com/sclorg/golang-ex.git
      
      [...]

    2. Mirror the supported image tag to the private registry:

      $ oc image mirror registry.redhat.io/ubi9/go-toolset:latest <mirror-registry>:<port>/ubi9/go-toolset
    3. Import the image:

      $ oc tag <mirror-registry>:<port>/ubi9/go-toolset golang:latest --scheduled -n openshift

      You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image.

    4. Verify that the images with the given tag have been imported:

      $ oc describe imagestream golang -n openshift

      Example output

      Name:			golang
      Namespace:		openshift
      [...]
      
      latest
        updates automatically from registry <mirror-registry>:<port>/ubi9/go-toolset
      
        * <mirror-registry>:<port>/ubi9/go-toolset@sha256:59a74d581df3a2bd63ab55f7ac106677694bf612a1fe9e7e3e1487f55c421b37
      
      [...]

  3. Mirror the builder image for the cli to the mirror registry.

    1. Verify that the required image tag is not imported:

      $ oc describe imagestream cli -n openshift

      Example output

      Name:                   cli
      Namespace:              openshift
      [...]
      
      latest
        updates automatically from registry quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551
      
        * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551
      
      [...]

    2. Mirror the supported image tag to the private registry:

      $ oc image mirror quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev:latest
    3. Import the image:

      $ oc tag <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev cli:latest --scheduled -n openshift

      You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image.

    4. Verify that the images with the given tag have been imported:

      $ oc describe imagestream cli -n openshift

      Example output

      Name:                   cli
      Namespace:              openshift
      [...]
      
      latest
        updates automatically from registry <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev
      
        * <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551
      
      [...]

1.6. Running a pipeline

A PipelineRun resource starts a pipeline and ties it to the Git and image resources that should be used for the specific invocation. It automatically creates and starts the TaskRun resources for each task in the pipeline.

Procedure

  1. Start the pipeline for the back-end application:

    $ tkn pipeline start build-and-deploy \
        -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/01_pipeline/03_persistent_volume_claim.yaml \
        -p deployment-name=pipelines-vote-api \
        -p git-url=https://github.com/openshift/pipelines-vote-api.git \
        -p IMAGE='image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api' \
        --use-param-defaults

    The previous command uses a volume claim template, which creates a persistent volume claim for the pipeline execution.
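
    The volume claim template file contains a standard persistent volume claim spec. A minimal template, similar to the 03_persistent_volume_claim.yaml file referenced in the command (the exact contents of the repository file may differ), looks like the following:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: source-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi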

  2. To track the progress of the pipeline run, enter the following command:

    $ tkn pipelinerun logs <pipelinerun_id> -f

    The <pipelinerun_id> in the above command is the ID for the PipelineRun that was returned in the output of the previous command.

  3. Start the pipeline for the front-end application:

    $ tkn pipeline start build-and-deploy \
        -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/01_pipeline/03_persistent_volume_claim.yaml \
        -p deployment-name=pipelines-vote-ui \
        -p git-url=https://github.com/openshift/pipelines-vote-ui.git \
        -p IMAGE='image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-ui' \
        --use-param-defaults
  4. To track the progress of the pipeline run, enter the following command:

    $ tkn pipelinerun logs <pipelinerun_id> -f

    The <pipelinerun_id> in the above command is the ID for the PipelineRun that was returned in the output of the previous command.

  5. After a few minutes, use the tkn pipelinerun list command to verify that the pipeline ran successfully by listing all the pipeline runs:

    $ tkn pipelinerun list

    The output lists the pipeline runs:

     NAME                         STARTED      DURATION     STATUS
     build-and-deploy-run-xy7rw   1 hour ago   2 minutes    Succeeded
     build-and-deploy-run-z2rz8   1 hour ago   19 minutes   Succeeded
  6. Get the application route:

    $ oc get route pipelines-vote-ui --template='http://{{.spec.host}}'

    Note the output of the previous command. You can access the application using this route.

  7. To rerun the last pipeline run with the pipeline resources and service account of the previous pipeline run, enter:

    $ tkn pipeline start build-and-deploy --last

1.7. Adding triggers to a pipeline

Triggers enable pipelines to respond to external GitHub events, such as push events and pull requests. After you assemble and start a pipeline for the application, add the TriggerBinding, TriggerTemplate, Trigger, and EventListener resources to capture the GitHub events.

Procedure

  1. Copy the content of the following sample TriggerBinding YAML file and save it:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: TriggerBinding
    metadata:
      name: vote-app
    spec:
      params:
      - name: git-repo-url
        value: $(body.repository.url)
      - name: git-repo-name
        value: $(body.repository.name)
      - name: git-revision
        value: $(body.head_commit.id)
  2. Create the TriggerBinding resource:

    $ oc create -f <triggerbinding-yaml-file-name.yaml>

    Alternatively, you can create the TriggerBinding resource directly from the pipelines-tutorial Git repository:

    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/03_triggers/01_binding.yaml
  3. Copy the content of the following sample TriggerTemplate YAML file and save it:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: TriggerTemplate
    metadata:
      name: vote-app
    spec:
      params:
      - name: git-repo-url
        description: The git repository url
      - name: git-revision
        description: The git revision
        default: pipelines-1.16
      - name: git-repo-name
        description: The name of the deployment to be created / patched
    
      resourcetemplates:
      - apiVersion: tekton.dev/v1
        kind: PipelineRun
        metadata:
          generateName: build-deploy-$(tt.params.git-repo-name)-
        spec:
          taskRunTemplate:
            serviceAccountName: pipeline
          pipelineRef:
            name: build-and-deploy
          params:
          - name: deployment-name
            value: $(tt.params.git-repo-name)
          - name: git-url
            value: $(tt.params.git-repo-url)
          - name: git-revision
            value: $(tt.params.git-revision)
          - name: IMAGE
            value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/$(tt.params.git-repo-name)
          workspaces:
          - name: shared-workspace
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 500Mi

    The template specifies a volume claim template to create a persistent volume claim for defining the storage volume for the workspace. Therefore, you do not need to create a persistent volume claim to provide data storage.

  4. Create the TriggerTemplate resource:

    $ oc create -f <triggertemplate-yaml-file-name.yaml>

    Alternatively, you can create the TriggerTemplate resource directly from the pipelines-tutorial Git repository:

    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/03_triggers/02_template.yaml
  5. Copy the contents of the following sample Trigger YAML file and save it:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: Trigger
    metadata:
      name: vote-trigger
    spec:
      taskRunTemplate:
        serviceAccountName: pipeline
      bindings:
        - ref: vote-app
      template:
        ref: vote-app
  6. Create the Trigger resource:

    $ oc create -f <trigger-yaml-file-name.yaml>

    Alternatively, you can create the Trigger resource directly from the pipelines-tutorial Git repository:

    $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/03_triggers/03_trigger.yaml
  7. Copy the contents of the following sample EventListener YAML file and save it:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: vote-app
    spec:
      taskRunTemplate:
        serviceAccountName: pipeline
      triggers:
        - triggerRef: vote-trigger

    Alternatively, if you have not defined a trigger custom resource, add the binding and template spec to the EventListener YAML file, instead of referring to the name of the trigger:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: vote-app
    spec:
      taskRunTemplate:
        serviceAccountName: pipeline
      triggers:
      - bindings:
        - ref: vote-app
        template:
          ref: vote-app
  8. Create the EventListener resource by performing the following steps:

    • To create an EventListener resource using a secure HTTPS connection:

      1. Add a label to enable the secure HTTPS connection to the EventListener resource:

        $ oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled
      2. Create the EventListener resource:

        $ oc create -f <eventlistener-yaml-file-name.yaml>

        Alternatively, you can create the EventListener resource directly from the pipelines-tutorial Git repository:

        $ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.16/03_triggers/04_event_listener.yaml
      3. Create a route with the re-encrypt TLS termination:

        $ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>

        Alternatively, you can create a re-encrypt TLS termination YAML file to create a secured route.

        Example Re-encrypt TLS Termination YAML of the Secured Route

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: route-passthrough-secured 1
        spec:
          host: <hostname>
          to:
            kind: Service
            name: frontend 2
          tls:
            termination: reencrypt         3
            key: [as in edge termination]
            certificate: [as in edge termination]
            caCertificate: [as in edge termination]
            destinationCACertificate: |-   4
              -----BEGIN CERTIFICATE-----
              [...]
              -----END CERTIFICATE-----

        1 2: The name of the object, which is limited to 63 characters.
        3: The termination field is set to reencrypt. This is the only required tls field.
        4: Required for re-encryption. The destinationCACertificate field specifies a CA certificate to validate the endpoint certificate, securing the connection from the router to the destination pods. If the service is using a service signing certificate, or the administrator has specified a default CA certificate for the router and the service has a certificate signed by that CA, this field can be omitted.

        See oc create route reencrypt --help for more options.

    • To create an EventListener resource using an insecure HTTP connection:

      1. Create the EventListener resource.
      2. Expose the EventListener service as an OpenShift Container Platform route to make it publicly accessible:

        $ oc expose svc el-vote-app
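
        You can verify that the route was created, for example:

        $ oc get route el-vote-app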

1.8. Configuring event listeners to serve multiple namespaces

Note

You can skip this section if you want to create a basic CI/CD pipeline. However, if your deployment strategy involves multiple namespaces, you can configure event listeners to serve multiple namespaces.

To increase the reusability of EventListener objects, cluster administrators can configure and deploy them as multi-tenant event listeners that serve multiple namespaces.

Procedure

  1. Configure cluster-wide fetch permission for the event listener.

    1. Set a service account name to be used in the ClusterRoleBinding and EventListener objects. For example, el-sa.

      Example ServiceAccount.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: el-sa
      ---

    2. In the rules section of the ClusterRole.yaml file, set appropriate permissions for every event listener deployment to function cluster-wide.

      Example ClusterRole.yaml

      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: el-sel-clusterrole
      rules:
      - apiGroups: ["triggers.tekton.dev"]
        resources: ["eventlisteners", "clustertriggerbindings", "clusterinterceptors", "triggerbindings", "triggertemplates", "triggers"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["configmaps", "secrets"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["serviceaccounts"]
        verbs: ["impersonate"]
      ...

    3. Configure cluster role binding with the appropriate service account name and cluster role name.

      Example ClusterRoleBinding.yaml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: el-mul-clusterrolebinding
      subjects:
      - kind: ServiceAccount
        name: el-sa
        namespace: default
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: el-sel-clusterrole
      ...

  2. In the spec parameter of the event listener, add the service account name, for example el-sa. Fill the namespaceSelector parameter with the names of the namespaces that the event listener is intended to serve.

    Example EventListener.yaml

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: namespace-selector-listener
    spec:
      taskRunTemplate:
        serviceAccountName: el-sa
      namespaceSelector:
        matchNames:
        - default
        - foo
    ...

  3. Create a service account with the necessary permissions, for example foo-trigger-sa. Use it in the role binding for the triggers.

    Example ServiceAccount.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: foo-trigger-sa
      namespace: foo
    ...

    Example RoleBinding.yaml

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: triggercr-rolebinding
      namespace: foo
    subjects:
    - kind: ServiceAccount
      name: foo-trigger-sa
      namespace: foo
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: tekton-triggers-eventlistener-roles
    ...

  4. Create a trigger with the appropriate trigger template, trigger binding, and service account name.

    Example Trigger.yaml

    apiVersion: triggers.tekton.dev/v1beta1
    kind: Trigger
    metadata:
      name: trigger
      namespace: foo
    spec:
      taskRunTemplate:
        serviceAccountName: foo-trigger-sa
      interceptors:
        - ref:
            name: "github"
          params:
            - name: "secretRef"
              value:
                secretName: github-secret
                secretKey: secretToken
            - name: "eventTypes"
              value: ["push"]
      bindings:
        - ref: vote-app
      template:
        ref: vote-app
    ...
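
  5. Create the resources from each of the YAML files that you defined, for example:

    $ oc create -f <yaml-file-name.yaml>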

1.9. Creating webhooks

Webhooks are HTTP POST messages that are received by the event listeners whenever a configured event occurs in your repository. The event payload is then mapped to trigger bindings, and processed by trigger templates. The trigger templates eventually start one or more pipeline runs, leading to the creation and deployment of Kubernetes resources.

In this section, you will configure a webhook URL on your forked Git repositories pipelines-vote-ui and pipelines-vote-api. This URL points to the publicly accessible EventListener service route.

Note

Adding webhooks requires administrative privileges to the repository. If you do not have administrative access to your repository, contact your system administrator to add webhooks.

Procedure

  1. Get the webhook URL:

    • For a secure HTTPS connection:

      $ echo "URL: $(oc  get route el-vote-app --template='https://{{.spec.host}}')"
    • For an HTTP (insecure) connection:

      $ echo "URL: $(oc  get route el-vote-app --template='http://{{.spec.host}}')"

      Note the URL obtained in the output.

  2. Configure webhooks manually on the front-end repository:

    1. Open the front-end Git repository pipelines-vote-ui in your browser.
    2. Click Settings → Webhooks → Add Webhook.
    3. On the Webhooks/Add Webhook page:

      1. Enter the webhook URL from step 1 in the Payload URL field.
      2. Select application/json for the Content type.
      3. Specify the secret in the Secret field.
      4. Ensure that Just the push event is selected.
      5. Select Active.
      6. Click Add Webhook.
  3. Repeat step 2 for the back-end repository pipelines-vote-api.

1.10. Triggering a pipeline run

Whenever a push event occurs in the Git repository, the configured webhook sends an event payload to the publicly exposed EventListener service route. The EventListener service of the application processes the payload, and passes it to the relevant TriggerBinding and TriggerTemplate resource pairs. The TriggerBinding resource extracts the parameters, and the TriggerTemplate resource uses these parameters and specifies the way the resources must be created. This may rebuild and redeploy the application.

In this section, you push an empty commit to the front-end pipelines-vote-ui repository, which then triggers the pipeline run.

Procedure

  1. From the terminal, clone your forked Git repository pipelines-vote-ui:

    $ git clone git@github.com:<your GitHub ID>/pipelines-vote-ui.git -b pipelines-1.16
  2. Push an empty commit:

    $ git commit -m "empty-commit" --allow-empty && git push origin pipelines-1.16
  3. Check if the pipeline run was triggered:

    $ tkn pipelinerun list

    Notice that a new pipeline run was initiated.
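
    Because the trigger template sets generateName to build-deploy-$(tt.params.git-repo-name)-, the new pipeline run has a generated name similar to the following example; the suffix and timing vary:

     NAME                                   STARTED          DURATION   STATUS
     build-deploy-pipelines-vote-ui-7grbn   10 seconds ago   ---        Running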

1.11. Enabling monitoring of event listeners for Triggers for user-defined projects

As a cluster administrator, you can gather event listener metrics for the Triggers service in a user-defined project and display them in the OpenShift Container Platform web console by creating a service monitor for each event listener. On receiving an HTTP request, event listeners for the Triggers service return three metrics: eventlistener_http_duration_seconds, eventlistener_event_count, and eventlistener_triggered_resources.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • You have installed the Red Hat OpenShift Pipelines Operator.
  • You have enabled monitoring for user-defined projects.

Procedure

  1. For each event listener, create a service monitor. For example, to view the metrics for the github-listener event listener in the test namespace, create the following service monitor:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        app.kubernetes.io/managed-by: EventListener
        app.kubernetes.io/part-of: Triggers
        eventlistener: github-listener
      annotations:
        networkoperator.openshift.io/ignore-errors: ""
      name: el-monitor
      namespace: test
    spec:
      endpoints:
        - interval: 10s
          port: http-metrics
      jobLabel: name
      namespaceSelector:
        matchNames:
          - test
      selector:
        matchLabels:
          app.kubernetes.io/managed-by: EventListener
          app.kubernetes.io/part-of: Triggers
          eventlistener: github-listener
    ...
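
    Save the file, and then create the service monitor, for example:

    $ oc create -f <service-monitor-yaml-file-name.yaml>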
  2. Test the service monitor by sending a request to the event listener. For example, push an empty commit:

    $ git commit -m "empty-commit" --allow-empty && git push origin main
  3. In the OpenShift Container Platform web console, navigate to Administrator → Observe → Metrics.
  4. To view a metric, search by its name. For example, to view the details of the eventlistener_http_duration_seconds metric for the github-listener event listener, search using the eventlistener_http_duration_seconds keyword.

1.12. Configuring pull request capabilities in GitHub Interceptor

With GitHub Interceptor, you can create logic that validates and filters GitHub webhooks. For example, you can validate the webhook’s origin and filter incoming events based on specified criteria. When you use GitHub Interceptor to filter event data, you can specify the event types that the Interceptor can accept in a field. In Red Hat OpenShift Pipelines, you can use the following capabilities of GitHub Interceptor:

  • Filter pull request events based on the files that have been changed
  • Validate pull requests based on configured GitHub owners

1.12.1. Filtering pull requests using GitHub Interceptor

You can filter GitHub events based on the files that have been changed for push and pull request events. This helps you to execute a pipeline only for relevant changes in your Git repository. GitHub Interceptor adds a comma-delimited list of all the files that have been changed and uses the CEL Interceptor to filter incoming events based on the changed files. The list of changed files is added to the changed_files property of the event payload in the top-level extensions field.

Prerequisites

  • You have installed the Red Hat OpenShift Pipelines Operator.

Procedure

  1. Perform one of the following steps:

    • For a public GitHub repository, set the value of the addChangedFiles parameter to true in the YAML configuration file shown below:

      apiVersion: triggers.tekton.dev/v1beta1
      kind: EventListener
      metadata:
        name: github-add-changed-files-pr-listener
      spec:
        triggers:
          - name: github-listener
            interceptors:
              - ref:
                  name: "github"
                  kind: ClusterInterceptor
                  apiVersion: triggers.tekton.dev
                params:
                - name: "secretRef"
                  value:
                    secretName: github-secret
                    secretKey: secretToken
                - name: "eventTypes"
                  value: ["pull_request", "push"]
                - name: "addChangedFiles"
                  value:
                    enabled: true
              - ref:
                  name: cel
                params:
                - name: filter
                  value: extensions.changed_files.matches('controllers/')
      ...
    • For a private GitHub repository, set the value of the addChangedFiles parameter to true and provide the access token details, secretName and secretKey in the YAML configuration file shown below:

      apiVersion: triggers.tekton.dev/v1beta1
      kind: EventListener
      metadata:
        name: github-add-changed-files-pr-listener
      spec:
        triggers:
          - name: github-listener
            interceptors:
              - ref:
                  name: "github"
                  kind: ClusterInterceptor
                  apiVersion: triggers.tekton.dev
                params:
                - name: "secretRef"
                  value:
                    secretName: github-secret
                    secretKey: secretToken
                - name: "eventTypes"
                  value: ["pull_request", "push"]
                - name: "addChangedFiles"
                  value:
                    enabled: true
                    personalAccessToken:
                      secretName: github-pat
                      secretKey: token
              - ref:
                  name: cel
                params:
                - name: filter
                  value: extensions.changed_files.matches('controllers/')
      ...
  2. Save the configuration file.

1.12.2. Validating pull requests using GitHub Interceptors

You can use GitHub Interceptor to validate the processing of pull requests based on the GitHub owners configured for a repository. This validation helps you to prevent unnecessary execution of a PipelineRun or TaskRun object. GitHub Interceptor processes a pull request only if the user name is listed as an owner or if a configurable comment is issued by an owner of the repository. For example, when you comment /ok-to-test on a pull request as an owner, a PipelineRun or TaskRun is triggered.

Note

Owners are configured in an OWNERS file at the root of the repository.

Prerequisites

  • You have installed the Red Hat OpenShift Pipelines Operator.

Procedure

  1. Create a secret string value.
  2. Configure the GitHub webhook with that value.
  3. Create a Kubernetes secret that contains your secret value.
  4. Pass the Kubernetes secret as a secretRef reference to your GitHub Interceptor.
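
    For example, assuming the github-secret name and secretToken key that the sample EventListener files in this procedure reference, you might create the secret as follows:

    $ oc create secret generic github-secret --from-literal=secretToken=<secret-string>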
  5. Create an OWNERS file at the root of the repository and add the list of approvers to the approvers section.
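
    A minimal OWNERS file might look like the following; the user names are placeholders:

    approvers:
      - approver1
      - approver2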
  6. Perform one of the following steps:

    • For a public GitHub repository, set the value of the githubOwners parameter to true in the YAML configuration file shown below:

      apiVersion: triggers.tekton.dev/v1beta1
      kind: EventListener
      metadata:
        name: github-owners-listener
      spec:
        triggers:
          - name: github-listener
            interceptors:
              - ref:
                  name: "github"
                  kind: ClusterInterceptor
                  apiVersion: triggers.tekton.dev
                params:
                  - name: "secretRef"
                    value:
                      secretName: github-secret
                      secretKey: secretToken
                  - name: "eventTypes"
                    value: ["pull_request", "issue_comment"]
                  - name: "githubOwners"
                    value:
                      enabled: true
                      checkType: none
      ...
    • For a private GitHub repository, set the value of the githubOwners parameter to true and provide the access token details, secretName and secretKey in the YAML configuration file shown below:

      apiVersion: triggers.tekton.dev/v1beta1
      kind: EventListener
      metadata:
        name: github-owners-listener
      spec:
        triggers:
          - name: github-listener
            interceptors:
              - ref:
                  name: "github"
                  kind: ClusterInterceptor
                  apiVersion: triggers.tekton.dev
                params:
                  - name: "secretRef"
                    value:
                      secretName: github-secret
                      secretKey: secretToken
                  - name: "eventTypes"
                    value: ["pull_request", "issue_comment"]
                  - name: "githubOwners"
                    value:
                      enabled: true
                      personalAccessToken:
                        secretName: github-token
                        secretKey: secretToken
                      checkType: all
      ...
      Note

      The checkType parameter is used to specify the GitHub owners who need authentication. You can set its value to orgMembers, repoMembers, or all.

  7. Save the configuration file.
