Securing OpenShift Pipelines
Security features of OpenShift Pipelines
Abstract
Chapter 1. Using Tekton Chains for OpenShift Pipelines supply chain security
Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines.
By default, Tekton Chains observes all task run executions in your OpenShift Container Platform cluster. When the task runs complete, Tekton Chains takes a snapshot of the task runs. It then converts the snapshot to one or more standard payload formats, and finally signs and stores all artifacts.
To capture information about task runs, Tekton Chains uses Result objects. When these objects are unavailable, Tekton Chains records the URLs and qualified digests of the OCI images.
1.1. Key features
- You can sign task runs, task run results, and OCI registry images with cryptographic keys that are generated by tools such as cosign and skopeo.
- You can use attestation formats such as in-toto.
- You can securely store signatures and signed artifacts using an OCI repository as a storage backend.
1.2. Configuring Tekton Chains
The Red Hat OpenShift Pipelines Operator installs Tekton Chains by default. You can configure Tekton Chains by modifying the TektonConfig custom resource; the Operator automatically applies the changes that you make in this custom resource.
To edit the custom resource, use the following command:
$ oc edit TektonConfig config
The custom resource includes a chain: section. You can add any supported configuration parameters to this section, as shown in the following example:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  addon: {}
  chain:
    artifacts.taskrun.format: tekton
  config: {}
1.2.1. Supported parameters for Tekton Chains configuration
Cluster administrators can use various supported parameter keys and values to configure specifications about task runs, Open Container Initiative (OCI) images, and storage.
1.2.1.1. Supported parameters for task run artifacts
| Key | Description | Supported values | Default value |
|---|---|---|---|
| artifacts.taskrun.format | The format for storing task run payloads. | in-toto, slsa/v1 | in-toto |
| artifacts.taskrun.storage | The storage backend for task run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci". To disable storing task run artifacts, provide an empty string (""). | tekton, oci, gcs, docdb, grafeas | oci |
| artifacts.taskrun.signer | The signature backend for signing task run payloads. | x509, kms | x509 |

slsa/v1 is an alias of in-toto for backwards compatibility.
1.2.1.2. Supported parameters for pipeline run artifacts
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| artifacts.pipelinerun.format | The format for storing pipeline run payloads. | in-toto, slsa/v1 | in-toto |
| artifacts.pipelinerun.storage | The storage backend for storing pipeline run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci". To disable storing pipeline run artifacts, provide an empty string (""). | tekton, oci, gcs, docdb, grafeas | tekton |
| artifacts.pipelinerun.signer | The signature backend for signing pipeline run payloads. | x509, kms | x509 |
| artifacts.pipelinerun.enable-deep-inspection | When this parameter is "true", Tekton Chains records the results of the child task runs in the pipeline run provenance. | "true", "false" | "false" |
- slsa/v1 is an alias of in-toto for backwards compatibility.
- For the grafeas storage backend, only Container Analysis is supported. You cannot configure the grafeas server address in the current version of Tekton Chains.
1.2.1.3. Supported parameters for OCI artifacts
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| artifacts.oci.format | The format for storing OCI payloads. | simplesigning | simplesigning |
| artifacts.oci.storage | The storage backend for storing OCI signatures. You can specify multiple backends as a comma-separated list, such as "oci,tekton". To disable storing OCI artifacts, provide an empty string (""). | tekton, oci, gcs, docdb, grafeas | oci |
| artifacts.oci.signer | The signature backend for signing OCI payloads. | x509, kms | x509 |
1.2.1.4. Supported parameters for Key Management Service (KMS) signers
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| signers.kms.kmsref | The URI reference to a KMS service to use in kms signers. | Supported schemes: gcpkms://, awskms://, azurekms://, hashivault://. See the cosign documentation for more details. | |
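For illustration, a hedged sketch of a chain section that signs task run payloads with a Vault-managed key might look like the following. The key name mykey is hypothetical; substitute the name of your Vault transit key.

```yaml
# Hypothetical sketch: signing task run payloads with a Vault-backed KMS key.
spec:
  chain:
    artifacts.taskrun.signer: kms
    signers.kms.kmsref: hashivault://mykey  # "mykey" is a placeholder key name
```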
1.2.1.5. Supported parameters for storage
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| storage.gcs.bucket | The Google Cloud Storage (GCS) bucket for storage. | | |
| storage.oci.repository | The OCI repository for storing OCI signatures and attestations. If you configure one of the artifact storage backends to oci and do not define this key, Tekton Chains stores the attestation alongside the stored OCI artifact itself. | | |
| builder.id | The builder ID to set for in-toto attestations. | | https://tekton.dev/chains/v2 |
| builddefinition.buildtype | The build type for in-toto attestations. When this parameter is https://tekton.dev/chains/v2/slsa, the attestation contains only the strict SLSA v1.0 fields; when it is https://tekton.dev/chains/v2/slsa-tekton, the attestation also includes Tekton-specific details. | https://tekton.dev/chains/v2/slsa, https://tekton.dev/chains/v2/slsa-tekton | https://tekton.dev/chains/v2/slsa |
If you enable the docdb storage method for any artifacts, configure the docstore storage options. For more information about the go-cloud docstore URI format, see the docstore package documentation. Red Hat OpenShift Pipelines supports the following docstore services:
- firestore
- dynamodb
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| storage.docdb.url | The go-cloud URI reference to a docstore collection. | A docstore URI, for example with the firestore:// or dynamodb:// scheme. | |
| storage.docdb.mongo-server-url | The value for the Mongo server URL to use for docdb storage. | | |
| storage.docdb.mongo-server-url-dir | The directory containing a file named MONGO_SERVER_URL that has the Mongo server URL value. | Example value: /tmp/mongo-url | |
If you enable the grafeas storage method for any artifacts, configure Grafeas storage options. For more information about Grafeas notes and occurrences, see Grafeas concepts.
To create occurrences, Red Hat OpenShift Pipelines must first create notes that are used to link the occurrences. Red Hat OpenShift Pipelines creates two types of occurrences: ATTESTATION occurrences and BUILD occurrences.
Red Hat OpenShift Pipelines uses the configurable noteid as the prefix of the note name. It appends the suffix -simplesigning for the ATTESTATION note and the suffix -intoto for the BUILD note. If the noteid field is not configured, Red Hat OpenShift Pipelines uses tekton-<NAMESPACE> as the prefix.
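The naming scheme described above can be sketched as follows; the namespace dev is hypothetical and the noteid field is assumed to be unset, so the default tekton-<NAMESPACE> prefix applies:

```shell
# Illustration of the note-naming scheme (hypothetical namespace "dev",
# noteid not configured, so the prefix defaults to tekton-<NAMESPACE>).
NAMESPACE=dev
PREFIX="tekton-${NAMESPACE}"
ATTESTATION_NOTE="${PREFIX}-simplesigning"  # name of the ATTESTATION note
BUILD_NOTE="${PREFIX}-intoto"               # name of the BUILD note
echo "$ATTESTATION_NOTE"
echo "$BUILD_NOTE"
```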
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| storage.grafeas.projectid | The OpenShift Container Platform project containing the Grafeas server for storing occurrences. | | |
| storage.grafeas.noteid | Optional: the prefix to use for the name of all created notes. | A string without spaces. | |
| storage.grafeas.notehint | Optional: the human_readable_name field for the Grafeas ATTESTATION note. | | This attestation note was generated by Tekton Chains |
Optionally, you can enable additional uploads of binary transparency attestations.
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| transparency.enabled | Enable or disable automatic binary transparency uploads. | "true", "false", "manual" | "false" |
| transparency.url | The URL for uploading binary transparency attestations, if enabled. | | https://rekor.sigstore.dev |
If you set transparency.enabled to manual, Tekton Chains uploads only task runs and pipeline runs with the following annotation to the transparency log:
chains.tekton.dev/transparency-upload: "true"
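For example, a task run opted in to manual transparency upload carries this annotation in its metadata. The task run and task names below are hypothetical:

```yaml
# Sketch of a task run opted in to manual transparency upload.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-push-task-run  # hypothetical name
  annotations:
    chains.tekton.dev/transparency-upload: "true"
spec:
  taskRef:
    name: build-push  # hypothetical task
```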
If you configure the x509 signature backend, you can optionally enable keyless signing with Fulcio.
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| signers.x509.fulcio.enabled | Enable or disable requesting automatic certificates from Fulcio. | "true", "false" | "false" |
| signers.x509.fulcio.address | The Fulcio address for requesting certificates, if enabled. | | https://v1.fulcio.sigstore.dev |
| signers.x509.fulcio.issuer | The expected OpenID Connect (OIDC) issuer. | | https://oauth2.sigstore.dev/auth |
| signers.x509.fulcio.provider | The provider from which to request the ID Token. | google, spiffe, github, filesystem | Red Hat OpenShift Pipelines attempts to use every provider |
| signers.x509.identity.token.file | The path to the file containing the ID Token. | | |
| signers.x509.tuf-mirror-url | The URL for The Update Framework (TUF) server. | | https://sigstore-tuf-root.storage.googleapis.com |
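As a sketch, enabling keyless signing in the chain section might look like the following. The Fulcio address shown is an assumption (the public Sigstore instance); verify the correct address for your environment:

```yaml
# Hypothetical sketch: enabling keyless signing with Fulcio.
spec:
  chain:
    signers.x509.fulcio.enabled: "true"
    signers.x509.fulcio.address: https://fulcio.sigstore.dev  # assumed public instance
```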
If you configure the kms signature backend, set the KMS configuration, including OIDC and Spire, as necessary.
| Parameter | Description | Supported values | Default value |
|---|---|---|---|
| signers.kms.auth.address | The URI of the KMS server (the value of VAULT_ADDR). | | |
| signers.kms.auth.token | The authentication token for the KMS server (the value of VAULT_TOKEN). | | |
| signers.kms.auth.token-path | The full path name of the file that has the authentication token for the KMS server (the value of VAULT_TOKEN). | Example value: /etc/kms-secrets/KMS_AUTH_TOKEN | |
| signers.kms.auth.oidc.path | The path for OIDC authentication (for example, jwt for Vault). | | |
| signers.kms.auth.oidc.role | The role for OIDC authentication. | | |
| signers.kms.auth.spire.sock | The URI of the Spire socket for the KMS token (for example, unix:///tmp/spire-agent/public/api.sock). | | |
| signers.kms.auth.spire.audience | The audience for requesting a SPIFFE Verifiable Identity Document (SVID) from Spire. | | |
1.2.2. Creating and mounting the Mongo server URL secret
You can provide the value of the Mongo server URL to use for docdb storage (MONGO_SERVER_URL) by using a secret. You must create this secret, mount it on the Tekton Chains controller, and set the storage.docdb.mongo-server-url-dir parameter to the directory where you mount the secret.
Prerequisites
- You installed the OpenShift CLI (oc) utility.
- You logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace.
Procedure
Create a secret named mongo-url with the MONGO_SERVER_URL file that has the Mongo server URL value by entering the following command:

$ oc create secret generic mongo-url -n tekton-chains \
  --from-file=MONGO_SERVER_URL=<path>/MONGO_SERVER_URL

<path>: The full path of the MONGO_SERVER_URL file that has the Mongo server URL value.
In the TektonConfig custom resource (CR), in the chain section, configure mounting the secret on the Tekton Chains controller and set the storage.docdb.mongo-server-url-dir parameter to the directory where you mount the secret, as shown in the following example:

Example configuration for mounting the mongo-url secret

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  chain:
    disabled: false
    storage.docdb.mongo-server-url-dir: /tmp/mongo-url
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - name: tekton-chains-controller
                  volumeMounts:
                  - mountPath: /tmp/mongo-url
                    name: mongo-url
                volumes:
                - name: mongo-url
                  secret:
                    secretName: mongo-url
# ...
1.2.3. Creating and mounting the KMS authentication token secret
You can give the authentication token for the Key Management Service (KMS) server by using a secret. For example, if the KMS provider is Hashicorp Vault, the secret must contain the value of VAULT_TOKEN.
You must create this secret, mount it on the Tekton Chains controller, and set the signers.kms.auth.token-path parameter to the full pathname of the authentication token file.
Prerequisites
- You installed the OpenShift CLI (oc) utility.
- You logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace.
Procedure
Create a secret named kms-secrets with the KMS_AUTH_TOKEN file that has the authentication token for the KMS server by entering the following command:

$ oc create secret generic kms-secrets -n tekton-chains \
  --from-file=KMS_AUTH_TOKEN=<path_and_name>

<path_and_name>: The full path and name of the file that has the authentication token for the KMS server, for example, /home/user/KMS_AUTH_TOKEN. You can use another file name instead of KMS_AUTH_TOKEN.
In the TektonConfig custom resource (CR), in the chain section, configure mounting the secret on the Tekton Chains controller and set the signers.kms.auth.token-path parameter to the full pathname of the authentication token file, as shown in the following example:

Example configuration for mounting the kms-secrets secret

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  chain:
    disabled: false
    signers.kms.auth.token-path: /etc/kms-secrets/KMS_AUTH_TOKEN
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - name: tekton-chains-controller
                  volumeMounts:
                  - mountPath: /etc/kms-secrets
                    name: kms-secrets
                volumes:
                - name: kms-secrets
                  secret:
                    secretName: kms-secrets
# ...
1.2.4. Enabling Tekton Chains to operate only in selected namespaces
By default, the Tekton Chains controller monitors resources in all namespaces. You can customize Tekton Chains to run only in specific namespaces, which provides granular control over its operation.
Prerequisites
- You logged in to your OpenShift Container Platform cluster with cluster-admin privileges.
Procedure
In the TektonConfig CR, in the chain section, add the --namespace= argument listing the namespaces that the controller watches.

The following example shows the configuration for the Tekton Chains controller to watch resources, including PipelineRun and TaskRun objects, only within the dev and test namespaces:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    disabled: false
    options:
      deployments:
        tekton-chains-controller:
          spec:
            template:
              spec:
                containers:
                - args:
                  - --namespace=dev,test
                  name: tekton-chains-controller

If you do not provide the --namespace argument or leave it empty, the controller watches all namespaces by default.
1.3. Secrets for signing data in Tekton Chains
Cluster administrators can generate a key pair and use Tekton Chains to sign artifacts using a Kubernetes secret. For Tekton Chains to work, a private key and a password for encrypted keys must exist as part of the signing-secrets secret in the openshift-pipelines namespace.
Currently, Tekton Chains supports the x509 and cosign signature schemes.
Use only one of the supported signature schemes.
The x509 signing scheme
To use the x509 signing scheme with Tekton Chains, you must fulfill the following requirements:
- Store the private key in the signing-secrets secret with the x509.pem structure.
- Store the private key as an unencrypted PKCS #8 Privacy Enhanced Mail (PEM) file.
- Use a key of the ed25519 or ecdsa type.
The cosign signing scheme
To use the cosign signing scheme with Tekton Chains, you must fulfill the following requirements:
- Store the private key in the signing-secrets secret with the cosign.key structure.
- Store the password in the signing-secrets secret with the cosign.password structure.
- Store the private key as an encrypted PEM file of the ENCRYPTED COSIGN PRIVATE KEY type.
1.3.1. Generating the cosign key pair by using the TektonConfig CR
To use the cosign signing scheme for Tekton Chains secrets, you can generate a cosign key pair that uses Elliptic Curve Digital Signature Algorithm (ECDSA) encryption by setting the generateSigningSecret field in the TektonConfig custom resource (CR) to true.
Prerequisites
- You installed the OpenShift CLI (oc) utility.
- You logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace.
Procedure
Edit the TektonConfig CR by running the following command:

$ oc edit TektonConfig config

In the TektonConfig CR, set the generateSigningSecret value to true:

Example of creating an ECDSA cosign key pair by using the TektonConfig CR

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  chain:
    disabled: false
    generateSigningSecret: true
# ...

generateSigningSecret: The default value is false. Setting the value to true generates the ecdsa key pair.
After a few minutes, extract the public key from the secret and store it, so that you can use it to verify artifact attestations. Run the following command to extract the key:
$ oc extract -n openshift-pipelines secret/signing-secrets --keys=cosign.pub
Result
The OpenShift Pipelines Operator generates an ecdsa type cosign key pair and stores it in the signing-secrets secret in the openshift-pipelines namespace. The secret includes the following files:
- cosign.key: The private key.
- cosign.password: The password for decrypting the private key.
- cosign.pub: The public key.
If a signing-secrets secret already exists, the Operator does not overwrite the secret.
The cosign.pub file in your current directory has the public key extracted from the secret.
If you set the generateSigningSecret field from true to false, the Red Hat OpenShift Pipelines Operator overrides and empties any value in the signing-secrets secret.
The Red Hat OpenShift Pipelines Operator does not offer the following security functions:
- Key rotation
- Auditing key usage
- Proper access control to the key
1.3.2. Manually generating signing secrets with the cosign tool
You can use the cosign signing scheme with Tekton Chains using the cosign tool.
Prerequisites
- You installed the Cosign tool. For information about installing the Cosign tool, see the Sigstore documentation for Cosign.
Procedure
Generate the cosign.key and cosign.pub key pair by running the following command:

$ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets

Cosign prompts you for a password and then creates a Kubernetes secret.

Store the encrypted cosign.key private key and the cosign.password decryption password in the signing-secrets Kubernetes secret. Ensure that you store the private key as an encrypted Privacy Enhanced Mail (PEM) file of the ENCRYPTED COSIGN PRIVATE KEY type.
1.3.3. Manually generating signing secrets with the skopeo tool
You can generate keys by using the skopeo tool and use them in the cosign signing scheme with Tekton Chains.
Prerequisites
- You installed the skopeo package on your Linux system.
Procedure
Generate a public/private key pair by running the following command:

$ skopeo generate-sigstore-key --output-prefix <mykey>

Replace <mykey> with a key name of your choice.

Skopeo prompts you for a passphrase for the private key and then creates the key files named <mykey>.private and <mykey>.pub.

Encode the <mykey>.pub file using the base64 tool by running the following command:

$ base64 -w 0 <mykey>.pub > b64.pub

Encode the <mykey>.private file using the base64 tool by running the following command:

$ base64 -w 0 <mykey>.private > b64.private

Encode the passphrase using the base64 tool by running the following command:

$ echo -n '<passphrase>' | base64 -w 0 > b64.passphrase

Replace <passphrase> with the passphrase that you used for the key pair.

Create the signing-secrets secret in the openshift-pipelines namespace by running the following command:

$ oc create secret generic signing-secrets -n openshift-pipelines

Edit the signing-secrets secret by running the following command:

$ oc edit secret -n openshift-pipelines signing-secrets

Add the encoded keys to the data of the secret in the following way:

apiVersion: v1
data:
  cosign.key: <Encoded <mykey>.private>
  cosign.password: <Encoded passphrase>
  cosign.pub: <Encoded <mykey>.pub>
immutable: true
kind: Secret
metadata:
  name: signing-secrets
# ...
type: Opaque

<Encoded <mykey>.private>: Replace with the content of the b64.private file.
<Encoded passphrase>: Replace with the content of the b64.passphrase file.
<Encoded <mykey>.pub>: Replace with the content of the b64.pub file.
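Before pasting an encoded value into the secret, you can run a local sanity check to confirm that it decodes back to the original. This sketch uses only the base64 tool from the steps above; 'example-passphrase' is a placeholder, not a real credential:

```shell
# Local sanity check (GNU coreutils base64, as used in the steps above):
# encode a placeholder value and confirm the round trip is lossless.
printf '%s' 'example-passphrase' | base64 -w 0 > b64.passphrase
decoded=$(base64 -d b64.passphrase)
printf '%s\n' "$decoded"
```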
1.3.4. Resolving the "secret already exists" error
If the signing-secrets secret is already populated, the command to create this secret might output the following error message:
Error from server (AlreadyExists): secrets "signing-secrets" already exists
You can resolve this error by deleting the secret.
Procedure
Delete the signing-secrets secret by running the following command:

$ oc delete secret signing-secrets -n openshift-pipelines

Re-create the key pairs and store them in the secret using your preferred signing scheme.
1.4. Authenticating to an OCI registry
Set up a service account with the necessary credentials so that Tekton Chains can push signatures to an OCI registry.
The Tekton Chains controller uses the same service account under which task runs are started. To configure authentication with an OCI registry, create the required credentials and associate them with this service account.
Procedure
Set the namespace and name of the Kubernetes service account:

$ export NAMESPACE=<namespace>
$ export SERVICE_ACCOUNT_NAME=<service_account>

<namespace>: The namespace associated with the service account.
<service_account>: The name of the service account.

Create a Kubernetes secret:

$ oc create secret generic registry-credentials \
  --from-file=.dockerconfigjson \
  --type=kubernetes.io/dockerconfigjson \
  -n $NAMESPACE

--from-file: Substitute with the path to your Docker config file. The default path is ~/.docker/config.json.

Give the service account access to the secret:

$ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
  -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n $NAMESPACE

If you patch the default pipeline service account that Red Hat OpenShift Pipelines assigns to all task runs, the Red Hat OpenShift Pipelines Operator overrides your changes. As a best practice, perform the following steps:

Create a separate service account to assign to user task runs:

$ oc create serviceaccount <service_account_name>

Associate the service account with the task runs by setting the serviceAccountName field in the task run specification:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot
  taskRef:
    name: build-push
# ...

serviceAccountName: Substitute with the name of the newly created service account.
1.5. Creating and verifying task run signatures without any additional authentication
To create and verify signatures of task runs by using Tekton Chains without any additional authentication, perform the following tasks:
- Generate an encrypted x509 or cosign key pair and store it as a Kubernetes secret.
- Configure the Tekton Chains backend storage.
- Create a task run, sign it, and store the signature and the payload as annotations on the task run itself.
- Retrieve the signature and payload from the signed task run.
- Verify the signature of the task run.
Prerequisites
Ensure that you install the following components on the cluster:
- Red Hat OpenShift Pipelines Operator
- Tekton Chains
- Cosign
Procedure
Generate an encrypted x509 or cosign key pair. For more information about creating a key pair and saving it as a secret, see "Secrets for signing data in Tekton Chains".

In the Tekton Chains configuration, disable Open Container Initiative (OCI) storage, and set the task run storage and format to tekton. In the TektonConfig custom resource, set the following values:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  chain:
    artifacts.oci.storage: ""
    artifacts.taskrun.format: tekton
    artifacts.taskrun.storage: tekton
# ...

For more information about configuring Tekton Chains using the TektonConfig custom resource, see "Configuring Tekton Chains".

To restart the Tekton Chains controller to apply the modified configuration, enter the following command:

$ oc delete po -n openshift-pipelines -l app=tekton-chains-controller

Create a task run by entering the following command:

$ oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml

Replace the example URI with the URI or file path pointing to your task run.
Example output
taskrun.tekton.dev/build-push-run-output-image-qbjvh created
Check the status of the steps by entering the following command. Wait until the process finishes.

$ tkn tr describe --last

Example output:

[...truncated output...]
NAME                            STATUS
∙ create-dir-builtimage-9467f   Completed
∙ git-source-sourcerepo-p2sk8   Completed
∙ build-and-push                Completed
∙ echo                          Completed
∙ image-digest-exporter-xlkn7   Completed

To retrieve the signature from the object stored as base64 encoded annotations, enter the following commands:

$ export TASKRUN_UID=$(tkn tr describe --last -o jsonpath='{.metadata.uid}')
$ tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-$TASKRUN_UID}" | base64 -d > sig

To verify the signature using the public key that you created, enter the following command:

$ cosign verify-blob-attestation --insecure-ignore-tlog --key path/to/cosign.pub --signature sig --type slsaprovenance --check-claims=false /dev/null

Replace path/to/cosign.pub with the path name of the public key file. The --insecure-ignore-tlog flag skips verification against the transparency log.

Example output:

Verified OK
1.6. Using Tekton Chains to sign and verify image and provenance
Cluster administrators can use Tekton Chains to sign and verify images and provenance by performing the following tasks:

- Generate an encrypted x509 or cosign key pair and store it as a Kubernetes secret.
- Set up authentication for the Open Container Initiative (OCI) registry to store images, image signatures, and signed image attestations.
- Configure Tekton Chains to generate and sign provenance.
- Create an image with Kaniko in a task run.
- Verify the signed image and the signed provenance.
Prerequisites
Ensure that you install the following tools on the cluster:
Procedure
Generate an encrypted x509 or cosign key pair. For more information about creating a key pair and saving it as a secret, see "Secrets for signing data in Tekton Chains".

Configure authentication for the image registry:

- To configure the Tekton Chains controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.

- To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker config.json file containing the required credentials:

  $ oc create secret generic <docker_config_secret_name> \
    --from-file <path_to_config.json>

  <docker_config_secret_name>: Substitute with the name of the docker config secret.
  <path_to_config.json>: Substitute with the path to the docker config.json file.
Configure Tekton Chains by setting the artifacts.taskrun.format, artifacts.taskrun.storage, and transparency.enabled parameters in the chains-config object:

$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'

Start the Kaniko task:

- Apply the Kaniko task to the cluster:

  $ oc apply -f examples/kaniko/kaniko.yaml

  examples/kaniko/kaniko.yaml: Substitute with the URI or file path to your Kaniko task.

- Set the appropriate environment variables:

  $ export REGISTRY=<url_of_registry>
  $ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json>

  <url_of_registry>: Substitute with the URL of the registry where you want to push the image.
  <name_of_the_secret_in_docker_config_json>: Substitute with the name of the secret for the docker config.json file.

- Start the Kaniko task:

  $ tkn task start --param IMAGE=$REGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=$DOCKERCONFIG_SECRET_NAME kaniko-chains

  Observe the logs of this task until all steps complete. On successful authentication, the task pushes the final image to $REGISTRY/kaniko-chains.
Wait for a minute to allow Tekton Chains to generate the provenance and sign it, and then check for the chains.tekton.dev/signed=true annotation on the task run:

$ oc get tr <task_run_name> -o json | jq -r .metadata.annotations

Example output:

{
  "chains.tekton.dev/signed": "true",
  ...
}

<task_run_name>: Substitute with the name of the task run.

Verify the image and the attestation:

$ cosign verify --key cosign.pub $REGISTRY/kaniko-chains
$ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains

Find the provenance for the image in Rekor:

- Get the digest of the $REGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest.

- Search Rekor to find all entries that match the sha256 digest of the image:

  $ rekor-cli search --sha <image_digest>

  Example output:

  <uuid_1>
  <uuid_2>
  ...

  <image_digest>: Substitute with the sha256 digest of the image.
  <uuid_1>: The first matching universally unique identifier (UUID).
  <uuid_2>: The second matching UUID.

  The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.

- Check the attestation:

  $ rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq
Chapter 2. Setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements
Use the Developer or Administrator perspective to create or modify a pipeline and view key Software Supply Chain Security elements within a project.
Set up OpenShift Pipelines to view:
- Project vulnerabilities: Visual representation of identified vulnerabilities within a project.
- Software Bill of Materials (SBOMs): Download or view detailed listing of PipelineRun components.
Additionally, pipeline runs that meet the Tekton Chains requirements display a signed badge next to their names. This badge indicates that the pipeline run execution results are cryptographically signed and stored securely, for example, within an OCI image.
Figure 2.1. The signed badge
The PipelineRun displays the signed badge next to its name only if you have configured Tekton Chains. For information on configuring Tekton Chains, see Using Tekton Chains for OpenShift Pipelines supply chain security.
2.1. Setting up OpenShift Pipelines to view project vulnerabilities
The PipelineRun details page provides a visual representation of identified vulnerabilities, categorized by the severity (critical, high, medium, and low). This streamlined view facilitates prioritization and remediation efforts.
Figure 2.2. Viewing vulnerabilities on the PipelineRun details page
You can also review the vulnerabilities in the Vulnerabilities column in the pipeline run list view page.
Figure 2.3. Viewing vulnerabilities on the PipelineRun list view
Visual representation of identified vulnerabilities is available starting from the OpenShift Container Platform version 4.15 release.
Prerequisites
- You have logged in to the web console.
- You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform.
- You have an existing vulnerability scan task.
Procedure
- In the Developer or Administrator perspective, switch to the relevant project where you want a visual representation of vulnerabilities.
Update your existing vulnerability scan task to ensure that it stores the output in a .json file and then extracts the vulnerability summary in the following format:

# The format to extract the vulnerability summary.
jq -rce \
  '{vulnerabilities:{
    critical: (.result.summary.CRITICAL),
    high: (.result.summary.IMPORTANT),
    medium: (.result.summary.MODERATE),
    low: (.result.summary.LOW)
  }}' scan_output.json | tee $(results.SCAN_OUTPUT.path)

Note: You might need to adjust the jq command for different JSON structures.
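You can exercise the jq filter locally before wiring it into a task. The scan output below is hypothetical sample data; the field names (.result.summary.CRITICAL and so on) follow the example in this section:

```shell
# Create a hypothetical scan result to test the extraction filter against.
cat > scan_output.json <<'EOF'
{"result":{"summary":{"CRITICAL":1,"IMPORTANT":2,"MODERATE":3,"LOW":4}}}
EOF
# Run the same jq filter that the task uses to build the vulnerability summary.
jq -rce \
  '{vulnerabilities:{
    critical: (.result.summary.CRITICAL),
    high: (.result.summary.IMPORTANT),
    medium: (.result.summary.MODERATE),
    low: (.result.summary.LOW)
  }}' scan_output.json
```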
(Optional) If you do not have a vulnerability scan task, create one in the following format:
Example vulnerability scan task using Roxctl
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: vulnerability-scan
  annotations:
    task.output.location: results
    task.results.format: application/json
    task.results.key: SCAN_OUTPUT
spec:
  params:
    - description: Image to be scanned
      name: image
      type: string
  results:
    - description: CVE result format
      name: SCAN_OUTPUT
  steps:
    - name: roxctl-scan
      image: 'quay.io/lrangine/crda-maven:11.0'
      env:
        - name: ROX_CENTRAL_ENDPOINT
          valueFrom:
            secretKeyRef:
              key: rox_central_endpoint
              name: roxsecrets
        - name: ROX_API_TOKEN
          valueFrom:
            secretKeyRef:
              key: rox_api_token
              name: roxsecrets
      script: |
        #!/bin/sh
        curl -k -L -H "Authorization: Bearer $ROX_API_TOKEN" https://$ROX_CENTRAL_ENDPOINT/api/cli/download/roxctl-linux --output ./roxctl
        chmod +x ./roxctl
        ./roxctl image scan --insecure-skip-tls-verify -e $ROX_CENTRAL_ENDPOINT --image $(params.image) --output json > roxctl_output.json
        jq -rce \
          "{vulnerabilities:{
              critical: (.result.summary.CRITICAL),
              high: (.result.summary.IMPORTANT),
              medium: (.result.summary.MODERATE),
              low: (.result.summary.LOW)
          }}" roxctl_output.json | tee $(results.SCAN_OUTPUT.path)

name - The name of your task.
task.output.location - The location for storing the task outputs.
task.results.key - The naming convention of the scan task result. A valid name must end with the SCAN_OUTPUT string, for example SCAN_OUTPUT, MY_CUSTOM_SCAN_OUTPUT, or ACS_SCAN_OUTPUT.
description: CVE result format - The description of the result.
image - The location of the container image that runs the scan tool.
key (rox_central_endpoint) - The rox_central_endpoint key obtained from Advanced Cluster Security for Kubernetes (ACS).
key (rox_api_token) - The rox_api_token key obtained from ACS.
script - The shell script that performs the vulnerability scanning and sets the scan output in the task run results.
Note: This is an example configuration. Change the values according to your specific scanning tool to set results in the expected format.
Update the appropriate pipeline to add the vulnerabilities specification in the following format:
...
spec:
  results:
    - description: The common vulnerabilities and exposures (CVE) result
      name: SCAN_OUTPUT
      value: $(tasks.vulnerability-scan.results.SCAN_OUTPUT)
Verification
- Navigate to the PipelineRun details page and review the Vulnerabilities row for a visual representation of identified vulnerabilities.
- Alternatively, navigate to the PipelineRun list view page and review the Vulnerabilities column.
2.2. Setting up OpenShift Pipelines to download or view SBOMs
The PipelineRun details page provides an option to download or view Software Bills of Materials (SBOMs), enhancing transparency and control within your supply chain. An SBOM lists all the software libraries that a component uses. Those libraries can enable specific functionality or help development.
You can use an SBOM to better understand the composition of your software, identify vulnerabilities, and assess the potential impact of any security issues that might arise.
Figure 2.4. Options to download or view SBOMs
Prerequisites
- You have logged in to the web console.
- You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform.
Procedure
- In the Developer or Administrator perspective, switch to the relevant project where you want a visual representation of SBOMs.
Add a task in the following format to view or download the SBOM information:
Example SBOM task
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: sbom-task
  annotations:
    task.output.location: results
    task.results.format: application/text
    task.results.key: LINK_TO_SBOM
    task.results.type: external-link
spec:
  results:
    - description: Contains the SBOM link
      name: LINK_TO_SBOM
  steps:
    - name: print-sbom-results
      image: quay.io/image
      script: |
        #!/bin/sh
        syft version
        syft quay.io/<username>/quarkus-demo:v2 --output cyclonedx-json=sbom-image.json
        echo 'BEGIN SBOM'
        cat sbom-image.json
        echo 'END SBOM'
        echo 'quay.io/user/workloads/<namespace>/node-express/node-express:build-8e536-1692702836' | tee $(results.LINK_TO_SBOM.path)

name - The name of your task.
task.output.location- The location for storing the task outputs.
task.results.key- The SBOM task result name. Do not change the name of the SBOM result task.
task.results.type- (Optional) Set to external-link to open the SBOM in a new tab.
- description: Contains the SBOM link- The description of the result.
image- The image that generates the SBOM.
script- The script that generates the SBOM image.
<namespace>- The SBOM image along with the path name.
Update the Pipeline to reference the newly created SBOM task.
...
spec:
  tasks:
    - name: sbom-task
      taskRef:
        name: sbom-task
  results:
    - name: IMAGE_URL
      description: url
      value: <oci_image_registry_url>

name - The same name as created in Step 2.
- name: IMAGE_URL- The name of the result.
<oci_image_registry_url>- The OCI image repository URL that contains the .sbom images.
- Rerun the affected OpenShift Pipeline.
2.2.1. Viewing an SBOM in the web UI
Prerequisites
- You have set up OpenShift Pipelines to download or view SBOMs.
Procedure
- Navigate to the Activity → PipelineRuns tab.
- For the project whose SBOM you want to view, select its most recent pipeline run.
- On the PipelineRun details page, select View SBOM.
- You can use your web browser to immediately search the SBOM for terms that indicate vulnerabilities in your software supply chain. For example, try searching for log4j.
- You can select Download to download the SBOM, or Expand to view it full screen.
2.2.2. Downloading an SBOM in the CLI
Prerequisites
- You have installed the Cosign CLI tool. For information about installing the Cosign tool, see the Sigstore documentation for Cosign.
- You have set up OpenShift Pipelines to download or view SBOMs.
Procedure
- Open a terminal, log in to the Developer or Administrator perspective, and then switch to the relevant project.
From the OpenShift web console, copy the cosign download sbom command and run it in your terminal.

Example cosign command

$ cosign download sbom quay.io/<workspace>/user-workload@sha256

(Optional) To view the full SBOM in a searchable format, run the following command to redirect the output:
Example cosign command
$ cosign download sbom quay.io/<workspace>/user-workload@sha256 > sbom.txt
2.2.3. Reading the SBOM
In the SBOM, as the following sample excerpt shows, you can see four characteristics of each library that a project uses:
- Its author or publisher
- Its name
- Its version
- Its licenses
This information helps you verify that individual libraries are safely sourced, updated, and compliant.
Example SBOM
{
"bomFormat": "CycloneDX",
"specVersion": "1.4",
"serialNumber": "urn:uuid:89146fc4-342f-496b-9cc9-07a6a1554220",
"version": 1,
"metadata": {
...
},
"components": [
{
"bom-ref": "pkg:pypi/flask@2.1.0?package-id=d6ad7ed5aac04a8",
"type": "library",
"author": "Armin Ronacher <armin.ronacher@active-4.com>",
"name": "Flask",
"version": "2.1.0",
"licenses": [
{
"license": {
"id": "BSD-3-Clause"
}
}
],
"cpe": "cpe:2.3:a:armin-ronacher:python-Flask:2.1.0:*:*:*:*:*:*:*",
"purl": "pkg:pypi/Flask@2.1.0",
"properties": [
{
"name": "syft:package:foundBy",
"value": "python-package-cataloger"
...
Chapter 3. Configuring the security context for pods
The default service account for pods that OpenShift Pipelines starts is pipeline. The security context constraint (SCC) associated with the pipeline service account is pipelines-scc. The pipelines-scc SCC is based on the anyuid SCC, with minor differences as defined in the following YAML specification:
Example pipelines-scc.yaml snippet
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
# ...
allowedCapabilities:
- SETFCAP
# ...
fsGroup:
type: MustRunAs
# ...
In addition, the Buildah task, shipped as part of OpenShift Pipelines, uses vfs as the default storage driver.
You can configure the security context for pods that OpenShift Pipelines creates for pipeline runs and task runs. You can make the following changes:
- Change the default and maximum SCC for all pods
- Change the default SCC for pods created for pipeline runs and task runs in a particular namespace
- Configure a particular pipeline run or task run to use a custom SCC and service account
The simplest way to ensure that Buildah can build all images is to run it as root in a pod with the privileged SCC. For instructions about running Buildah with more restrictive security settings, see Building of container images using Buildah as a non-root user.
3.1. Configuring the default and maximum SCC for pods that OpenShift Pipelines creates
You can configure the default security context constraint (SCC) for all pods that OpenShift Pipelines creates for task runs and pipeline runs. You can also configure the maximum SCC, which is the least restrictive SCC that you can configure for these pods in any namespace.
Procedure
Edit the TektonConfig custom resource (CR) by entering the following command:

$ oc edit TektonConfig config

Set the default and maximum SCC in the spec, as in the following example:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
# ...
  platforms:
    openshift:
      scc:
        default: "restricted-v2"
        maxAllowed: "privileged"

default - spec.platforms.openshift.scc.default specifies the default SCC that OpenShift Pipelines attaches to the service account (SA) used for workloads, which is, by default, the pipeline SA. OpenShift Pipelines uses this SCC for all pipeline run and task run pods.
maxAllowed - spec.platforms.openshift.scc.maxAllowed specifies the least restrictive SCC that you can configure for pipeline run and task run pods in any namespace. This setting does not apply when you configure a custom SA and SCC in a particular pipeline run or task run.
3.2. Configuring the SCC for pods in a namespace
Configure the security context constraint (SCC) for all pods that OpenShift Pipelines creates for pipeline runs and task runs in a particular namespace.
The SCC must not be less restrictive than the maximum SCC that you configured by using the TektonConfig CR, in the spec.platforms.openshift.scc.maxAllowed spec.
Procedure
Set the operator.tekton.dev/scc annotation for the namespace to the name of the SCC.

Example namespace annotation for configuring the SCC for OpenShift Pipelines pods

apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  annotations:
    operator.tekton.dev/scc: nonroot
3.3. Running pipeline runs and task runs by using a custom security context constraint (SCC) and a custom service account
When using the pipelines-scc SCC associated with the default pipelines service account, the pipeline run and task run pods might face timeouts. This happens because the default pipelines-scc SCC sets the fsGroup.type parameter to MustRunAs.
For more information about pod timeouts, see BZ#1995779.
To avoid pod timeouts, you can create a custom SCC with the fsGroup.type parameter set to RunAsAny, and associate it with a custom service account.
As a best practice, use a custom SCC and a custom service account for pipeline runs and task runs. This approach allows greater flexibility and does not break the runs when an upgrade modifies the defaults.
Procedure
Define a custom SCC with the fsGroup.type parameter set to RunAsAny:

Example: Custom SCC

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: my-scc is a close replica of anyuid scc. pipelines-scc has fsGroup - RunAsAny.
  name: my-scc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
  - system:cluster-admins
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret

Create the custom SCC:
Example: Create the my-scc SCC

$ oc create -f my-scc.yaml

Create a custom service account:

Example: Create a fsgroup-runasany service account

$ oc create serviceaccount fsgroup-runasany

Associate the custom SCC with the custom service account:

Example: Associate the my-scc SCC with the fsgroup-runasany service account

$ oc adm policy add-scc-to-user my-scc -z fsgroup-runasany

If you want to use the custom service account for privileged tasks, you can associate the privileged SCC with the custom service account by running the following command:

Example: Associate the privileged SCC with the fsgroup-runasany service account

$ oc adm policy add-scc-to-user privileged -z fsgroup-runasany

Use the custom service account in the pipeline run and task run:

Example: Pipeline run YAML with the fsgroup-runasany custom service account

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: <pipeline_run_name>
spec:
  pipelineRef:
    name: <pipeline_cluster_task_name>
  taskRunTemplate:
    serviceAccountName: 'fsgroup-runasany'

Example: Task run YAML with the fsgroup-runasany custom service account

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: <task_run_name>
spec:
  serviceAccountName: 'fsgroup-runasany'
  taskRef:
    name: <cluster_task_name>
Chapter 4. Securing webhooks with event listeners
As an administrator, you can secure webhooks with event listeners. After creating a namespace, you enable HTTPS for the Eventlistener resource by adding the operator.tekton.dev/enable-annotation=enabled label to the namespace. Then, you create a Trigger resource and a secured route using the re-encrypted TLS termination.
Triggers in Red Hat OpenShift Pipelines support insecure HTTP and secure HTTPS connections to the Eventlistener resource. HTTPS secures connections within and outside the cluster.
Red Hat OpenShift Pipelines runs a tekton-operator-proxy-webhook pod that watches for the labels in the namespace. When you add the label to the namespace, the webhook sets the service.beta.openshift.io/serving-cert-secret-name=<secret_name> annotation on the EventListener object. This, in turn, creates secrets and the required certificates.
service.beta.openshift.io/serving-cert-secret-name=<secret_name>
In addition, you can mount the created secret into the Eventlistener pod to secure the request.
4.1. Providing secure connection with OpenShift routes
You can provide a secure connection with OpenShift routes by using re-encrypted TLS termination.
Procedure
Create a route with the re-encrypted TLS termination by running the following command:
$ oc create route reencrypt --service=<svc_name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>

Alternatively, you can create a re-encrypted TLS termination YAML file to create a secure route:

Example re-encrypt TLS termination YAML to create a secure route

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-passthrough-secured
spec:
  host: <hostname>
  to:
    kind: Service
    name: frontend
  tls:
    termination: reencrypt
    key: [as in edge termination]
    certificate: [as in edge termination]
    caCertificate: [as in edge termination]
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----

name - The name of the object, which OpenShift Container Platform limits to 63 characters.
termination - You must set the termination field to reencrypt. This is the only required tls field.
destinationCACertificate - Re-encryption requires this field. The destinationCACertificate field specifies a CA certificate to validate the endpoint certificate, thus securing the connection from the router to the destination pods. You can omit this field in either of the following scenarios:
- The service uses a service signing certificate.
- The administrator specifies a default CA certificate for the router, and the service has a certificate signed by that CA.
Optional: Display more options by running the following command:
$ oc create route reencrypt --help
4.2. Configuring security context for event listeners
You can configure a custom security context directly in your EventListener custom resource (CR) to meet your security requirements. A custom security context can help ensure that containers run with restricted privileges and comply with OpenShift Container Platform security context constraints (SCCs).
Procedure
Create a YAML file that defines your EventListener CR:

Example EventListener custom resource with configured security context

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
# ...
spec:
  serviceAccountName: tekton-triggers-sa
  resources:
    kubernetesResource:
      spec:
        template:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              - resources:
                  requests:
                    memory: "64Mi"
                    cpu: "250m"
                  limits:
                    memory: "128Mi"
                    cpu: "500m"
                securityContext:
                  readOnlyRootFilesystem: true
# ...

runAsNonRoot - Specifies the pod-level security context settings. This example setting prevents the containers from running as the root user.
readOnlyRootFilesystem - Specifies the container-level security context settings. This example setting restricts the container root filesystem to read-only to limit potential file system modifications at runtime.
4.3. Creating a sample EventListener resource using a secure HTTPS connection
This section uses the pipelines-tutorial example to show creation of a sample EventListener resource using a secure HTTPS connection.
Procedure
Create the TriggerBinding resource from the YAML file available in the pipelines-tutorial repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml

Create the TriggerTemplate resource from the YAML file available in the pipelines-tutorial repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml

Create the Trigger resource directly from the pipelines-tutorial repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml

Create an EventListener resource using a secure HTTPS connection:

Add a label to enable the secure HTTPS connection to the EventListener resource:

$ oc label namespace <ns_name> operator.tekton.dev/enable-annotation=enabled

Create the EventListener resource from the YAML file available in the pipelines-tutorial repository:

$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml

Create a route with the re-encrypted TLS termination:

$ oc create route reencrypt --service=<svc_name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
Chapter 5. Authenticating pipelines with repositories using secrets
Pipelines and tasks can require credentials to authenticate with Git repositories and container repositories. In Red Hat OpenShift Pipelines, you can use secrets to authenticate pipeline runs and task runs that interact with a Git repository or container repository during execution.
A secret for authentication with a Git repository is known as a Git secret.
A pipeline run or a task run gains access to the secrets through an associated service account. Alternatively, you can define a workspace in the pipeline or task and bind the secret to the workspace.
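As a sketch of the workspace-based alternative, a pipeline run can bind a secret to a workspace that the pipeline declares, instead of relying on a service account. The workspace name ssh-credentials and the secret name git-secret-ssh in this example are hypothetical; adjust them to match the workspace declared by your pipeline and the secret in your namespace:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: demo-pipeline-run
spec:
  pipelineRef:
    name: demo-pipeline
  workspaces:
    # Bind the secret to the workspace that the pipeline declares.
    # The workspace and secret names here are hypothetical.
    - name: ssh-credentials
      secret:
        secretName: git-secret-ssh
```

With this binding, the secret contents are mounted into the task pods that use the workspace, without modifying the service account.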
5.1. Prerequisites
- You installed the oc OpenShift command-line utility.
5.2. Providing secrets using service accounts
You can use service accounts to provide secrets for authentication with Git repositories and container repositories.
You can associate a secret with a service account. The information in the secret becomes available to the tasks that run under this service account.
5.2.1. Types and annotation of secrets for service accounts
If you provide authentication secrets using service accounts, OpenShift Pipelines supports several secret types. For most of these secret types, you must add annotations that define the repositories for which the authentication secret is valid.
5.2.1.1. Git authentication secrets
If you provide authentication secrets using service accounts, OpenShift Pipelines supports the following types of secrets for Git authentication:

- kubernetes.io/basic-auth: A username and password for basic HTTP authentication
- kubernetes.io/ssh-auth: Keys for SSH-based authentication
If you provide authentication secrets using service accounts, a Git secret must have one or more annotation keys. The name of each key must begin with tekton.dev/git-, and its value is the URL of the host for which OpenShift Pipelines must use the credentials in the secret.
In the following example, OpenShift Pipelines uses a basic-auth secret to access repositories at github.com and gitlab.com.
Example: Credentials for Basic HTTP authentication with many Git repositories
apiVersion: v1
kind: Secret
metadata:
name: git-secret-basic
annotations:
tekton.dev/git-0: github.com
tekton.dev/git-1: gitlab.com
type: kubernetes.io/basic-auth
stringData:
username: <username>
password: <password>
<username>- Username for the repository
<password>- Password or personal access token for the repository
You can also use an ssh-auth secret to provide a private key for accessing a Git repository, as in the following example:
Example: Private key for SSH-based authentication
apiVersion: v1
kind: Secret
metadata:
name: git-secret-ssh
annotations:
tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey:
ssh-privatekey- The content of the SSH private key file.
5.2.1.2. Container registry authentication secrets
If you provide authentication secrets using service accounts, OpenShift Pipelines supports the following types of secrets for container (Docker) registry authentication:

- kubernetes.io/basic-auth: A username and password for basic HTTP authentication
- kubernetes.io/dockercfg: A serialized ~/.dockercfg file
- kubernetes.io/dockerconfigjson: A serialized ~/.docker/config.json file
If you provide authentication secrets using service accounts, a container registry secret of the kubernetes.io/basic-auth type must have one or more annotation keys. The name of each key must begin with tekton.dev/docker-, and its value is the URL of the host for which OpenShift Pipelines must use the credentials in the secret. This annotation is not required for other types of container registry secrets.
In the following example, OpenShift Pipelines uses a basic-auth secret, which relies on a username and password, to access container registries at quay.io and my-registry.example.com.
Example: Credentials for Basic HTTP authentication with many container repositories
apiVersion: v1
kind: Secret
metadata:
name: docker-secret-basic
annotations:
tekton.dev/docker-0: quay.io
tekton.dev/docker-1: my-registry.example.com
type: kubernetes.io/basic-auth
stringData:
username: <username>
password: <password>
<username>- Username for the registry
<password>- Password or personal access token for the registry
You can create kubernetes.io/dockercfg and kubernetes.io/dockerconfigjson secrets from an existing configuration file, as in the following example:
Example: Command for creating a secret for authenticating to a container repository from an existing configuration file
$ oc create secret generic docker-secret-config \
--from-file=config.json=/home/user/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
You can also use the oc command line utility to create kubernetes.io/dockerconfigjson secrets from credentials, as in the following example:
Example: Command for creating a secret for authenticating to a container repository from credentials
$ oc create secret docker-registry docker-secret-config \
--docker-email=<email> \
--docker-username=<username> \
--docker-password=<password> \
--docker-server=my-registry.example.com:5000
<email>- Email address for the registry
<username>- Username for the registry
<password>- Password or personal access token for the registry
--docker-server- The hostname and port for the registry
5.2.2. Configuring Basic HTTP authentication for Git using a service account
For a pipeline to retrieve resources from password-protected repositories, you can configure the Basic HTTP authentication for that pipeline.
Consider using SSH-based authentication rather than Basic HTTP authentication.
To configure Basic HTTP authentication for a pipeline, create a Basic HTTP authentication secret, associate this secret with a service account, and associate this service account with a TaskRun or PipelineRun resource.
For GitHub, authentication using a plain password is deprecated. Instead, use a personal access token.
Procedure
Create the YAML manifest for the secret in the secret.yaml file. In this manifest, specify the username and password or GitHub personal access token to access the target Git repository.

apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <password>

name - Name of the secret. In this example, basic-user-pass.
<username> - Username for the Git repository.
<password> - Password or personal access token for the Git repository.
Create the YAML manifest for the service account in the serviceaccount.yaml file. In this manifest, associate the secret with the service account.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
  - name: basic-user-pass

name - Name of the service account. In this example, build-bot.
name: basic-user-pass - Name of the secret. In this example, basic-user-pass.
Create the YAML manifest for the task run or pipeline run in the run.yaml file and associate the service account with the task run or pipeline run. Use one of the following examples:

Associate the service account with a TaskRun resource:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot
  taskRef:
    name: build-push

name - Name of the task run. In this example, build-push-task-run-2.
serviceAccountName - Name of the service account. In this example, build-bot.
name - Name of the task. In this example, build-push.
Associate the service account with a PipelineRun resource:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: demo-pipeline
  namespace: default
spec:
  taskRunTemplate:
    serviceAccountName: build-bot
  pipelineRef:
    name: demo-pipeline

name - Name of the pipeline run. In this example, demo-pipeline.
serviceAccountName - Name of the service account. In this example, build-bot.
name - Name of the pipeline. In this example, demo-pipeline.
Apply the YAML manifests that you created by entering the following command:
$ oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml
5.2.3. Configuring SSH authentication for Git using a service account
For a pipeline to retrieve resources from repositories configured with SSH keys, you must configure the SSH-based authentication for that pipeline.
To configure SSH-based authentication for a pipeline, create an authentication secret with the SSH private key, associate this secret with a service account, and associate this service account with a TaskRun or PipelineRun resource.
Procedure
- Generate an SSH private key, or copy an existing private key, which is usually available in the ~/.ssh/id_rsa file.

Create the YAML manifest for the secret in the secret.yaml file. In this manifest, set the value of ssh-privatekey to the content of the SSH private key file, and set the value of known_hosts to the content of the known hosts file.

apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey:
  known_hosts:

metadata.name - Name of the secret containing the SSH private key. In this example, ssh-key.
ssh-privatekey - The content of the SSH private key file.
known_hosts - The content of the known hosts file.

Important: If you omit the known hosts file, OpenShift Pipelines accepts the public key of any server.
- Optional: Specify a custom SSH port by adding :<port_number> to the end of the annotation value. For example, tekton.dev/git-0: github.com:2222.
serviceaccount.yamlfile. In this manifest, associate the secret with the service account.apiVersion: v1 kind: ServiceAccount metadata: name: build-bot secrets: - name: ssh-keymetadata.name-
Name of the service account. In this example,
build-bot. secrets.name-
Name of the secret containing the SSH private key. In this example,
ssh-key.
In the run.yaml file, associate the service account with a task run or a pipeline run. Use one of the following examples:

To associate the service account with a task run:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot
  taskRef:
    name: build-push

metadata.name - Name of the task run. In this example, build-push-task-run-2.
serviceAccountName - Name of the service account. In this example, build-bot.
taskRef.name - Name of the task. In this example, build-push.
To associate the service account with a pipeline run:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: demo-pipeline
  namespace: default
spec:
  taskRunTemplate:
    serviceAccountName: build-bot
  pipelineRef:
    name: demo-pipeline

metadata.name - Name of the pipeline run. In this example, demo-pipeline.
serviceAccountName - Name of the service account. In this example, build-bot.
pipelineRef.name - Name of the pipeline. In this example, demo-pipeline.
Apply the changes.
$ oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml
5.2.4. Configuring container registry authentication using a service account
For a pipeline to retrieve container images from a registry or push container images to a registry, you must configure the authentication for that registry.
To configure registry authentication for a pipeline, create an authentication secret with the Docker configuration file, associate this secret with a service account, and associate this service account with a TaskRun or PipelineRun resource.
Procedure
Create the container registry authentication secret from an existing config.json file, which contains the authentication information, by entering the following command:

$ oc create secret generic my-registry-credentials \
  --from-file=config.json=/home/user/credentials/config.json

my-registry-credentials - The name of the secret.
--from-file - The path of the config.json file. In this example, /home/user/credentials/config.json.
Create the YAML manifest for the service account in the serviceaccount.yaml file. In this manifest, associate the secret with the service account.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: container-bot
secrets:
  - name: my-registry-credentials

metadata.name - Name of the service account. In this example, container-bot.
secrets.name - Name of the secret containing the registry credentials. In this example, my-registry-credentials.
Create a YAML manifest for a task run or pipeline run as the run.yaml file. In this file, associate the service account with a task run or a pipeline run. Use one of the following examples:

To associate the service account with a task run:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-container-task-run-2
spec:
  serviceAccountName: container-bot
  taskRef:
    name: build-container

metadata.name - Name of the task run. In this example, build-container-task-run-2.
serviceAccountName - Name of the service account. In this example, container-bot.
taskRef.name - Name of the task. In this example, build-container.
To associate the service account with a pipeline run:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: demo-pipeline
  namespace: default
spec:
  taskRunTemplate:
    serviceAccountName: container-bot
  pipelineRef:
    name: demo-pipeline

metadata.name - Name of the pipeline run. In this example, demo-pipeline.
serviceAccountName - Name of the service account. In this example, container-bot.
pipelineRef.name - Name of the pipeline. In this example, demo-pipeline.
Apply the changes by entering the following command:
$ oc apply --filename serviceaccount.yaml,run.yaml
5.2.5. Additional considerations for authentication using service accounts
In certain cases, you must complete additional steps to use authentication secrets that you provide using service accounts.
5.2.5.1. SSH Git authentication in tasks
You can directly start Git commands in the steps of a task and use SSH authentication, but you must complete an additional step.
OpenShift Pipelines provides the SSH files in the /tekton/home/.ssh directory and sets the $HOME variable to /tekton/home. However, Git SSH authentication ignores the $HOME variable and instead uses the home directory specified in the /etc/passwd file for the user. Therefore, a step that uses Git commands must symlink the /tekton/home/.ssh directory to the home directory of the associated user.
For example, if the task runs as the root user, the step must include the following command before Git commands:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: example-git-task
spec:
  steps:
    - name: example-git-step
      # ...
      script: |
        ln -s $HOME/.ssh /root/.ssh
      # ...
However, explicit symlinks are not necessary when you use a pipeline resource of the git type or the git-clone task available in the Tekton catalog.
As an example of using SSH authentication in git type tasks, refer to authenticating-git-commands.yaml.
5.2.5.2. Use of secrets as a non-root user
You might need to use secrets as a non-root user in certain scenarios, such as:
- The platform randomizes the users and groups that the containers use to start runs.
- The steps in a task define a non-root security context.
- A task specifies a global non-root security context, which applies to all steps in a task.
In such scenarios, consider the following aspects of executing task runs and pipeline runs as a non-root user:
-
SSH authentication for Git requires the user to have a valid home directory configured in the
/etc/passwd file. Specifying a UID that has no valid home directory results in authentication failure. -
SSH authentication ignores the
$HOMEenvironment variable. So you must copy or symlink the appropriate secret files from the $HOMEdirectory defined by OpenShift Pipelines (/tekton/home) to the non-root user’s valid home directory.
In addition, to configure SSH authentication in a non-root security context, refer to the git-clone-and-check step in the example for authenticating git commands.
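The copy-or-symlink requirement for non-root users can be sketched as a task definition. This is a minimal sketch: the task name, step name, container image, user ID, and home directory path are all assumptions for illustration, not values from the product documentation.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: non-root-git-fetch  # hypothetical task name
spec:
  steps:
    - name: fetch
      # Hypothetical image; any image that provides git and defines a valid
      # home directory for UID 1001 in /etc/passwd works here.
      image: example.com/tools/git-runner:latest
      securityContext:
        runAsUser: 1001
      script: |
        #!/usr/bin/env sh
        set -eu
        # Copy the SSH files from the Pipelines-provided home directory
        # to the non-root user's valid home directory (assumed /home/runner).
        cp -R /tekton/home/.ssh /home/runner/.ssh
        chmod 700 /home/runner/.ssh
        chmod -R 400 /home/runner/.ssh/*
        git clone git@github.com:example/repo.git
```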
5.3. Providing secrets using workspaces
You can use workspaces to provide secrets for authentication with Git repositories and container repositories.
You can configure a named workspace in a task, specifying a path where the workspace is mounted. When you run the task, provide the secret as the workspace with this name. When OpenShift Pipelines executes the task, the information in the secret is available to the task.
If you provide authentication secrets using workspaces, annotations for the secrets are not required.
5.3.1. Configuring SSH authentication for Git using workspaces
For a pipeline to retrieve resources from repositories configured with SSH keys, you must configure the SSH-based authentication for that pipeline.
To configure SSH-based authentication for a pipeline, create an authentication secret with the SSH private key, configure a named workspace for this secret in the task, and specify the secret when running the task.
Procedure
Create the Git SSH authentication secret from files in an existing
.sshdirectory by entering the following command:$ oc create secret generic my-github-ssh-credentials \ --from-file=id_ed25519=/home/user/.ssh/id_ed25519 \ --from-file=known_hosts=/home/user/.ssh/known_hostsmy-github-ssh-credentials- The name of the secret.
--from-file=id_ed25519-
The name and full path name of the private key file, in this example,
/home/user/.ssh/id_ed25519 --from-file=known_hosts-
The name and full path name of the known hosts file, in this example,
/home/user/.ssh/known_hosts
In your task definition, configure a named workspace for the Git authentication, for example,
ssh-directory:Example definition of a workspace
apiVersion: tekton.dev/v1 kind: Task metadata: name: git-clone spec: workspaces: - name: ssh-directory description: | A .ssh directory with private key, known_hosts, config, etc.-
In the steps of the task, access the directory using the path in the
$(workspaces.<workspace_name>.path)environment variable, for example,$(workspaces.ssh-directory.path) When running the task, specify the secret for the named workspace by including the
--workspaceargument in thetkn task startcommand:$ tkn task start <task_name> --workspace name=<workspace_name>,secret=<secret_name> # ...<secret_name>-
Replace
<workspace_name>with the name of the workspace that you configured and<secret_name>with the name of the secret that you created.
Example task for cloning a Git repository by using an SSH key for authentication
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-clone
spec:
  workspaces:
    - name: output
      description: The git repo will be cloned onto the volume backing this Workspace.
    - name: ssh-directory
      description: |
        A .ssh directory with private key, known_hosts, config, etc. Copied to
        the user's home before git commands are executed. Used to authenticate
        with the git remote when performing the clone. Binding a Secret to this
        Workspace is strongly recommended over other volume types.
  params:
    - name: url
      description: Repository URL to clone from.
      type: string
    - name: revision
      description: Revision to checkout. (branch, tag, sha, ref, etc...)
      type: string
      default: ""
    - name: gitInitImage
      description: The image providing the git-init binary that this Task runs.
      type: string
      default: "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.37.0"
  results:
    - name: commit
      description: The precise commit SHA that was fetched by this Task.
    - name: url
      description: The precise URL that was fetched by this Task.
  steps:
    - name: clone
      image: "$(params.gitInitImage)"
      script: |
        #!/usr/bin/env sh
        set -eu
        # This is necessary for recent versions of git
        git config --global --add safe.directory '*'
        cp -R "$(workspaces.ssh-directory.path)" "${HOME}"/.ssh
        chmod 700 "${HOME}"/.ssh
        chmod -R 400 "${HOME}"/.ssh/*
        CHECKOUT_DIR="$(workspaces.output.path)/"
        /ko-app/git-init \
          -url="$(params.url)" \
          -revision="$(params.revision)" \
          -path="${CHECKOUT_DIR}"
        cd "${CHECKOUT_DIR}"
        RESULT_SHA="$(git rev-parse HEAD)"
        EXIT_CODE="$?"
        if [ "${EXIT_CODE}" != 0 ] ; then
          exit "${EXIT_CODE}"
        fi
        printf "%s" "${RESULT_SHA}" > "$(results.commit.path)"
        printf "%s" "$(params.url)" > "$(results.url.path)"
-R-
The script copies the content of the secret (in the form of a folder) to
${HOME}/.ssh, which is the standard folder wheresshsearches for credentials.
Example command for running the task
$ tkn task start git-clone \
    --param url=git@github.com:example-github-user/buildkit-tekton \
    --workspace name=output,emptyDir="" \
    --workspace name=ssh-directory,secret=my-github-ssh-credentials \
    --use-param-defaults --showlog
5.3.2. Configuring container registry authentication using workspaces
For a pipeline to retrieve container images from a registry, you must configure the authentication for that registry.
To configure authentication for a container registry, create an authentication secret with the Docker configuration file, configure a named workspace for this secret in the task, and specify the secret when running the task.
Procedure
Create the container registry authentication secret from an existing
config.jsonfile, which has the authentication information, by entering the following command:$ oc create secret generic my-registry-credentials \ --from-file=config.json=/home/user/credentials/config.jsonmy-registry-credentials- The name of the secret.
--from-file-
The path name of the
config.jsonfile, in this example,/home/user/credentials/config.json
In your task definition, configure a named workspace for the container registry authentication, for example,
dockerconfig:Example definition of a workspace
apiVersion: tekton.dev/v1 kind: Task metadata: name: skopeo-copy spec: workspaces: - name: dockerconfig description: Includes a docker `config.json` # ...-
In the steps of the task, access the directory by using the path in the
$(workspaces.<workspace_name>.path)environment variable, for example,$(workspaces.dockerconfig.path). To run the task, specify the secret for the named workspace by including the
--workspaceargument in thetkn task startcommand:$ tkn task start <task_name> --workspace name=<workspace_name>,secret=<secret_name> # ...<secret_name>-
Replace
<workspace_name>with the name of the workspace that you configured and<secret_name>with the name of the secret that you created.
Example task for copying an image from a container repository by using Skopeo
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: skopeo-copy
spec:
  workspaces:
    - name: dockerconfig
      description: Includes a docker `config.json`
  steps:
    - name: clone
      image: quay.io/skopeo/stable:v1.8.0
      env:
        - name: DOCKER_CONFIG
          value: $(workspaces.dockerconfig.path)
      script: |
        #!/usr/bin/env sh
        set -eu
        skopeo copy docker://docker.io/library/ubuntu:latest docker://quay.io/example_repository/ubuntu-copy:latest
workspaces.name-
The name of the workspace that has the
config.jsonfile. env.value-
The
DOCKER_CONFIGenvironment variable points to the location of theconfig.jsonfile in thedockerconfigworkspace. Skopeo uses this environment variable to get the authentication information.
Example command for running the task
$ tkn task start skopeo-copy \
    --workspace name=dockerconfig,secret=my-registry-credentials \
    --use-param-defaults --showlog
5.3.3. Limiting a secret to particular steps using workspaces
When you provide authentication secrets using workspaces and define the workspace in a task, by default the workspace is available to all steps in the task.
To limit a secret to specific steps, define the workspace both in the task specification and in the step specification.
Procedure
Add the
workspaces:definition under both the task specification and the step specification, as in the following example:Example task definition where only one step can access the credentials workspace
apiVersion: tekton.dev/v1 kind: Task metadata: name: git-clone-build spec: workspaces: - name: ssh-directory description: | A .ssh directory with private key, known_hosts, config, etc. # ... steps: - name: clone workspaces: - name: ssh-directory # ... - name: build # ...spec.workspaces-
The definition of the
ssh-directoryworkspace in the task specification. steps.workspaces-
The definition of the
ssh-directoryworkspace in the step specification. The authentication information is available to this step as the$(workspaces.ssh-directory.path)directory. steps.name: build-
As this step does not include a definition of the
ssh-directoryworkspace, the authentication information is not available to this step.
Chapter 6. Building of container images using Buildah as a non-root user
Running OpenShift Pipelines as the root user on a container can expose the container processes and the host to other potentially malicious resources. You can reduce this type of exposure by running the workload as a specific non-root user in the container.
In most cases, you can run Buildah without root privileges by creating a custom task for building the image and configuring user namespaces in this task.
If your image does not build successfully using this configuration, you can use custom service account (SA) and security context constraint (SCC) definitions; however, if you use this option, you must enable the Buildah step to raise its privileges (allowPrivilegeEscalation: true).
6.1. Running Buildah as a non-root user by configuring user namespaces
Configuring user namespaces is the simplest way to run Buildah in a task as a non-root user. However, some images might not build using this option.
Prerequisites
-
You have installed the
occommand-line utility.
Procedure
To create a copy of the
buildahtask, which Red Hat provides in theopenshift-pipelinesnamespace, and to change the name of the copy tobuildah-as-user, enter the following command:$ oc get task buildah -n openshift-pipelines -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == "name" )))' | yq '.kind="Task"' | yq '.metadata.name="buildah-as-user"' | oc create -f -Edit the copied
buildahtask by entering the following command:$ oc edit task buildah-as-userIn the new task, create
annotationsandstepTemplatesections, as shown in the following example:Example additions to the
buildah-as-usertaskapiVersion: tekton.dev/v1 kind: Task metadata: annotations: io.kubernetes.cri-o.userns-mode: 'auto:size=65536;map-to-root=true' io.openshift.builder: 'true' name: assemble-containerimage namespace: pipeline-namespace spec: description: This task builds an image. # ... stepTemplate: env: - name: HOME value: /tekton/home image: $(params.builder-image) imagePullPolicy: IfNotPresent name: '' resources: limits: cpu: '1' memory: 4Gi requests: cpu: 100m memory: 2Gi securityContext: capabilities: add: - SETFCAP runAsNonRoot: true runAsUser: 1000 workingDir: $(workspaces.working-directory.path) # ...runAsUser-
The
runAsUser:setting is not strictly necessary, because the task usespodTemplate.
-
Use the new
buildah-as-usertask to build the image in your pipeline.
6.2. Running Buildah as a non-root user by defining a custom SA and SCC
To run builds of container images using Buildah as a non-root user, you can perform the following steps:
- Define a custom service account (SA) and a security context constraint (SCC).
-
Configure Buildah to use the
builduser with id1000. - Start a task run with a custom config map, or integrate it with a pipeline run.
6.2.1. Configuring custom service account and security context constraint
The default pipeline SA allows using a user ID outside of the namespace range. To reduce dependency on the default SA, you can define a custom SA and security context constraint (SCC) with the necessary cluster role and role bindings for the build user with user ID 1000.
At this time, Buildah requires enabling the allowPrivilegeEscalation setting to run successfully in the container. With this setting, Buildah can use SETUID and SETGID capabilities when running as a non-root user.
Procedure
Create a custom SA and SCC with necessary cluster role and role bindings.
Example: Custom SA and SCC for user id
1000apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-sa-userid-1000 --- kind: SecurityContextConstraints metadata: annotations: name: pipelines-scc-userid-1000 allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD - KILL runAsUser: type: MustRunAs uid: 1000 seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-userid-1000-clusterrole rules: - apiGroups: - security.openshift.io resourceNames: - pipelines-scc-userid-1000 resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pipelines-scc-userid-1000-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-userid-1000-clusterrole subjects: - kind: ServiceAccount name: pipelines-sa-userid-1000name- Define a custom SA.
SecurityContextConstraints-
Define a custom SCC created based on restricted privileges, with modified
runAsUserfield. allowPrivilegeEscalation-
At this time, Buildah requires enabling the
allowPrivilegeEscalationsetting to run successfully in the container. With this setting, Buildah can useSETUIDandSETGIDcapabilities when running as a non-root user. uid-
Restrict any pod that gets attached with the custom SCC through the custom SA to run as user id
1000. ClusterRole- Define a cluster role that uses the custom SCC.
RoleBinding- Bind the cluster role that uses the custom SCC to the custom SA.
6.2.2. Configuring Buildah to use build user
You can define a Buildah task to use the build user with user id 1000.
Procedure
Create a copy of the
buildahtask in theopenshift-pipelinesnamespace; change the name of the copy tobuildah-as-user.$ oc get task buildah -n openshift-pipelines -o yaml \ | yq '. |= (del .metadata |= with_entries(select(.key == "name" )))' \ | yq '.kind="Task"' | yq '.metadata.name="buildah-as-user"' | oc create -f -Edit the copied
buildahtask.$ oc edit task buildah-as-userExample: Modified Buildah task with
builduserapiVersion: tekton.dev/v1 kind: Task metadata: name: buildah-as-user spec: description: >- Buildah task builds source into a container image and then pushes it to a container registry. Buildah Task builds source into a container image using Project Atomic's Buildah build tool.It uses Buildah's support for building from Dockerfiles, using its buildah bud command.This command executes the directives in the Dockerfile to assemble a container image, then pushes that image to a container registry. params: - name: IMAGE description: Reference of the image buildah will produce. - name: BUILDER_IMAGE description: The location of the buildah builder image. default: registry.redhat.io/rhel8/buildah@sha256:99cae35f40c7ec050fed3765b2b27e0b8bbea2aa2da7c16408e2ca13c60ff8ee - name: STORAGE_DRIVER description: Set buildah storage driver default: vfs - name: DOCKERFILE description: Path to the Dockerfile to build. default: ./Dockerfile - name: CONTEXT description: Path to the directory to use as context. default: . - name: TLSVERIFY description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry) default: "true" - name: FORMAT description: The format of the built container, oci or docker default: "oci" - name: BUILD_EXTRA_ARGS description: Extra parameters passed for the build command when building images. default: "" - description: Extra parameters passed for the push command when pushing images. name: PUSH_EXTRA_ARGS type: string default: "" - description: Skip pushing the built image name: SKIP_PUSH type: string default: "false" results: - description: Digest of the image just built. 
name: IMAGE_DIGEST type: string workspaces: - name: source steps: - name: build securityContext: runAsUser: 1000 image: $(params.BUILDER_IMAGE) workingDir: $(workspaces.source.path) script: | echo "Running as USER ID `id`" buildah --storage-driver=$(params.STORAGE_DRIVER) bud \ $(params.BUILD_EXTRA_ARGS) --format=$(params.FORMAT) \ --tls-verify=$(params.TLSVERIFY) --no-cache \ -f $(params.DOCKERFILE) -t $(params.IMAGE) $(params.CONTEXT) [[ "$(params.SKIP_PUSH)" == "true" ]] && echo "Push skipped" && exit 0 buildah --storage-driver=$(params.STORAGE_DRIVER) push \ $(params.PUSH_EXTRA_ARGS) --tls-verify=$(params.TLSVERIFY) \ --digestfile $(workspaces.source.path)/image-digest $(params.IMAGE) \ docker://$(params.IMAGE) cat $(workspaces.source.path)/image-digest | tee /tekton/results/IMAGE_DIGEST volumeMounts: - name: varlibcontainers mountPath: /home/build/.local/share/containers volumes: - name: varlibcontainers emptyDir: {}runAsUser-
Run the container explicitly as the user id
1000, which corresponds to thebuilduser in the Buildah image. echo "Running as USER ID `id`"-
Display the user id to confirm that the process is running as user id
1000. mountPath- You can change the path for the volume mount as necessary.
6.2.3. Starting a task run with custom config map, or a pipeline run
After defining the custom Buildah task, you can create a TaskRun object that builds an image as a build user with user id 1000. In addition, you can integrate the TaskRun object as part of a PipelineRun object.
Procedure
Create a
TaskRunobject with a customConfigMapandDockerfileobjects.Example: A task run that runs Buildah as user id
1000apiVersion: v1 data: Dockerfile: | ARG BASE_IMG=registry.access.redhat.com/ubi9/ubi FROM $BASE_IMG AS buildah-runner RUN dnf -y update && \ dnf -y install git && \ dnf clean all CMD git kind: ConfigMap metadata: name: dockerfile --- apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: buildah-as-user-1000 spec: taskRunTemplate: serviceAccountName: pipelines-sa-userid-1000 params: - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser taskRef: kind: Task name: buildah-as-user workspaces: - configMap: name: dockerfile name: sourceconfigMapRef.name- Use a config map because the focus is on the task run, without any prior task that fetches some sources with a Dockerfile.
serviceAccountName- The name of the service account that you created.
workspaces.name-
Mount a config map as the source workspace for the
buildah-as-usertask.
(Optional) Create a pipeline and a corresponding pipeline run.
Example: A pipeline and corresponding pipeline run
apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: pipeline-buildah-as-user-1000 spec: params: - name: IMAGE - name: URL workspaces: - name: shared-workspace - name: sslcertdir optional: true tasks: - name: fetch-repository taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: shared-workspace params: - name: URL value: $(params.URL) - name: SUBDIRECTORY value: "" - name: DELETE_EXISTING value: "true" - name: buildah taskRef: name: buildah-as-user runAfter: - fetch-repository workspaces: - name: source workspace: shared-workspace - name: sslcertdir workspace: sslcertdir params: - name: IMAGE value: $(params.IMAGE) --- apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipelinerun-buildah-as-user-1000 spec: taskRunSpecs: - pipelineTaskName: buildah taskServiceAccountName: pipelines-sa-userid-1000 params: - name: URL value: https://github.com/openshift/pipelines-vote-api - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser pipelineRef: name: pipeline-buildah-as-user-1000 workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mitasks.name: fetch-repository-
Use the
git-clonetask to fetch the source containing a Dockerfile and build it using the modified Buildah task. taskRef.name- Refer to the modified Buildah task.
taskServiceAccountName- Use the service account that you created for the Buildah task.
workspaces.name: shared-workspace-
Share data between the
git-clonetask and the modified Buildah task using a persistent volume claim (PVC) created automatically by the controller.
- Start the task run or the pipeline run.
6.3. Limitations of unprivileged builds
The process for unprivileged builds works with most Dockerfile objects. However, some known limitations might cause a build to fail:
-
Using the
--mount=type=cacheoption might fail due to a lack of necessary permissions. For more information, see this article. -
Using the
--mount=type=secretoption fails because mounting resources requires additional capabilities that are not provided by the custom SCC.
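For reference, this is the kind of Containerfile instruction that triggers the first limitation. The target path and the installed package are placeholders, not values from the product documentation:

```dockerfile
# A build-time cache mount; under the unprivileged configuration this
# RUN instruction can fail with a permissions error.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --requirement requirements.txt
```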
Additional resources
Chapter 7. Using the buildah-ns Tekton task
The buildah-ns Tekton task builds Open Container Initiative (OCI) images without requiring a container runtime daemon, such as the Docker daemon. The task uses buildah and applies user namespace isolation to provide enhanced security.
After a successful build, the task produces the following results:
- The fully qualified image name
- The SHA256 digest of the image
The buildah-ns task is functionally identical to the standard buildah Tekton task, but applies additional security mechanisms to improve container isolation at the kernel level.
7.1. Differences between buildah and buildah-ns tasks
The buildah-ns task extends the standard buildah task with the following security-focused changes:
-
Task name: The task name is
buildah-nsinstead ofbuildah. Annotations: The task includes security annotations that enable automatic user namespace mapping:
io.kubernetes.cri-o.userns-mode: "auto"
io.openshift.builder: "true"
- Security model: User namespace separation improves privilege isolation and limits the impact of potential container escape vulnerabilities.
7.2. Security model of the buildah-ns task
The buildah-ns task applies user namespace isolation to give privilege separation between containers and the host system.
7.2.1. UID mapping behavior
When the task runs with namespace annotations, the system maps user IDs (UIDs) as follows:
- Inside the container: Processes run as UID 0, which is displayed as the root user.
- Outside the container: The same processes run as a nonzero UID on the host system.
This mapping allows processes inside the container to behave as if they have root privileges while restricting their privileges on the host system.
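You can observe the same mapping on a plain Linux host with the unshare utility from util-linux. This is an illustrative sketch, not part of the buildah-ns task itself: inside the new user namespace the process sees UID 0, while /proc/self/uid_map records the host UID it really maps to (the annotation io.kubernetes.cri-o.userns-mode sets this up automatically inside the cluster).

```shell
# Create an unprivileged user namespace that maps the current user to root,
# then show the UID seen inside and the kernel's UID mapping table.
# Requires a kernel with unprivileged user namespaces enabled.
unshare --user --map-root-user sh -c 'echo "inside: $(id -u)"; cat /proc/self/uid_map'
```

Each line of /proc/self/uid_map has three fields: the first UID inside the namespace, the first UID outside it, and the length of the mapped range.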
7.2.2. Security benefits
User namespace isolation provides the following security advantages:
- Kernel-level isolation: Adds an extra isolation boundary between containers.
- Reduced privilege exposure: Limits the impact of compromised workloads by running them as non-root users on the host.
- Container escape protection: Helps mitigate potential vulnerabilities that allow escaping from the container runtime environment.
7.3. Workspaces, parameters, and results for the buildah-ns task
The buildah-ns task requires a workspace, accepts several parameters for image build customization, and provides results that contain information about the built image.
7.3.1. Workspace
| Name | Required | Description |
|---|---|---|
| source | Yes | The build context for the container image. Typically has application source code and a |
7.3.2. Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| | string | Required | Fully qualified name of the image to build, including tag. |
| CONTAINERFILE_PATH | string | Containerfile | Path to the container build file relative to the source workspace. |
| TLS_VERIFY | string | true | Whether to verify TLS when pushing images. Red Hat recommends setting this value to |
| VERBOSE | string | false | Enables verbose build output. |
| SUBDIRECTORY | string | . | Subdirectory in the workspace to use as the build context. |
| STORAGE_DRIVER | string | overlay | Storage driver for Buildah, aligned with the cluster node configuration. |
| BUILD_EXTRA_ARGS | string | Empty | Additional flags for the |
| PUSH_EXTRA_ARGS | string | Empty | Additional flags for the |
| SKIP_PUSH | string | false | If set to |
7.3.3. Results
| Name | Description |
|---|---|
| IMAGE_URL | Fully qualified name of the built image. |
| IMAGE_DIGEST | SHA256 digest of the built image. |
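Later tasks in the same pipeline can consume these results to pin the image by digest. A hedged sketch of a Pipeline fragment, in which the task names build and verify-image, the image parameter name, and the verification task are assumptions for illustration:

```yaml
# Fragment of a Pipeline spec; a later task references the results
# that the buildah-ns task emits.
tasks:
  - name: build
    taskRef:
      name: buildah-ns
    # ...
  - name: verify-image
    runAfter:
      - build
    params:
      - name: image
        value: "$(tasks.build.results.IMAGE_URL)@$(tasks.build.results.IMAGE_DIGEST)"
    taskRef:
      name: verify-image  # hypothetical verification task
```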
7.4. Running the buildah-ns task
You can run the buildah-ns task as part of a PipelineRun resource.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata: {}
spec:
  pipelineRef:
    name: task-buildah-ns
  params:
    - name: IMAGE
      value: your-image-name
    - name: TLS_VERIFY
      value: "true"
    - name: VERBOSE
      value: "false"
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: your-pvc-name
value-
Replace
your-image-namewith the full name of the container image that you want to build. claimName-
Replace
your-pvc-namewith the name of thePersistentVolumeClaim(PVC) that stores the application source code.
If the target container registry requires authentication, configure a Kubernetes secret for registry access and link it to the service account that runs the TaskRun or PipelineRun resources.
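A minimal sketch of that setup, assuming a secret named registry-credentials and a service account named pipeline; both names are assumptions, so adjust them to your environment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
type: kubernetes.io/dockerconfigjson
data:
  # Base64-encoded Docker config.json containing the registry credentials
  .dockerconfigjson: <base64-encoded config.json>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
  - name: registry-credentials
imagePullSecrets:
  - name: registry-credentials
```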
Additional resources