
Chapter 1. Red Hat OpenShift Container Platform


1.1. Protect your signing data

As a systems administrator, protecting the signing data of your software supply chain is critical, whether data is lost to hardware failure or accidental deletion.

The OpenShift API Data Protection (OADP) product provides data protection to applications running on Red Hat OpenShift Container Platform. Using OADP helps get software developers back to signing and verifying code as quickly as possible after a data loss. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.

1.1.1. Installing and configuring the OADP Operator

The OpenShift API Data Protection (OADP) Operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP Operator to back up and restore your Trusted Artifact Signer data.

Important

This procedure uses Amazon Web Services (AWS) Simple Storage Service (S3) to create a bucket, to illustrate how to configure the OADP operator. You can use a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.

Prerequisites

  • Red Hat OpenShift Container Platform 4.15 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • The ability to create an S3-compatible bucket.
  • A workstation with the oc and aws binaries installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  2. Create a new bucket:

    export BUCKET=NEW_BUCKET_NAME
    export REGION=AWS_REGION_ID
    export USER=OADP_USER_NAME
    
    aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
    $ export BUCKET=example-bucket-name
    $ export REGION=us-east-1
    $ export USER=velero
    $
    $ aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
  3. Create a new user:

    $ aws iam create-user --user-name $USER
  4. Create a new policy:

    $ cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
  5. Associate this policy to the new user:

    $ aws iam put-user-policy \
    --user-name $USER \
    --policy-name velero \
    --policy-document file://velero-policy.json
  6. Create an access key:

    $ aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'
  7. Create a credentials file with your AWS secret key information:

    $ cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=$AWS_ACCESS_KEY_ID
    aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
    EOF
  8. Log in to the OpenShift web console with a user that has the cluster-admin role.
  9. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  10. In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
  11. Click the Install button to show the operator details.
  12. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  13. After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
  14. From the OpenShift web console, click the View Operator button.
  15. Click Create instance on the DataProtectionApplication (DPA) tile.
  16. On the Create DataProtectionApplication page, select YAML view.
  17. Edit the following values in the resource file, as shown in the example after this list:

    1. Under the metadata section, replace velero-sample with velero.
    2. Under the spec.configuration.nodeAgent section, replace restic with kopia.
    3. Under the spec.configuration.velero section, add resourceTimeout: 10m.
    4. Under the spec.configuration.velero.defaultPlugins section, add - csi.
    5. Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
    6. Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
    7. Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix.
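
    The following is a minimal sketch of the edited resource after these changes. It assumes the default values from the OADP template; example-bucket-name, us-east-1, and the velero prefix are placeholders for your own bucket, region, and prefix:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: velero
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          resourceTimeout: 10m
          defaultPlugins:
            - openshift
            - aws
            - csi
      snapshotLocations:
        - velero:
            provider: aws
            config:
              region: us-east-1
      backupLocations:
        - velero:
            provider: aws
            default: true
            credential:
              key: cloud
              name: cloud-credentials
            objectStorage:
              bucket: example-bucket-name
              prefix: velero
            config:
              region: us-east-1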
  18. Click the Create button.

1.1.2. Backing up your Trusted Artifact Signer data

With the OpenShift API Data Protection (OADP) operator installed and with an instance deployed, you can create a volume snapshot resource, and a backup resource to backup your Red Hat Trusted Artifact Signer (RHTAS) data.

Prerequisites

  • Red Hat OpenShift Container Platform 4.15 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of the OADP operator.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  2. Find and edit the VolumeSnapshotClass resource:

    $ oc get VolumeSnapshotClass -n openshift-adp
    $ oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp
  3. Update the following values in the resource file:

    1. Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
    2. Save your changes, and quit the editor. A single-command alternative is shown after these steps.
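
    Alternatively, you can apply the label without opening an editor. The following one-liner is a sketch using oc label with the csi-aws-vsc class name from the previous step (VolumeSnapshotClass is cluster-scoped, so no namespace is needed):

    $ oc label volumesnapshotclass csi-aws-vsc velero.io/csi-volumesnapshot-class="true" --overwrite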
  4. Create a one-time, initial Backup job resource:

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: rhtas-backup
      labels:
        velero.io/storage-location: velero-1
      namespace: openshift-adp
    spec:
      hooks: {}
      includedNamespaces:
      - trusted-artifact-signer
      includedResources: []
      excludedResources: []
      snapshotMoveData: true
      storageLocation: velero-1
      ttl: 720h0m0s
    EOF

    By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify which resources to include or exclude by using the includedResources or excludedResources properties, respectively.

    Important

    Depending on the storage class of the backup target, persistent volumes must not be actively in use for the backup to succeed.
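
    After the Backup resource is applied, you can watch its progress. A minimal check, assuming the standard Velero status fields on the Backup resource:

    $ oc get backup rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'

    The phase reports InProgress while the backup runs, and Completed when it finishes.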

  5. Create a new schedule for regular backups to occur:

    $ cat << EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Schedule
    metadata:
      name: BACKUP_JOB_NAME
      namespace: openshift-adp
    spec:
      schedule: USER_DEFINED_SCHEDULE
      template:
        hooks: {}
        includedNamespaces:
        - trusted-artifact-signer
        storageLocation: velero-1
        defaultVolumesToFsBackup: true
        ttl: 720h0m0s
    EOF

    Replace BACKUP_JOB_NAME with a job name, and replace USER_DEFINED_SCHEDULE with a cron-formatted expression for the schedule. For example, a cron-formatted schedule of */10 * * * * backs up the trusted-artifact-signer namespace and its resources every 10 minutes.

    1. You can verify that this schedule is enabled, and when the last backup job ran. For example:

      $ oc get schedule -n openshift-adp
      
      NAME            STATUS    SCHEDULE       LASTBACKUP   AGE   PAUSED
      rhtas-backups   Enabled   0/10 * * * *   3m11s        16m

1.1.3. Restoring your Trusted Artifact Signer data

With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup resource for the RHTAS namespace, you can restore your data to an OpenShift cluster.

Prerequisites

  • Red Hat OpenShift Container Platform 4.15 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of the RHTAS and OADP operators.
  • A backup of the RHTAS namespace, such as the rhtas-backup resource.
  • A workstation with the oc binary installed.

Procedure

  1. Disable the RHTAS operator:

    $ oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators
  2. Create the Restore resource:

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: rhtas-restore
      namespace: openshift-adp
    spec:
      backupName: rhtas-backup
      includedResources: []
      restoreStatus:
        includedResources:
          - securesign.rhtas.redhat.com
          - trillian.rhtas.redhat.com
          - ctlog.rhtas.redhat.com
          - fulcio.rhtas.redhat.com
          - rekor.rhtas.redhat.com
          - tuf.rhtas.redhat.com
          - timestampauthority.rhtas.redhat.com
      excludedResources:
      - pod
      - deployment
      - nodes
      - route
      - service
      - replicaset
      - events
      - cronjob
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - pods
      - deployments
      restorePVs: true
      existingResourcePolicy: update
    EOF
  3. If restoring your RHTAS data to a different OpenShift cluster, do the following steps.

    1. Delete the secret for the Trillian database:

      $ oc delete secret securesign-sample-trillian-db-tls
      $ oc delete pod trillian-db-xxx
      Note

      The RHTAS operator recreates the secret and restarts the pod.

    2. Run the restoreOwnerReferences.sh script.
  4. Enable the RHTAS operator:

    $ oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators
    Important

    Starting the RHTAS operator immediately after starting the restore ensures that the persistent volume is claimed.
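
    You can track the restore's progress the same way as a backup. A minimal check, assuming the standard Velero status fields on the Restore resource:

    $ oc get restore rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'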

1.2. The Update Framework

As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important for maintaining a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.

Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software, and hosts the TUF repository of the individual targets.

By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. The expiration date of all metadata files defaults to 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
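
For example, you can check when a metadata file expires by querying the TUF server directly. This is a quick sketch; it assumes the Tuf custom resource reports its route in status.url (as the Rekor resource does), and that the server publishes the standard timestamp.json metadata file:

$ export TUF_URL=$(oc get tuf -o jsonpath='{.items[0].status.url}')
$ curl -s $TUF_URL/timestamp.json | jq -r '.signed.expires'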

1.2.2. Updating The Update Framework metadata files

By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.

This procedure walks you through refreshing the root and non-root metadata files.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Configure your shell environment:

    $ export WORK="${HOME}/trustroot-example"
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    $ export TIMESTAMP_EXPIRATION="in 10 days"
    $ export SNAPSHOT_EXPIRATION="in 26 weeks"
    $ export TARGETS_EXPIRATION="in 26 weeks"
    $ export ROOT_EXPIRATION="in 26 weeks"

    Set the expiration durations according to your requirements.

  5. Create a temporary TUF directory structure:

    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
  6. Download the TUF contents to the temporary TUF directory structure:

    $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
    $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"
  7. You can update the timestamp, snapshot, and targets metadata all in one command:

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --key "${KEYDIR}/snapshot.pem" \
      --key "${KEYDIR}/targets.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
      --targets-expires "${TARGETS_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"
    Note

    You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can just update the timestamp metadata file by running the following command:

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"
  8. Only update the root expiration date if it is about to expire:

    $ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"
    Note

    You can skip this step if the root file is not close to expiring.

  9. Update the root version:

    $ tuftool root bump-version "${ROOT}"
  10. Sign the root metadata file again:

    $ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"
  11. Set the new root version, and copy the root metadata file in place:

    $ export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
    $ cp "${ROOT}" "${TUF_REPO}/root.json"
    $ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"
  12. Upload these changes to the TUF server:

    $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
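
    After the upload, you can optionally confirm that the refreshed metadata is being served. A quick check, assuming the Tuf custom resource reports its route in status.url:

    $ export TUF_URL=$(oc get tuf -o jsonpath='{.items[0].status.url}')
    $ curl -s $TUF_URL/timestamp.json | jq -r '.signed.expires'

    The reported expiration should reflect your new TIMESTAMP_EXPIRATION value.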

1.3. Rotate your certificates and keys

As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:

  • Rekor
  • Certificate Transparency log
  • Fulcio
  • Timestamp Authority

1.3.1. Rotating the Rekor signer key

You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.

Important

This procedure requires downtime to the Rekor service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the rekor-cli binary from the OpenShift cluster to your workstation.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the rekor-cli download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      $ gunzip rekor-cli-amd64.gz
      $ chmod +x rekor-cli-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
  2. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  3. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  4. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  5. Get the Rekor URL:

    $ export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')
  6. Get the log tree identifier for the active shard:

    $ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
  7. Set the log tree to the DRAINING state:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING

    While draining, the log tree will not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.

  8. Freeze the log tree:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN
  9. Get the length of the frozen log tree:

    $ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
  10. Get Rekor’s public key for the old shard:

    $ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
  11. Create a new log tree:

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=rekor-tree)

    Now you have two log trees, one frozen tree, and a new tree that will become the active shard.

  12. Create a new private key:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-rekor.pem
    Important

    The new key must have a unique file name.

  13. Create a new secret resource with the new signer key:

    $ oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem
  14. Update the Securesign Rekor configuration with the new tree identifier and the old sharding information:

    $ read -r -d '' SECURESIGN_PATCH_1 <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/rekor/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "add",
            "path": "/spec/rekor/sharding/-",
            "value": {
                "treeID": $OLD_TREE_ID,
                "treeLength": $OLD_SHARD_LENGTH,
                "encodedPublicKey": "$OLD_PUBLIC_KEY"
            }
        },
        {
            "op": "replace",
            "path": "/spec/rekor/signer/keyRef",
            "value": {"name": "rekor-signer-key", "key": "private"}
        }
    ]
    EOF
    Note

    If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a new separate update to remove it:

    $ read -r -d '' SECURESIGN_PATCH_2 <<EOF
    [
        {
            "op": "remove",
            "path": "/spec/rekor/signer/keyPasswordRef"
        }
    ]
    EOF

    Apply this update after applying the first update.
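
    For example, after applying the first update in the next step, run:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_2"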

  15. Update the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_1"
  16. Wait for the Rekor server to redeploy with the new signer key:

    $ oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready
  17. Get the new public key:

    $ export NEW_KEY_NAME=new-rekor.pub
    $ curl $(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o $NEW_KEY_NAME
  18. Configure The Update Framework (TUF) service to use the new Rekor public key.

    1. Configure your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active Rekor signer key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file, you will find the active Rekor signer key file name, for example, rekor.pub. Set an environment variable with this active Rekor signer key file name:

      $ export ACTIVE_KEY_NAME=rekor.pub
    5. Save the old public key to the active Rekor signer key file:

      $ echo $OLD_PUBLIC_KEY | base64 -d > $ACTIVE_KEY_NAME
    6. Expire the old Rekor signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${ACTIVE_KEY_NAME}" \
        --rekor-uri "${REKOR_URL}" \
        --rekor-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new Rekor signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${NEW_KEY_NAME}" \
        --rekor-uri "${REKOR_URL}" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    9. Delete the working directory:

      $ rm -r $WORK
  19. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
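
    For example, a verification against the refreshed trust root might look like the following sketch, where IMAGE, and the identity and issuer values, are placeholders for your own signing setup:

    $ cosign verify --certificate-identity=signer@example.com --certificate-oidc-issuer=https://oidc.example.com IMAGE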

1.3.2. Rotating the Certificate Transparency log signer key

You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Make a backup of the current CT log configuration, and keys:

    $ export SERVER_CONFIG_NAME=$(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}')
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem
  5. Capture the current tree identifier:

    $ export OLD_TREE_ID=$(oc get ctlog -o jsonpath='{.items[0].status.treeID}')
  6. Set the log tree to the DRAINING state:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING

    While draining, the log tree will not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.

  7. Once the queue has been fully drained, freeze the log:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN
  8. Create a new Merkle tree, and capture the new tree identifier:

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=ctlog-tree)
  9. Generate a new certificate, along with new public and private keys:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
    $ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
    $ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.

  10. Update the CT log configuration.

    1. Open the config.txtpb file for editing.
    2. For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old private key path with /ctfe-keys/private-0:

      ...
      log_configs:{
        # frozen log
        config:{
          log_id:2066075212146181968
          prefix:"trusted-artifact-signer-0"
          roots_pem_file:"/ctfe-keys/fulcio-0"
          private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private-0" password:"Example123"}}
          public_key:{der:"0Y0\x13\x06\x07*\x86H\xce=\x02\x01\x06\x08*\x86H\xce=\x03\x01\x07\x03B\x00\x04)'.\xffUJ\xe2s)\xefR\x8a\xfcO\xdcewȶy\xa7\x9d<\x13\xb0\x1c\x99\x96\xe4'\xe3v\x07:\xc8I+\x08J\x9d\x8a\xed\x06\xe4\xaeI:q\x98\xf4\xbc<o4VD\x0cr\xf9\x9c\xecxT\x84"}
          not_after_limit:{seconds:1728056285 nanos:012111000}
          ext_key_usages:"CodeSigning"
          log_backend_name:"trillian"
        }
      Note

      You can get the current time values for seconds and nanoseconds by running the date +%s and date +%N commands.

      Important

      The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.

    3. Copy and paste the frozen log config block, appending it to the configuration file to create a new entry.
    4. Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to ctfe-keys/private, remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range:

      ...
      log_configs:{
        # frozen log
        ...
        # new active log
        config:{
      	  log_id: NEW_TREE_ID
      	  prefix:"trusted-artifact-signer"
      	  roots_pem_file:"/ctfe-keys/fulcio-0"
      	  private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"ctfe-keys/private" password:"CHANGE_ME"}}
      	  ext_key_usages:"CodeSigning"
      	  not_after_start:{seconds:1713201754 nanos:155663000}
      	  log_backend_name:"trillian"
        }

      Add the NEW_TREE_ID, and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.

      Important

      The not_after_start field defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time.

  11. Create a new secret resource:

    $ oc create secret generic ctlog-config \
    --from-file=config=config.txtpb \
    --from-file=private=new-ctlog.pass.pem \
    --from-file=public=new-ctlog-public.pem \
    --from-file=fulcio-0=fulcio-0.pem \
    --from-file=private-0=private.pem \
    --from-file=public-0=public.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with the new private key password.

  12. Configure The Update Framework (TUF) service to use the new CT log public key.

    1. Configure your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub. Set an environment variable with this active CT log public key file name:

      $ export ACTIVE_CTFE_NAME=ctfe.pub
    5. Extract the active CT log public key from OpenShift:

      $ oc get secret $(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > $ACTIVE_CTFE_NAME
    6. Expire the old CT log signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "$ACTIVE_CTFE_NAME" \
        --ctlog-uri "https://ctlog.rhtas" \
        --ctlog-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new CT log signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "new-ctlog-public.pem" \
        --ctlog-uri "https://ctlog.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
  13. Update the Securesign CT log configuration with the new tree identifier:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/ctlog/serverConfigRef",
            "value": {"name": "ctlog-config"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyRef",
            "value": {"name": "ctlog-config", "key": "private"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyPasswordRef",
            "value": {"name": "ctlog-config", "key": "password"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/publicKeyRef",
            "value": {"name": "ctlog-config", "key": "public"}
        }
    ]
    EOF
  14. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  15. Wait for the CT log server to redeploy:

    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
  16. Delete the working directory:

    $ rm -r $WORK
  17. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new CT log signer key.

1.3.3. Rotating the Fulcio certificate

You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Generate a new certificate, along with new public and private keys:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
    $ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
    $ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
    $ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.

  5. Create a new secret:

    $ oc create secret generic fulcio-config \
    --from-file=private=new-fulcio.pass.pem \
    --from-file=cert=new-fulcio.cert.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with the password you used when generating the new keys.

    Note

    The password here must match the password used for generating the new private and public keys.

  6. Configure The Update Framework (TUF) service to use the new Fulcio certificate.

    1. Set up your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:

      $ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
    5. Extract the active Fulcio certificate from OpenShift:

      $ oc get secret $(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > $ACTIVE_CERT_NAME
    6. Expire the old certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "$ACTIVE_CERT_NAME" \
        --fulcio-uri "https://fulcio.rhtas" \
        --fulcio-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new Fulcio certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "new-fulcio.cert.pem" \
        --fulcio-uri "https://fulcio.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    9. Delete the working directory:

      $ rm -r $WORK
  7. Update the Securesign Fulcio configuration:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyRef",
        "value": {"name": "fulcio-config", "key": "private"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyPasswordRef",
        "value": {"name": "fulcio-config", "key": "password"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/caRef",
        "value": {"name": "fulcio-config", "key": "cert"}
    },
    {
        "op": "replace",
        "path": "/spec/ctlog/rootCertificates",
        "value": [{"name": "fulcio-config", "key": "cert"}]
    }
    ]
    EOF
  8. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  9. Wait for the Fulcio server to redeploy:

    $ oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready
    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
  10. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.

1.3.4. Rotating the Timestamp Authority signer key and certificate chain

You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Generate a new certificate chain, and a new signer key.

    Important

    The new certificate and keys must have unique file names.

    1. Create a temporary working directory:

      $ mkdir certs && cd certs
    2. Create the root certificate authority (CA) private key, and set a password:

      $ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
      -keyout rootCA.key.pem -out rootCA.crt.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
      -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"

      Replace CHANGE_ME with a new password.

    3. Create the intermediate CA private key and certificate signing request (CSR), and set a password:

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"

      Replace CHANGE_ME with a new password.

    4. Sign the intermediate CA certificate with the root CA:

      $ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
      -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
      -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
      -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.

    5. Create the leaf CA private key and CSR, and set a password:

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout leafCA.key.pem -out leafCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"
    6. Sign the leaf CA certificate with the intermediate CA:

      $ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
        -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
        -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
        -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.

    7. Create the certificate chain by combining the newly created certificates together:

      $ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
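
    Optionally, you can sanity-check the new chain before storing it. A quick verification sketch with openssl, using the files generated in the previous sub-steps:

    $ openssl verify -CAfile rootCA.crt.pem -untrusted intermediateCA.crt.pem leafCA.crt.pem

    A successful run prints leafCA.crt.pem: OK.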
  5. Create a new secret resource with the signer key:

    $ oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem
  6. Create a new secret resource with the new certificate chain:

    $ oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-tsa.certchain.pem
  7. Create a new secret resource for the password:

    $ oc create secret generic rotated-password --from-literal=rotated-password=CHANGE_ME

    Replace CHANGE_ME with the leaf CA private key password, which pairs with the signer key stored in the previous steps.

  8. Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:

    $ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
    $ export TSA_URL=$(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp
    $ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME
  9. Update the Securesign TSA configuration:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/tsa/signer/certificateChain",
            "value": {
                "certificateChainRef" : {"name": "rotated-cert-chain", "key": "rotated-cert-chain"}
            }
        },
        {
            "op": "replace",
            "path": "/spec/tsa/signer/file",
            "value": {
                    "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"},
                    "passwordRef": {"name": "rotated-password", "key": "rotated-password"}
                }
        }
    ]
    EOF
  10. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  11. Wait for the TSA server to redeploy with the new signer key and certificate chain:

    $ oc get pods -w -l app.kubernetes.io/name=tsa-server
  12. Get the new certificate chain:

    $ export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pem
    $ curl $TSA_URL/certchain -o $NEW_CERT_CHAIN_NAME
  13. Configure The Update Framework (TUF) service to use the new TSA certificate chain.

    1. Set up your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Expire the old TSA certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --tsa-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    5. Add the new TSA certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    6. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    7. Delete the working directory:

      $ rm -r $WORK
  14. Update the cosign configuration with the new TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now you are ready to sign and verify artifacts by using the new TSA signer key and certificate.
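
    For example, you can request a signed timestamp from the rotated TSA while signing a container image. This is a hedged sketch, where IMAGE_REFERENCE is a placeholder for an image reference that you can push to:

    $ cosign sign --timestamp-server-url "$TSA_URL" IMAGE_REFERENCE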

1.4. The Policy Controller

As a systems administrator, it is important to control how and when objects get created within your OpenShift Container Platform environment, and within your software supply chain. Starting with Red Hat Trusted Artifact Signer (RHTAS) 1.3, you can run the Policy Controller admission controller to enforce policies by using verifiable supply-chain metadata. Once you install the Policy Controller Operator and create the required resources, you can start enforcing security policies across your software supply chain.

Important

The Policy Controller is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

1.4.1. About the Policy Controller

The Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator is a Red Hat OpenShift Container Platform admission controller designed to enforce policies by using supply-chain metadata. Essentially, the RHTAS Policy Controller acts as a gatekeeper for your Red Hat OpenShift cluster by ensuring that deployed workloads adhere to your security policies.

The RHTAS Policy Controller has these key features:

Easy integration with the RHTAS service
The Policy Controller Operator uses the established, trusted, and transparent services provided by RHTAS, such as Rekor’s transparency log, and Fulcio’s short-lived certificates for stronger signature validation. You can also take advantage of Trusted Artifact Signer’s secure Trust Root as a source of public keys and certificates used in artifact verification, along with auditing Rekor’s transparency log.
Verification of container image signatures
The RHTAS Policy Controller resolves container image tags to validate that the container image being run does not differ from what was signed by the RHTAS service. You can automatically verify signatures and attestations for container images, enforce these checks on a per-namespace basis, and create multiple policies to fit your security needs. You can create custom resources, such as ClusterImagePolicy, to define the rules for validating container images.
Defining and enforcing workload policies
You can define and enforce policies to restrict what container images can run in your Red Hat OpenShift cluster. One such requirement could be to only allow specified images to run that match a certain signing key, and to verify attestations. You can choose to enforce strict policies, or use warning mode to better understand how a policy will impact your environment. You can also define and enforce policies based on other supply chain metadata.

1.4.2. Installing the Policy Controller Operator

Before you can start creating policies, and enforcing them, you need to install the Policy Controller Operator by using the Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to the OpenShift web console with the cluster-admin role.

Procedure

  1. Log in to the OpenShift web console with a user that has the cluster-admin role.
  2. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  3. In the search field, type policy-controller, and click the Policy Controller Operator tile provided by Red Hat.
  4. Click the Install button to show the operator details.
  5. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  6. Once the installation finishes, you can create the Policy Controller resources.
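
    Optionally, you can confirm the installation from the command line. This check assumes the Operator was installed into the default openshift-operators namespace:

    $ oc get csv -n openshift-operators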

1.4.3. Creating the Policy Controller resources

After installing the Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator, you need to create three new resources: the base Policy Controller resource, the cluster image policy resource, and the Trust Root resource. This procedure guides you through creating a basic set of these resources.

Note

By default, the Policy Controller resyncs the cluster image policies every 10 hours.

Prerequisites

  • Installation of the RHTAS Policy Controller Operator.
  • A workstation with the oc, curl, and tuftool binaries installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Create and switch to the policy-controller-operator namespace:

    $ oc new-project policy-controller-operator ; oc project policy-controller-operator
  3. Create a basic Policy Controller resource.

    1. Configure the Policy Controller to watch your namespaces that match the defined label selector under spec.policy-controller.webhook.namespaceSelector.matchExpressions:

      ...
      spec:
        policy-controller:
          ...
          webhook:
            ...
            namespaceSelector:
              matchExpressions:
                - key: policy.rhtas.com/include
                  operator: In
                  values: ["true"]
      ...
      $ cat <<EOF | oc apply -f -
      apiVersion: rhtas.charts.redhat.com/v1alpha1
      kind: PolicyController
      metadata:
        name: policycontroller-sample
      spec:
        policy-controller:
          cosign:
            webhookName: "policy.rhtas.com"
          webhook:
            name: webhook
            extraArgs:
              webhook-name: policy.rhtas.com
              mutating-webhook-name: defaulting.clusterimagepolicy.rhtas.com
              validating-webhook-name: validating.clusterimagepolicy.rhtas.com
            failurePolicy: Fail
            namespaceSelector:
              matchExpressions:
                - key: policy.rhtas.com/include
                  operator: In
                  values: ["true"]
            webhookNames:
              defaulting: "defaulting.clusterimagepolicy.rhtas.com"
              validating: "validating.clusterimagepolicy.rhtas.com"
      EOF
      Important

      You must create this resource in the policy-controller-operator namespace.

    2. Add the policy.rhtas.com/include: "true" label to the namespace that you want watched by the Policy Controller:

      apiVersion: v1
      kind: Namespace
      metadata:
        labels:
          policy.rhtas.com/include: "true"
        name: example-namespace
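
      Alternatively, you can add the label to an existing namespace from the command line:

      $ oc label namespace example-namespace policy.rhtas.com/include=true
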
    3. If you have a custom Certificate Authority (CA) bundle or self-signed certificates, then you can add your ConfigMap name and key under the spec.policy-controller.webhook.registryCaBundle section of the Policy Controller resource:

      ...
      spec:
        policy-controller:
          ...
          webhook:
            registryCaBundle:
              name: CONFIGMAP_NAME
              key: CA_BUNDLE_KEY
      ...
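
      For example, you can create the referenced ConfigMap from a local CA bundle file. This is a sketch, where registry-ca-bundle and ca-bundle are placeholder values for CONFIGMAP_NAME and CA_BUNDLE_KEY, and it assumes the ConfigMap lives in the same namespace as the Policy Controller resource:

      $ oc create configmap registry-ca-bundle --from-file=ca-bundle=ca-bundle.crt -n policy-controller-operator
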
  4. Create a Trust Root resource. You have three options for creating the Trust Root resource: using a custom TUF repository, using your own keys, or using a serialized TUF root.

    1. Configure these environment variables from the RHTAS services:

      $ export TUF_URL="$(oc -n trusted-artifact-signer get tuf -o jsonpath='{.items[0].status.url}')"
      $ export BASE64_TUF_ROOT="$(curl -fsSL "$TUF_URL/root.json" | base64 -w0)"
      $ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')"
      $ export CTLOG_URL="http://ctlog.trusted-artifact-signer.svc.cluster.local"
      $ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')"
      $ export TSA_URL="$(oc -n trusted-artifact-signer get timestampAuthorities -o jsonpath='{.items[0].status.url}')"
    2. Option 1. Create the TrustRoot resource for a custom TUF repository:

      $ cat <<EOF | oc apply -f -
      apiVersion: policy.sigstore.dev/v1alpha1
      kind: TrustRoot
      metadata:
        name: trust-root
      spec:
        remote:
          mirror: $TUF_URL
          root: |
            $BASE64_TUF_ROOT
      EOF
    3. Option 2. Create a Trust Root with your own keys.

      1. Create and apply the TrustRoot resource using this template:

        apiVersion: policy.sigstore.dev/v1alpha1
        kind: TrustRoot
        metadata:
          name: trust-root
        spec:
          sigstoreKeys:
            certificateAuthorities:
            - subject:
                organization: fulcio-organization
                commonName: fulcio-common-name
              uri: $FULCIO_URL
              certChain: |-
                FULCIO_CERT_CHAIN
            ctLogs:
            - baseURL: $CTLOG_URL
              hashAlgorithm: sha-256
              publicKey: |-
                CTFE_PUBLIC_KEY
            tLogs:
            - baseURL: $REKOR_URL
              hashAlgorithm: sha-256
              publicKey: |-
                REKOR_PUBLIC_KEY
            timestampAuthorities:
            - subject:
                organization: tsa-organization
                commonName: tsa-common-name
              uri: $TSA_URL
              certChain: |-
                TSA_CERT_CHAIN
        Note

        Substitute the public keys and certificate chain values with your specific values for your RHTAS environment.
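
        For example, you can retrieve most of these values from your running RHTAS services. This is a sketch that uses the service API endpoints; the CT log public key is typically published as the ctfe.pub target in your TUF repository:

        $ curl -sf "$REKOR_URL/api/v1/log/publicKey" -o rekor.pub
        $ curl -sf "$FULCIO_URL/api/v1/rootCert" -o fulcio-cert-chain.pem
        $ curl -sf "$TSA_URL/api/v1/timestamp/certchain" -o tsa-cert-chain.pem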

    4. Option 3. Create a Trust Root for a serialized TUF root.

      1. Create a temporary directory to contain a clone of your TUF root:

        $ mkdir -p tuf-repo
      2. Download and clone the TUF repository:

        $ curl -s $TUF_URL/root.json > root.json
        $ tuftool clone --metadata-url=$TUF_URL --metadata-dir=tuf-repo --targets-url=$TUF_URL/targets --targets-dir=tuf-repo/targets --root=root.json
      3. Archive and encode the TUF repository:

        $ tar -C ./tuf-repo -czf tuf-repo.tgz .
        $ export MIRROR_FS=$(base64 -w0 tuf-repo.tgz)
      4. Create the TrustRoot resource:

        $ cat <<EOF | oc apply -f -
        apiVersion: policy.sigstore.dev/v1alpha1
        kind: TrustRoot
        metadata:
          name: trust-root
        spec:
          repository:
            root: |-
              $BASE64_TUF_ROOT
            mirrorFS: |-
              $MIRROR_FS
        EOF
  5. Create a basic Policy Controller cluster image policy resource.

    1. Configure these environment variables for Fulcio, Rekor, the Trust Root, and the OpenID Connect (OIDC) issuer and subject:

      $ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')"
      $ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')"
      $ export TRUST_ROOT_RESOURCE="trust-root"
      $ export OIDC_ISSUER_URL="https://ISSUER_URL"
      $ export OIDC_SUBJECT="SUBJECT"
    2. Create the ClusterImagePolicy resource:

      $ cat <<EOF | oc apply -f -
      apiVersion: policy.sigstore.dev/v1beta1
      kind: ClusterImagePolicy
      metadata:
        name: cluster-image-policy
      spec:
        images:
          - glob: "**"
        authorities:
          - keyless:
              url: $FULCIO_URL
              trustRootRef: $TRUST_ROOT_RESOURCE
              identities:
                - issuer: $OIDC_ISSUER_URL
                  subject: $OIDC_SUBJECT
            ctlog:
              url: $REKOR_URL
              trustRootRef: $TRUST_ROOT_RESOURCE
            rfc3161timestamp:
              trustRootRef: $TRUST_ROOT_RESOURCE
      EOF
      Note

      The glob value of ** matches all container images.
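
      To see the policy in action, you can try running an unsigned image in a watched namespace. This is a sketch, where UNSIGNED_IMAGE_REFERENCE is a placeholder for any image that was not signed by your RHTAS service; the validating webhook should deny the request with a policy failure message:

      $ oc run unsigned-test --image=UNSIGNED_IMAGE_REFERENCE -n example-namespace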

1.5. Signing and verifying AI/ML models

As a systems administrator, you can use Red Hat Trusted Artifact Signer (RHTAS) to sign and verify artificial intelligence (AI) and machine learning (ML) models. You can integrate AI/ML model signing and verification into your Continuous Integration and Continuous Deployment (CI/CD) pipelines, or perform it by using the command-line interface (CLI). Doing this can enhance the security of your software supply chain workloads running on Red Hat OpenShift by only allowing valid AI/ML models.

Important

Signing and verifying AI/ML models by using the Model Validation Operator or the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

1.5.1. Building a client trust configuration for Model Validation

Before signing artificial intelligence (AI) and machine learning (ML) models with Red Hat Trusted Artifact Signer (RHTAS), you need to generate a client trust configuration that uses The Update Framework (TUF) Trust Root for your RHTAS environment.

Important

On RHTAS 1.2 and earlier, the Rekor key is configured to use the SHA-384 hash algorithm. You must rotate the Rekor signer key to use SHA-256. If you do not change the hash algorithm for Rekor, then verifying artifacts causes mismatch errors.

For more information about this issue, see the RHTAS Release Notes.

Prerequisites

  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Configure your shell environment:

    $ export WORK="${HOME}/trustroot-example"
    $ export SIGNED_TRUST_ROOT="${WORK}/root/trusted_root.json"
    $ export TUF_REPO="${WORK}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}' -n trusted-artifact-signer)"
    $ export CA_URL=$(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)
    $ export TLOG_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)
    $ export OIDC_URL="OIDC_ISSUER_URL"

    Replace OIDC_ISSUER_URL with your OIDC provider’s URL address.

  3. Create the temporary TUF directories:

    $ mkdir -p "${WORK}/root/" "${TUF_REPO}"
  4. Download the signed target trust root file to the temporary TUF directories:

    $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}" -n trusted-artifact-signer
    $ cp "${TUF_REPO}/targets/DIGEST.trusted_root.json" "${SIGNED_TRUST_ROOT}"

    An example signed target trust root file name looks similar to c03afd04e353889093e5b16b019656b23a57.trusted_root.json, where your DIGEST value will differ.
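
    You can list the TUF targets directory to find the exact file name in your environment, for example:

    $ ls "${TUF_REPO}/targets/" | grep trusted_root.json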

  5. Create a script for making the client trust configuration used by the CLI:

    $ cat > make_trust_config.sh <<'EOF'
    #!/bin/bash
    
    # Usage: ./make_trust_config.sh <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl]
    
    if [ "$#" -lt 2 ]; then
        echo "Usage: $0 <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl]"
        exit 1
    fi
    
    INPUT_FILE="$1"
    OUTPUT_FILE="$2"
    CA_URL=${3:-${CA_URL:-""}}
    OIDC_URL=${4:-${OIDC_URL:-""}}
    TLOG_URL=${5:-${TLOG_URL:-""}}
    
    # Check for jq
    if ! command -v jq &> /dev/null; then
        echo "Error: 'jq' is required but not installed."
        exit 1
    fi
    
    jq -n \
      --argjson trustedRoot "$(cat "$INPUT_FILE")" \
      --arg caUrl "$CA_URL" \
      --arg oidcUrl "$OIDC_URL" \
      --arg tlogUrl "$TLOG_URL" \
      '{
        mediaType: "application/vnd.dev.sigstore.clienttrustconfig.v0.1+json",
        trustedRoot: $trustedRoot,
        signingConfig: {
          caUrl: $caUrl,
          oidcUrl: $oidcUrl,
          tlogUrls: [$tlogUrl]
        }
      }' > "$OUTPUT_FILE"
    EOF
  6. Make the script executable:

    $ chmod u+x make_trust_config.sh
  7. Run the make_trust_config.sh script:

    $ ./make_trust_config.sh $SIGNED_TRUST_ROOT trust_config.json

    A new trust_config.json file is created in the current working directory.
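
    Optionally, confirm that the file is well formed by checking its media type with jq:

    $ jq -r '.mediaType' trust_config.json

    The command prints application/vnd.dev.sigstore.clienttrustconfig.v0.1+json.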

  8. You can now start signing and verifying AI/ML models by using the command-line interface.

1.5.2. Signing and verifying AI/ML models by using the CLI

With Red Hat Trusted Artifact Signer (RHTAS), you can sign and verify signatures on artificial intelligence (AI) and machine learning (ML) models by using the model-transparency command-line interface (CLI). For the CLI to sign and verify AI and ML models, it must know about your Trust Root. The signing and verifying commands run inside a container image, and do not require a locally installed binary.

Important

Signing and verifying AI/ML models by using the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

Important

On RHTAS 1.2 and earlier, the Rekor key is configured to use the SHA-384 hash algorithm. You must rotate the Rekor signer key to use SHA-256. If you do not change the hash algorithm for Rekor, then verifying artifacts causes mismatch errors.

For more information about this issue, see the RHTAS Release Notes.

Prerequisites

  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • An OpenID Connect (OIDC) identity for retrieving tokens or client credentials.
  • A workstation with the podman binary installed.

Procedure

  1. Configure your shell environment:

    $ export OIDC_ISSUER="OIDC_ISSUER_URL"
    $ export MODEL_IMAGE="registry.redhat.io/rhtas/model-transparency-rhel9@sha256:6db7fa2b956875a6f507811166b47b164d463dea78ab4403c6d7648d838b8acb"
    $ export MODEL_DIR="PATH_TO_MODEL_DIRECTORY"
    $ export TRUST_CFG="$(pwd)/trust_config.json"
    $ export SIG_PATH="$MODEL_DIR/model.sig"

    Replace OIDC_ISSUER_URL with your OIDC provider URL address.

    Replace PATH_TO_MODEL_DIRECTORY with the absolute path to the directory containing the AI/ML models.

  2. There are two options for signing a model by using the CLI: you can use an identity token with a client identifier, or just the client identifier. Using an identity token is the non-interactive way, whereas using only a client identifier is the interactive way.

    Note

    When using self-signed certificates or a custom Certificate Authority (CA), you must pass those certificates to the container to successfully sign an AI/ML model, as in the sketch that follows.
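
    One possible approach is to mount the CA bundle into the container and point the client at it. This is a hedged sketch, assuming your bundle file is named ca-bundle.crt and that the containerized client honors the conventional REQUESTS_CA_BUNDLE variable used by Python HTTP clients; append the remaining sign or verify arguments as shown in the following options:

    $ podman run --rm \
    -v "$PWD/ca-bundle.crt":/tmp/ca-bundle.crt:Z,ro \
    -e REQUESTS_CA_BUNDLE=/tmp/ca-bundle.crt \
    ...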

    1. Option 1. Signing AI/ML models with an identity token:

      $ podman run --rm \
      --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
      -v "$MODEL_DIR":/model:Z,U \
      -v "$TRUST_CFG":/trust_config.json:Z,ro \
      -w /model "$MODEL_IMAGE" \
      sign sigstore \
      --trust_config /trust_config.json \
      --signature /model/model.sig \
      --identity_token "OIDC_TOKEN" \
      --client_id CLIENT_ID \
      /model

      Replace OIDC_TOKEN with your OIDC authentication token.

      Replace CLIENT_ID with your OIDC client identifier.

    2. Option 2. Signing AI/ML models by using a client identifier:

      $ podman run --rm -it \
      --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
      -v "$MODEL_DIR":/model:Z,U \
      -v "$TRUST_CFG":/trust_config.json:Z,ro \
      -w /model "$MODEL_IMAGE" \
      sign sigstore \
      --trust_config "/trust_config.json" \
      --signature "/model/model.sig" \
      --client_id CLIENT_ID \
      /model

      Replace CLIENT_ID with your OIDC client identifier.

  3. Verify a model signature by using the CLI:

    $ podman run --rm -it \
    --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
    -v "$MODEL_DIR":/model:Z,U \
    -v "$TRUST_CFG":/trust_config.json:Z,ro \
    -w /model "$MODEL_IMAGE" \
    verify sigstore \
    --trust_config "/trust_config.json" \
    --signature "/model/model.sig" \
    --identity IDENTITY \
    --identity_provider "$OIDC_ISSUER" \
    /model

    Replace IDENTITY with an email address or with a SPIFFE or URI subject.

1.5.3. Installing and configuring the Model Validation Operator

The Model Validation Operator gives you the ability to verify signed artificial intelligence (AI) and machine learning (ML) models at runtime in Red Hat OpenShift environments. This Operator allows you to create a ModelValidation custom resource (CR) in a project namespace, and then add a label to your pod for validation. A mutating admission webhook injects a short-lived step that validates the AI/ML model and its signature, using Red Hat Trusted Artifact Signer (RHTAS) and The Update Framework (TUF) to verify the signature, identity, and issuer. If the validation process succeeds, then the pod proceeds; if validation fails, then pod admission is denied.

Important

The Model Validation Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

Prerequisites

  • Red Hat OpenShift Container Platform 4.15 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Log in to the OpenShift web console with a user that has the cluster-admin role.
  2. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  3. In the search field, type Model Validation Operator, and click the tile that is displayed.
  4. Click the Install button to show the operator details.
  5. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  6. Once the installation finishes, click View Operator.
  7. Add the AI/ML model, its signature, and your signed Trust Root configuration to the namespace where you want validation to occur. This is typically done by creating a Persistent Volume Claim (PVC) on the OpenShift cluster, and copying these files to the PVC, as in the sketch that follows.
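
    For example, a minimal Persistent Volume Claim might look like this sketch, where NAMESPACE is your target namespace, the claim name is a placeholder that you can reuse as PVC_NAME later in this procedure, and the storage size is an assumption:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: model-storage-example
      namespace: NAMESPACE
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
    EOF
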
  8. Create a ModelValidation CR in the namespace where you want validation to occur.

    1. Click the modelvalidation tab, and click the Create modelvalidation button.
    2. On the Create modelvalidation page, select YAML view. Update the YAML file accordingly:

      apiVersion: ml.sigstore.dev/v1alpha1
      kind: ModelValidation
      metadata:
        name: model-validation-example
        namespace: NAMESPACE
      spec:
        config:
          sigstoreConfig:
            certificateIdentity: "IDENTITY"
            certificateOidcIssuer: "OIDC_ISSUER"
          clientTrustConfig:
            trustConfigPath: SIGNED_TRUST_ROOT
        model:
          path: PATH_TO_MODEL
          signaturePath: PATH_TO_MODEL_SIGNATURE

      Replace NAMESPACE with the same namespace where your workloads run.

      Replace IDENTITY with the signer’s email address.

      Replace OIDC_ISSUER with your OIDC provider’s issuer URL address.

      Replace SIGNED_TRUST_ROOT with the signed Trust Root target file, for example, /data/trust-config.json.

      Replace PATH_TO_MODEL with the path to the model file, for example, /data/model.onnx.

      Replace PATH_TO_MODEL_SIGNATURE with the path to the model’s signature file, for example, /data/model.sig.

    3. Click the Create button.
  9. From your terminal session, create a new pod CR in the namespace where you want to trigger a validation check. Update this example YAML file with your specific information:

    apiVersion: v1
    kind: Pod
    metadata:
      name: model-validation-pod-example
      namespace: NAMESPACE
      labels:
        validation.ml.sigstore.dev/ml: "model-validation-example"
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: model-storage-example
          mountPath: PATH_TO_WORKLOAD_VOLUME
      volumes:
      - name: model-storage-example
        persistentVolumeClaim:
          claimName: PVC_NAME

    You configure the webhook by using the validation.ml.sigstore.dev/ml label key, with its value set to the name of the ModelValidation CR created earlier, surrounded by double quotes.

    Replace NAMESPACE, PATH_TO_WORKLOAD_VOLUME, and PVC_NAME with values appropriate to your environment.

    1. Create the new pod by applying the CR file:

      oc apply -f PATH_TO_CR_FILE
  10. Now the webhook can intercept pod create and update requests. The Model Validation Operator injects validation steps that read the AI/ML model and its signature, and check them against your Trust Root for RHTAS. If the validation check succeeds, then the pod creation or modification proceeds.
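
    To observe the admission decision, you can inspect the recent events in the namespace, for example:

    $ oc get events -n NAMESPACE --sort-by=.lastTimestamp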

1.6. Using your own certificate authority bundle

You can bring your own organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with the Red Hat Trusted Artifact Signer (RHTAS) service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • Your CA root certificate.
  • A workstation with the oc binary installed.

Procedure

  1. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  3. Create a new ConfigMap by using your organization’s CA root certificate bundle:

    $ oc create configmap custom-ca-bundle --from-file=ca-bundle.crt
    Important

    The certificate filename must be ca-bundle.crt.

  4. Open the Securesign resource for editing:

    $ oc edit Securesign securesign-sample
    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Securesign
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...
    2. Save, and quit the editor.
  5. Open the Fulcio resource for editing:

    $ oc edit Fulcio securesign-sample
    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Fulcio
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...
    2. Save, and quit the editor.
  6. Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.
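
    You can watch the pods restart to know when the reconfiguration finishes, for example:

    $ oc get pods -n trusted-artifact-signer -w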