Administration Guide


Red Hat Trusted Artifact Signer 1

General administration for the Trusted Artifact Signer service

Red Hat Trusted Documentation Team

Abstract

The Administration Guide gives system administrators guidance on maintaining the Trusted Artifact Signer service running on Red Hat platforms.
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.

Preface

Welcome to the Red Hat Trusted Artifact Signer Administration Guide!

This guide can help you with the normal maintenance routines for Red Hat’s Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. You can find information about deploying the Trusted Artifact Signer service in the Deployment Guide.

Chapter 1. Protect your signing data

As a systems administrator, protecting the signing data of your software supply chain is critical in cases of data loss due to hardware failure or accidental deletion. The OpenShift API Data Protection (OADP) product provides data protection to applications running on Red Hat OpenShift. Using the OADP product helps get your software developers back to signing and verifying code as quickly as possible. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.

1.1. Installing and configuring the OADP operator

The OpenShift API Data Protection (OADP) operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP operator to back up and restore your Trusted Artifact Signer data.

Important

This procedure uses Amazon Web Services (AWS) Simple Storage Service (S3) to create a bucket for illustrating how to configure the OADP operator. You can choose to use a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.

Prerequisites

  • Red Hat OpenShift Container Platform version 4.13 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • The ability to create an S3-compatible bucket.
  • A workstation with the oc and aws binaries installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  2. Create a new bucket:

    Syntax

    export BUCKET=NEW_BUCKET_NAME
    export REGION=AWS_REGION_ID
    export USER=OADP_USER_NAME
    
    aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION

    Example

    $ export BUCKET=example-bucket-name
    $ export REGION=us-east-1
    $ export USER=velero
    $
    $ aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION

  3. Create a new user:

    Example

    $ aws iam create-user --user-name $USER

  4. Create a new policy:

    Example

    $ cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF

  5. Associate this policy to the new user:

    Example

    $ aws iam put-user-policy \
    --user-name $USER \
    --policy-name velero \
    --policy-document file://velero-policy.json

  6. Create an access key:

    Example

    $ aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'
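
    The command prints two export statements. Run them in your current shell so that the next step can read the credentials. The values shown here are AWS's documented example placeholders, not real keys:

    Example

    $ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    $ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFJ/K7MDENG/bPxRfiCYEXAMPLEKEY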

  7. Create a credentials file with your AWS secret key information:

    Syntax

    cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=$AWS_ACCESS_KEY_ID
    aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
    EOF

  8. Log in to the OpenShift web console with a user that has the cluster-admin role.
  9. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  10. In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
  11. Click the Install button to show the operator details.
  12. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  13. After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:

    Example

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

  14. From the OpenShift web console, click the View Operator button.
  15. Click Create instance on the DataProtectionApplication (DPA) tile.
  16. On the Create DataProtectionApplication page, select YAML view.
  17. Edit the following values in the resource file:

    1. Under the metadata section, replace velero-sample with velero.
    2. Under the spec.configuration.nodeAgent section, replace restic with kopia.
    3. Under the spec.configuration.velero section, add resourceTimeout: 10m.
    4. Under the spec.configuration.velero.defaultPlugins section, add - csi.
    5. Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
    6. Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
    7. Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix.
  18. Click the Create button.

1.2. Backing up your Trusted Artifact Signer data

With the OpenShift API Data Protection (OADP) operator installed and an instance deployed, you can create a volume snapshot resource, and a backup resource to backup your Red Hat Trusted Artifact Signer data.

Prerequisites

  • Red Hat OpenShift Container Platform version 4.13 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of the OADP operator.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  2. Find and edit the VolumeSnapshotClass resource:

    Example

    $ oc get VolumeSnapshotClass -n openshift-adp
    $ oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp

  3. Update the following values in the resource file:

    1. Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
    2. Save your changes, and quit the editor.
  4. Create a Backup resource:

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: rhtas-backup
      labels:
        velero.io/storage-location: velero-1
      namespace: openshift-adp
    spec:
      schedule: 0 7 * * *
      hooks: {}
      includedNamespaces:
      - trusted-artifact-signer
      includedResources: []
      excludedResources: []
      snapshotMoveData: true
      storageLocation: velero-1
      ttl: 720h0m0s
    EOF

    Add the schedule property to enable cron scheduling for running this backup. In this example, the backup resource runs every day at 7:00 a.m.

    By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify which resources to include or exclude by using the includedResources or excludedResources properties, respectively.

    Important

    Depending on the storage class of the backup target, persistent volumes must not be actively in use for the backup to succeed.
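
    After the backup runs, you can check its status from the command line. This is a quick check that assumes the backup resource name and namespace used in the example above; a phase of Completed means the backup finished successfully:

    Example

    $ oc get backups.velero.io rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'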

1.3. Restoring your Trusted Artifact Signer data

With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup of the RHTAS namespace, you can restore your data to an OpenShift cluster.

Prerequisites

  • Red Hat OpenShift Container Platform version 4.13 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of the RHTAS and OADP operators.
  • A backup of the RHTAS namespace.
  • A workstation with the oc binary installed.

Procedure

  1. Disable the RHTAS operator:

    Example

    $ oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators

  2. Create the Restore resource:

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: rhtas-restore
      namespace: openshift-adp
    spec:
      backupName: rhtas-backup
      includedResources: []
      restoreStatus:
        includedResources:
          - securesign.rhtas.redhat.com
          - trillian.rhtas.redhat.com
          - ctlog.rhtas.redhat.com
          - fulcio.rhtas.redhat.com
          - rekor.rhtas.redhat.com
          - tuf.rhtas.redhat.com
          - timestampauthority.rhtas.redhat.com
      excludedResources:
      - pod
      - deployment
      - nodes
      - route
      - service
      - replicaset
      - events
      - cronjob
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - pods
      - deployments
      restorePVs: true
      existingResourcePolicy: update
    EOF

  3. If restoring your RHTAS data to a different OpenShift cluster, do the following steps.

    1. Delete the secret for the Trillian database:

      Example

      $ oc delete secret securesign-sample-trillian-db-tls
      $ oc delete pod trillian-db-xxx

      Note

      The RHTAS operator recreates the secret and restarts the pod.

    2. Run the restoreOwnerReferences.sh script, which is listed in Section 1.4, "Restore owner references script".
  4. Enable the RHTAS operator:

    Example

    $ oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators

    Important

    Starting the RHTAS operator immediately after starting the restore ensures that the persistent volumes are claimed.
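
    You can monitor the restore the same way as a backup, assuming the resource names used in the example above; a phase of Completed means the restore finished successfully:

    Example

    $ oc get restores.velero.io rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'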

1.4. Restore owner references script

This Bash script restores the ownerReferences fields when you restore Red Hat Trusted Artifact Signer (RHTAS) data to a different OpenShift cluster.

#!/bin/bash

# List of resources to check
RESOURCES=("Fulcio" "Rekor" "Trillian" "TimestampAuthority" "CTlog" "Tuf")


function validate_owner() {
    local RESOURCE=$1
    local ITEM=$2
    local OWNER_NAME=$3

    # Check all the labels exist and are the same
    LABELS=("app.kubernetes.io/instance" "app.kubernetes.io/part-of" "velero.io/backup-name" "velero.io/restore-name")
    for LABEL in "${LABELS[@]}"; do
        PARENT_LABEL=$(oc get Securesign "$OWNER_NAME" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
        CHILD_LABEL=$(oc get $RESOURCE "$ITEM" -o json | jq -r ".metadata.labels[\"$LABEL\"]")

        if [[ -z "$CHILD_LABEL" || $CHILD_LABEL == "null" ]]; then
            echo "  $LABEL label missing in $RESOURCE"
            return 1
        elif [[ -z "$PARENT_LABEL" || $PARENT_LABEL == "null" ]]; then
            echo "  $LABEL label missing in Securesign"
            return 1
        elif [[ "$CHILD_LABEL" != "$PARENT_LABEL" ]]; then
            echo "  $LABEL labels not matching: $CHILD_LABEL != $PARENT_LABEL"
            return 1
        fi
    done

    return 0
}


for RESOURCE in "${RESOURCES[@]}"; do
    echo "Checking $RESOURCE ..."

    # Get all resources missing ownerReferences
    MISSING_REFS=$(oc get $RESOURCE -o json | jq -r '.items[] | select(.metadata.ownerReferences == null) | .metadata.name')

    for ITEM in $MISSING_REFS; do
        echo "  Missing ownerReferences in $RESOURCE/$ITEM"

        # Find the expected owner based on labels
        OWNER_NAME=$(oc get $RESOURCE "$ITEM" -o json | jq -r '.metadata.labels["app.kubernetes.io/name"]')

        if [[ -z "$OWNER_NAME" || "$OWNER_NAME" == "null" ]]; then
            echo "  Skipping $RESOURCE/$ITEM: name not found in labels"
            continue
        fi

        if ! validate_owner "$RESOURCE" "$ITEM" "$OWNER_NAME"; then
          echo "  Skipping ..."
          continue
        fi

        # Try to get the owner's UID from Securesign
        OWNER_UID=$(oc get Securesign "$OWNER_NAME" -o jsonpath='{.metadata.uid}' 2>/dev/null)

        if [[ -z "$OWNER_UID" || "$OWNER_UID" == "null" ]]; then
            echo "  Failed to find Securesign/$OWNER_NAME UID, skipping ..."
            continue
        fi

        echo "  Found owner: Securesign/$OWNER_NAME (UID: $OWNER_UID)"

        # Patch the object with the restored ownerReference
        oc patch $RESOURCE "$ITEM" --type='merge' -p "{
          \"metadata\": {
            \"ownerReferences\": [
              {
                \"apiVersion\": \"rhtas.redhat.com/v1alpha1\",
                \"kind\": \"Securesign\",
                \"name\": \"$OWNER_NAME\",
                \"uid\": \"$OWNER_UID\",
                \"controller\": true,
                \"blockOwnerDeletion\": true
              }
            ]
          }
        }"

        echo "Restored ownerReferences for $RESOURCE/$ITEM"
    done
done

echo "Done"

Chapter 2. Trusted Artifact Signer’s implementation of The Update Framework

Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software, and hosts the TUF repository of the individual targets.

By default, deploying the RHTAS operator in OpenShift creates a TUF repository and prepopulates the individual targets. The expiration date of all metadata files is 52 weeks from the time you deploy a Securesign instance. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
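
For example, you can list the targets that your TUF repository currently distributes. This is a quick sketch; it assumes the Tuf resource reports the repository route in its status.url field, and that the repository follows the standard TUF metadata layout:

$ export TUF_URL=$(oc get tuf -o jsonpath='{.items[0].status.url}')
$ curl -s $TUF_URL/targets.json | jq -r '.signed.targets | keys[]'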

Chapter 3. Updating The Update Framework metadata files

By default, The Update Framework (TUF) metadata files expire 52 weeks after the deployment date of a Securesign instance. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.

This procedure walks you through refreshing the root and non-root metadata files.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool

  2. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  4. Configure your shell environment:

    Example

    $ export WORK="${HOME}/trustroot-example"
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    
    $ export TIMESTAMP_EXPIRATION="in 10 days"
    $ export SNAPSHOT_EXPIRATION="in 26 weeks"
    $ export TARGETS_EXPIRATION="in 26 weeks"
    $ export ROOT_EXPIRATION="in 26 weeks"

    Set the expiration durations according to your requirements.

  5. Create a temporary TUF directory structure:

    Example

    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

  6. Download the TUF contents to the temporary TUF directory structure:

    Example

    $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
    $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"

  7. You can update the timestamp, snapshot, and targets metadata all in one command:

    Example

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --key "${KEYDIR}/snapshot.pem" \
      --key "${KEYDIR}/targets.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
      --targets-expires "${TARGETS_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"

    Note

    You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can just update the timestamp metadata file by running the following command:

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"
  8. Only update the root expiration date if it is about to expire:

    Example

    $ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"

    Note

    You can skip this step if the root file is not close to expiring.

  9. Update the root version:

    Example

    $ tuftool root bump-version "${ROOT}"

  10. Sign the root metadata file again:

    Example

    $ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"

  11. Set the new root version, and copy the root metadata file in place:

    Example

    $ export NEW_ROOT_VERSION=$(jq -r '.signed.version' "${ROOT}")
    $ cp "${ROOT}" "${TUF_REPO}/root.json"
    $ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"

  12. Upload these changes to the TUF server:

    Example

    $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
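
    Optionally, confirm that the new expiration dates took effect by checking your local copy of the refreshed metadata:

    Example

    $ jq -r '.signed.expires' "${TUF_REPO}/timestamp.json" "${TUF_REPO}/snapshot.json" "${TUF_REPO}/targets.json" "${TUF_REPO}/root.json"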

Chapter 4. Rotate your certificates and keys

As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can prevent key tampering, and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:

  • Rekor
  • Certificate Transparency log
  • Fulcio
  • Timestamp Authority

4.1. Rotating the Rekor signer key

You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.

Important

This procedure requires downtime to the Rekor service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the rekor-cli binary from the OpenShift cluster to your workstation.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the rekor-cli download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip rekor-cli-amd64.gz
      $ chmod +x rekor-cli-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli

  2. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool

  3. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  4. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  5. Get the Rekor URL:

    Example

    $ export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')

  6. Get the log tree identifier for the active shard:

    Example

    $ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)

  7. Scale down the Rekor instance, and set the log tree to the DRAINING state:

    Example

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING

    While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD).
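
    One way to watch the queue drain is to poll the tree size until it stops growing. This is a sketch that uses the loginfo fields shown elsewhere in this procedure:

    Example

    $ watch -n 10 "rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq .ActiveTreeSize"

    When the value stops increasing, the queue is empty.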

  8. Freeze the log tree:

    Example

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN

  9. Get the length of the frozen log tree:

    Example

    $ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)

  10. Get Rekor’s public key for the old shard:

    Example

    $ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')

  11. Create a new log tree:

    Example

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=rekor-tree)

    Now you have two log trees: one frozen tree, and a new tree that will become the active shard.

  12. Create a new private key:

    Example

    $ openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem

    Important

    The new key must have a unique file name.

  13. Create a new secret resource with the new signer key:

    Example

    $ oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem

  14. Update the Securesign Rekor configuration with the new tree identifier and the old sharding information:

    Example

    $ read -r -d '' SECURESIGN_PATCH_1 <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/rekor/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "add",
            "path": "/spec/rekor/sharding/-",
            "value": {
                "treeID": $OLD_TREE_ID,
                "treeLength": $OLD_SHARD_LENGTH,
                "encodedPublicKey": "$OLD_PUBLIC_KEY"
            }
        },
        {
            "op": "replace",
            "path": "/spec/rekor/signer/keyRef",
            "value": {"name": "rekor-signer-key", "key": "private"}
        }
    ]
    EOF

    Note

    If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a new separate update to remove it:

    Example

    $ read -r -d '' SECURESIGN_PATCH_2 <<EOF
    [
        {
            "op": "remove",
            "path": "/spec/rekor/signer/keyPasswordRef"
        }
    ]
    EOF

    Apply this update after applying the first update.
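
    For example, after applying the first update in the next step, apply this second update the same way:

    Example

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_2"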

  15. Update the Securesign instance:

    Example

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_1"

  16. Wait for the Rekor server to redeploy with the new signer key:

    Example

    $ oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready

  17. Get the new public key:

    Example

    $ export NEW_KEY_NAME=new-rekor.pub
    $ curl $(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o $NEW_KEY_NAME

  18. Configure The Update Framework (TUF) service to use the new Rekor public key.

    1. Set up your shell environment:

      Example

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

    2. Create a temporary TUF directory structure:

      Example

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

    3. Download the TUF contents to the temporary TUF directory structure:

      Example

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"

    4. Find the active Rekor signer key file name. Open the latest targets file, for example, 1.targets.json, within the local TUF repository. In this file, you will find the active Rekor signer key file name, for example, rekor.pub. Set an environment variable with this active Rekor signer key file name:

      Example

      $ export ACTIVE_KEY_NAME=rekor.pub

    5. Save the old public key to the active Rekor signer key file:

      Example

      $ echo $OLD_PUBLIC_KEY | base64 -d > $ACTIVE_KEY_NAME

    6. Expire the old Rekor signer key:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${ACTIVE_KEY_NAME}" \
        --rekor-uri "https://rekor.rhtas" \
        --rekor-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    7. Add the new Rekor signer key:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${NEW_KEY_NAME}" \
        --rekor-uri "https://rekor.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    8. Upload these changes to the TUF server:

      Example

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

    9. Delete the working directory:

      Example

      $ rm -r $WORK

  19. Update the cosign configuration with the updated TUF configuration:

    Example

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
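
    For example, a keyless verification of a previously signed container image. The image reference, signer identity, and OIDC issuer URL shown here are placeholders for your own values:

    Example

    $ cosign verify \
      --certificate-identity=signer@example.com \
      --certificate-oidc-issuer=https://oidc.example.com \
      registry.example.com/project/image:tag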

4.2. Rotating the Certificate Transparency log signer key

You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool

  2. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  4. Make a backup of the current CT log configuration, and keys:

    Example

    $ export SERVER_CONFIG_NAME=$(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}')
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem

  5. Capture the current tree identifier:

    Example

    $ export OLD_TREE_ID=$(oc get ctlog -o jsonpath='{.items[0].status.treeID}')

  6. Set the log tree to the DRAINING state:

    Example

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING

    While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD).

  7. Once the queue has been fully drained, freeze the log:

    Example

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN

  8. Create a new Merkle tree, and capture the new tree identifier:

    Example

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=ctlog-tree)

  9. Generate a new certificate, along with new public and private keys:

    Example

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
    $ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
    $ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.

  10. Update the CT log configuration.

    1. Open the config.txtpb file for editing.
    2. For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old path to the private key with ctfe-keys/private-0:

      Example

      ...
      log_configs:{
        # frozen log
        config:{
          log_id:2066075212146181968
          prefix:"trusted-artifact-signer-0"
          roots_pem_file:"/ctfe-keys/fulcio-0"
          private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private-0" password:"Example123"}}
          public_key:{der:"0Y0\x13\x06\x07*\x86H\xce=\x02\x01\x06\x08*\x86H\xce=\x03\x01\x07\x03B\x00\x04)'.\xffUJ\xe2s)\xefR\x8a\xfcO\xdcewȶy\xa7\x9d<\x13\xb0\x1c\x99\x96\xe4'\xe3v\x07:\xc8I+\x08J\x9d\x8a\xed\x06\xe4\xaeI:q\x98\xf4\xbc<o4VD\x0cr\xf9\x9c\xecxT\x84"}
          not_after_limit:{seconds:1728056285 nanos:012111000}
          ext_key_usages:"CodeSigning"
          log_backend_name:"trillian"
        }

      Note

      You can get the current time values for seconds and nanoseconds by running the following commands: date +%s and date +%N.

      Important

      The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.

    3. Copy and paste the frozen log config block, appending it to the configuration file to create a new entry.
    4. Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to ctfe-keys/private, remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range:

      Example

      ...
      log_configs:{
        # frozen log
        ...
        # new active log
        config:{
          log_id:NEW_TREE_ID
          prefix:"trusted-artifact-signer"
          roots_pem_file:"/ctfe-keys/fulcio-0"
          private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"ctfe-keys/private" password:"CHANGE_ME"}}
          ext_key_usages:"CodeSigning"
          not_after_start:{seconds:1713201754 nanos:155663000}
          log_backend_name:"trillian"
        }

      Add the NEW_TREE_ID, and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.

      Important

      The not_after_start field defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time.
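
      As the note earlier in this step mentions, you can derive both the seconds and nanos values from the current time. With GNU date, for example:

      Example

      $ date +"seconds:%s nanos:%N"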

  11. Create a new secret resource:

    Example

    $ oc create secret generic ctlog-config \
    --from-file=config=config.txtpb \
    --from-file=private=new-ctlog.pass.pem \
    --from-file=public=new-ctlog-public.pem \
    --from-file=fulcio-0=fulcio-0.pem \
    --from-file=private-0=private.pem \
    --from-file=public-0=public.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with the new private key password.

  12. Configure The Update Framework (TUF) service to use the new CT log public key.

    1. Set up your shell environment:

      Example

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

    2. Create a temporary TUF directory structure:

      Example

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

    3. Download the TUF contents to the temporary TUF directory structure:

      Example

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"

    4. Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub. Set an environment variable with this active CT log public key file name:

      Example

      $ export ACTIVE_CTFE_NAME=ctfe.pub

    5. Extract the active CT log public key from OpenShift:

      Example

      $ oc get secret $(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > $ACTIVE_CTFE_NAME

    6. Expire the old CT log signer key:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "$ACTIVE_CTFE_NAME" \
        --ctlog-uri "https://ctlog.rhtas" \
        --ctlog-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    7. Add the new CT log signer key:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "new-ctlog-public.pem" \
        --ctlog-uri "https://ctlog.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    8. Upload these changes to the TUF server:

      Example

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

    9. Delete the working directory:

      Example

      $ rm -r $WORK

  13. Update the Securesign CT log configuration with the new tree identifier:

    Example

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/ctlog/serverConfigRef",
            "value": {"name": "ctlog-config"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyRef",
            "value": {"name": "ctlog-config", "key": "private"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyPasswordRef",
            "value": {"name": "ctlog-config", "key": "password"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/publicKeyRef",
            "value": {"name": "ctlog-config", "key": "public"}
        }
    ]
    EOF

  14. Patch the Securesign instance:

    Example

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"

  15. Wait for the CT log server to redeploy:

    Example

    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready

  16. Update the cosign configuration with the updated TUF configuration:

    Example

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new CT log signer key.

4.3. Rotating the Fulcio certificate

You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool

  2. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  4. Generate a new certificate, along with new public and private keys:

    Example

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
    $ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
    $ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
    $ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.
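
    Optionally, inspect the new certificate's subject and validity dates before deploying it:

    Example

    $ openssl x509 -in new-fulcio.cert.pem -noout -subject -dates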

  5. Create a new secret:

    Example

    $ oc create secret generic fulcio-config \
    --from-file=private=new-fulcio.pass.pem \
    --from-file=cert=new-fulcio.cert.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with a new password.

    Note

    The password here must match the password used for generating the new private and public keys.

  6. Configure The Update Framework (TUF) service to use the new Fulcio certificate.

    1. Set up your shell environment:

      Example

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

    2. Create a temporary TUF directory structure:

      Example

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

    3. Download the TUF contents to the temporary TUF directory structure:

      Example

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"

    4. Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:

      Example

      $ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem

    5. Extract the active Fulcio certificate from OpenShift:

      Example

      $ oc get secret $(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > $ACTIVE_CERT_NAME

    6. Expire the old certificate:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "$ACTIVE_CERT_NAME" \
        --fulcio-uri "https://fulcio.rhtas" \
        --fulcio-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    7. Add the new Fulcio certificate:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "new-fulcio.cert.pem" \
        --fulcio-uri "https://fulcio.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    8. Upload these changes to the TUF server:

      Example

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

    9. Delete the working directory:

      Example

      $ rm -r $WORK

  7. Update the Securesign Fulcio configuration:

    Example

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyRef",
        "value": {"name": "fulcio-config", "key": "private"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyPasswordRef",
        "value": {"name": "fulcio-config", "key": "password"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/caRef",
        "value": {"name": "fulcio-config", "key": "cert"}
    },
    {
        "op": "replace",
        "path": "/spec/ctlog/rootCertificates",
        "value": [{"name": "fulcio-config", "key": "cert"}]
    }
    ]
    EOF

  8. Patch the Securesign instance:

    Example

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"

  9. Wait for the Fulcio server to redeploy:

    Example

    $ oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready
    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready

  10. Update the cosign configuration with the updated TUF configuration:

    Example

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.

4.4. Rotating the Timestamp Authority signer key and certificate chain

You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc and openssl binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      Example

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64

    3. Move and rename the binary to a location within your $PATH environment:

      Example

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool

  2. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  4. Generate a new certificate chain, and a new signer key.

    Important

    The new certificate and keys must have unique file names.

    1. Create a temporary working directory:

      Example

      $ mkdir certs && cd certs

    2. Create the root certificate authority (CA) private key, and set a password:

      Example

      $ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 \
      -keyout rootCA.key.pem -out rootCA.crt.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
      -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"

      Replace CHANGE_ME with a new password.

    3. Create the intermediate CA private key and certificate signing request (CSR), and set a password:

      Example

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"

      Replace CHANGE_ME with a new password.

    4. Sign the intermediate CA certificate with the root CA:

      Example

      $ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
      -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
      -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
      -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.

    5. Create the leaf CA private key and CSR, and set a password:

      Example

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout leafCA.key.pem -out leafCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"

    6. Sign the leaf CA certificate with the intermediate CA:

      Example

      $ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
        -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
        -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
        -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.

    7. Create the certificate chain by combining the newly created certificates:

      Example

      $ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-cert-chain.pem
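
      Optionally, verify that the new chain validates from the leaf up to the root:

      Example

      $ openssl verify -CAfile rootCA.crt.pem -untrusted intermediateCA.crt.pem leafCA.crt.pem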

  5. Create a new secret resource with the signer key:

    Example

    $ oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem

  6. Create a new secret resource with the new certificate chain:

    Example

    $ oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-cert-chain.pem

  7. Create a new secret resource for the password:

    Example

    $ oc create secret generic rotated-password --from-literal=rotated-password=CHANGE_ME

    Replace CHANGE_ME with the intermediate CA private key password.

  8. Find your active TSA certificate chain file name and the TSA URL, configure your shell environment with these values, and download the active certificate chain:

    Example

    $ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
    $ export TSA_URL=$(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp
    $ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME

  9. Update the Securesign TSA configuration:

    Example

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/tsa/signer/certificateChain",
            "value": {
                "certificateChainRef" : {"name": "rotated-cert-chain", "key": "rotated-cert-chain"}
            }
        },
        {
            "op": "replace",
            "path": "/spec/tsa/signer/file",
            "value": {
                    "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"},
                    "passwordRef": {"name": "rotated-password", "key": "rotated-password"}
                }
        }
    ]
    EOF

  10. Patch the Securesign instance:

    Example

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"

  11. Wait for the TSA server to redeploy with the new signer key and certificate chain:

    Example

    $ oc get pods -w -l app.kubernetes.io/name=tsa-server

  12. Get the new certificate chain:

    Example

    $ export NEW_CERT_CHAIN_NAME=new_tsa.certchain.pem
    $ curl $TSA_URL/certchain -o $NEW_CERT_CHAIN_NAME

  13. Configure The Update Framework (TUF) service to use the new TSA certificate chain.

    1. Set up your shell environment:

      Example

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

    2. Create a temporary TUF directory structure:

      Example

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

    3. Download the TUF contents to the temporary TUF directory structure:

      Example

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"

    4. Expire the old TSA certificate:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --tsa-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    5. Add the new TSA certificate:

      Example

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"

    6. Upload these changes to the TUF server:

      Example

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

    7. Delete the working directory:

      Example

      $ rm -r $WORK

  14. Update the cosign configuration with the updated TUF configuration:

    Example

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate chain.

Chapter 5. Using your own certificate authority bundle

You can use your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • Your CA root certificate.
  • A workstation with the oc binary installed.

Procedure

  1. Log in to OpenShift from the command line:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  2. Switch to the RHTAS project:

    Example

    $ oc project trusted-artifact-signer

  3. Create a new ConfigMap by using your organization’s CA root certificate bundle:

    Example

    $ oc create configmap custom-ca-bundle --from-file=ca-bundle.crt

    Important

    The certificate filename must be ca-bundle.crt.

  4. Open the Securesign resource for editing:

    Example

    $ oc edit Securesign securesign-sample

    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      Example

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Securesign
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...

    2. Save, and quit the editor.
  5. Open the Fulcio resource for editing:

    Example

    $ oc edit Fulcio securesign-sample

    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      Example

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Fulcio
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...

    2. Save, and quit the editor.
  6. Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.
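
    You can watch the pods restart to know when the reconfiguration finishes, for example:

    Example

    $ oc get pods -w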

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.