Administration Guide
General administration for the Trusted Artifact Signer service
Preface
Welcome to the Red Hat Trusted Artifact Signer Administration Guide.
This guide can help you with maintenance routines and tasks for Red Hat’s Trusted Artifact Signer (RHTAS) service running on Red Hat platforms. The content is organized by your installation platform.
You can find information about deploying the Trusted Artifact Signer service in the Deployment Guide.
Chapter 1. Red Hat OpenShift Container Platform
1.1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical, whether data loss comes from hardware failure or accidental deletion.
The OpenShift API Data Protection (OADP) product provides data protection to applications running on Red Hat OpenShift Container Platform. Using OADP helps you get software developers back to signing and verifying code as quickly as possible. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.
1.1.1. Installing and configuring the OADP operator
The OpenShift API Data Protection (OADP) operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP operator to back up and restore your Trusted Artifact Signer data.
This procedure uses Amazon Web Services (AWS) Simple Storage Service (S3) to create a bucket to illustrate how to configure the OADP operator. You can choose a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.
Prerequisites
- Red Hat OpenShift Container Platform 4.15 or later.
- Access to the OpenShift web console with the cluster-admin role.
- The ability to create an S3-compatible bucket.
- A workstation with the oc and aws binaries installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Create a new bucket:
Syntax
export BUCKET=NEW_BUCKET_NAME
export REGION=AWS_REGION_ID
export USER=OADP_USER_NAME
aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION
Example
export BUCKET=example-bucket-name
export REGION=us-east-1
export USER=velero
aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION
Create a new user:
Example
aws iam create-user --user-name $USER
Create a new policy:
Example
cat > velero-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
EOF
Associate this policy to the new user:
Example
aws iam put-user-policy \
  --user-name $USER \
  --policy-name velero \
  --policy-document file://velero-policy.json
Create an access key:
Example
aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'
Create a credentials file with your AWS secret key information:
Syntax
cat << EOF > ./credentials-velero
[default]
aws_access_key_id=$AWS_ACCESS_KEY_ID
aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
EOF
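Optionally, confirm that the new access key works before continuing; this check assumes the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables exported in the previous step are set in your shell:
Example
aws sts get-caller-identity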
- Log in to the OpenShift web console with a user that has the cluster-admin role.
- From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
- In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
- Click the Install button to show the operator details.
- Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:
Example
oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
- From the OpenShift web console, click the View Operator button.
- Click Create instance on the DataProtectionApplication (DPA) tile.
- On the Create DataProtectionApplication page, select YAML view.
Edit the following values in the resource file; a sketch of the finished resource follows this list:
- Under the metadata section, replace velero-sample with velero.
- Under the spec.configuration.nodeAgent section, replace restic with kopia.
- Under the spec.configuration.velero section, add resourceTimeout: 10m.
- Under the spec.configuration.velero.defaultPlugins section, add - csi.
- Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
- Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
- Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix.
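For reference, a minimal sketch of how the edited resource might look after these changes; the field layout follows the OADP DataProtectionApplication schema, and values such as example-bucket-name and us-east-1 are placeholders carried over from the earlier steps:
Example
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      resourceTimeout: 10m
      defaultPlugins:
        - openshift
        - aws
        - csi
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          profile: default
          region: us-east-1
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: example-bucket-name
          prefix: velero
  snapshotLocations:
    - velero:
        provider: aws
        config:
          profile: default
          region: us-east-1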
- Click the Create button.
1.1.2. Backing up your Trusted Artifact Signer data
With the OpenShift API Data Protection (OADP) operator installed and an instance deployed, you can create a volume snapshot resource and a backup resource to back up your Red Hat Trusted Artifact Signer (RHTAS) data.
Prerequisites
- Red Hat OpenShift Container Platform 4.15 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the OADP operator.
- A workstation with the oc binary installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Find and edit the VolumeSnapshotClass resource:
Example
oc get VolumeSnapshotClass -n openshift-adp
oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp
Update the following values in the resource file:
- Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
- Save your changes, and quit the editor.
Create a Backup resource:
Example
cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: rhtas-backup
  labels:
    velero.io/storage-location: velero-1
  namespace: openshift-adp
spec:
  schedule: 0 7 * * *
  hooks: {}
  includedNamespaces:
    - trusted-artifact-signer
  includedResources: []
  excludedResources: []
  snapshotMoveData: true
  storageLocation: velero-1
  ttl: 720h0m0s
EOF
Add the schedule property to enable Cron scheduling for running this backup. In the example, this backup resource runs every day at 7:00 a.m.
By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify which resources to include or exclude by using the includedResources or excludedResources properties, respectively.
Important: Depending on the storage class of the backup target, persistent volumes cannot be actively in use for the backup to succeed.
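You can confirm that the backup finished successfully by checking the phase reported on the Backup resource; this assumes the resource name and namespace from the example above:
Example
oc get backup rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'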
1.1.3. Restoring your Trusted Artifact Signer data
With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup resource for the RHTAS namespace, you can restore your data to an OpenShift cluster.
Prerequisites
- Red Hat OpenShift Container Platform version 4.15 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the RHTAS operator.
- Installation of the OADP operator.
- A backup resource of the trusted-artifact-signer namespace structure.
- A workstation with the oc binary installed.
Procedure
Disable the RHTAS operator:
Example
oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators
Create the Restore resource:
Example
cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: rhtas-restore
  namespace: openshift-adp
spec:
  backupName: rhtas-backup
  includedResources: []
  restoreStatus:
    includedResources:
      - securesign.rhtas.redhat.com
      - trillian.rhtas.redhat.com
      - ctlog.rhtas.redhat.com
      - fulcio.rhtas.redhat.com
      - rekor.rhtas.redhat.com
      - tuf.rhtas.redhat.com
      - timestampauthority.rhtas.redhat.com
  excludedResources:
    - pod
    - deployment
    - nodes
    - route
    - service
    - replicaset
    - events
    - cronjob
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - pods
    - deployments
  restorePVs: true
  existingResourcePolicy: update
EOF
If restoring your RHTAS data to a different OpenShift cluster, do the following steps.
Delete the secret for the Trillian database:
Example
oc delete secret securesign-sample-trillian-db-tls
oc delete pod trillian-db-xxx
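The xxx suffix in the pod name is generated, so look up the actual pod name first; a minimal sketch:
Example
oc get pods | grep trillian-db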
Note: The RHTAS operator recreates the secret and restarts the pod.
- Run the restoreOwnerReferences.sh script.
Enable the RHTAS operator:
Example
oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators
Important: Starting the RHTAS operator immediately after starting the restore ensures that the persistent volume is claimed.
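As with backups, you can confirm the restore finished by checking the phase reported on the Restore resource:
Example
oc get restore rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'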
1.2. The Update Framework
As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important to help you maintain a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.
1.2.1. Trusted Artifact Signer’s implementation of The Update Framework
Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software, and this HTTP server hosts the TUF repository containing the individual targets.
By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository and prepopulate the individual targets. The expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
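To see how much time remains before a refresh is due, you can read the expires field that every TUF metadata file carries. A quick check, assuming the TUF_URL variable holds the URL of your TUF repository and the jq binary is installed:
Example
for f in root targets snapshot timestamp; do
  echo -n "$f expires: "
  curl -s "$TUF_URL/$f.json" | jq -r '.signed.expires'
done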
1.2.2. Updating The Update Framework metadata files
By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks, and Red Hat recommends updating them more often than once a year.
This procedure walks you through refreshing the root and non-root metadata files.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc binary installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
oc project trusted-artifact-signer
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
export TIMESTAMP_EXPIRATION="in 10 days"
export SNAPSHOT_EXPIRATION="in 26 weeks"
export TARGETS_EXPIRATION="in 26 weeks"
export ROOT_EXPIRATION="in 26 weeks"
Set the expiration durations according to your requirements.
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
oc extract --to "${KEYDIR}/" secret/tuf-root-keys
oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
You can update the timestamp, snapshot, and targets metadata all in one command:
Example
tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
  --targets-expires "${TARGETS_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Note: You can also run the TUF metadata update on a subset of the TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can update just the timestamp metadata file by running the following command:
tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Only update the root expiration date if it is about to expire:
Example
tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"
Note: You can skip this step if the root file is not close to expiring.
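To decide, you can read the expiration date recorded in the local root metadata, assuming the jq binary is installed:
Example
jq -r '.signed.expires' "${ROOT}"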
Update the root version:
Example
tuftool root bump-version "${ROOT}"
Sign the root metadata file again:
Example
tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"
Set the new root version, and copy the root metadata file in place:
Example
export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
cp "${ROOT}" "${TUF_REPO}/root.json"
cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"
Upload these changes to the TUF server:
Example
oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
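Optionally, verify that the refreshed metadata is now being served; this sketch assumes the TUF_URL variable holds the URL of your TUF repository:
Example
curl -s "$TUF_URL/timestamp.json" | jq -r '.signed.expires'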
1.3. Rotate your certificates and keys
As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with new ones for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:
- Rekor
- Certificate Transparency log
- Fulcio
- Timestamp Authority
1.3.1. Rotating the Rekor signer key
You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the current log tree and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.
This procedure requires downtime to the Rekor service.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the rekor-cli binary from the OpenShift cluster to your workstation.
- Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the rekor-cli download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip rekor-cli-amd64.gz
chmod +x rekor-cli-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: The tuftool binary is only available for Linux operating systems.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc project trusted-artifact-signer
oc project trusted-artifact-signer
Get the Rekor URL:
Example
export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')
Get the log tree identifier for the active shard:
Example
export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
Set the log tree to the DRAINING state:
Example
oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING
While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty; one way to do this is shown after the following note.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
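One way to watch the queue settle is to poll the log size until it stops growing; a sketch that reuses the rekor-cli output from the earlier steps, with the polling interval as an assumption you can adjust:
Example
watch -n 30 "rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize"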
Freeze the log tree:
Example
oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN
Get the length of the frozen log tree:
Example
export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
Get Rekor’s public key for the old shard:
Example
export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
Create a new log tree:
Example
export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=rekor-tree)
Now you have two log trees, one frozen tree, and a new tree that will become the active shard.
Create a new private key:
Example
openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem
Important: The new key must have a unique file name.
Create a new secret resource with the new signer key:
Example
oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem
Update the Securesign Rekor configuration with the new tree identifier and the old sharding information:
Example
read -r -d '' SECURESIGN_PATCH_1 <<EOF
[
  {
    "op": "replace",
    "path": "/spec/rekor/treeID",
    "value": $NEW_TREE_ID
  },
  {
    "op": "add",
    "path": "/spec/rekor/sharding/-",
    "value": {
      "treeID": $OLD_TREE_ID,
      "treeLength": $OLD_SHARD_LENGTH,
      "encodedPublicKey": "$OLD_PUBLIC_KEY"
    }
  },
  {
    "op": "replace",
    "path": "/spec/rekor/signer/keyRef",
    "value": {"name": "rekor-signer-key", "key": "private"}
  }
]
EOF
Note: If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a separate update to remove it:
Example
read -r -d '' SECURESIGN_PATCH_2 <<EOF
[
  {
    "op": "remove",
    "path": "/spec/rekor/signer/keyPasswordRef"
  }
]
EOF
Apply this update after applying the first update.
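Apply it with the same patch command used in the next step:
Example
oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_2"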
Update the Securesign instance:
Example
oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_1"
Wait for the Rekor server to redeploy with the new signer key:
Example
oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready
Get the new public key:
Example
export NEW_KEY_NAME=new-rekor.pub
curl $(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o $NEW_KEY_NAME
Configure The Update Framework (TUF) service to use the new Rekor public key.
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
oc extract --to "${KEYDIR}/" secret/tuf-root-keys
oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Find the active Rekor signer key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Rekor signer key file name, for example, rekor.pub. Set an environment variable with this active Rekor signer key file name:
Example
export ACTIVE_KEY_NAME=rekor.pub
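If you prefer the command line, you can also list the target file names recorded in the targets metadata, assuming the jq binary is installed:
Example
jq -r '.signed.targets | keys[]' "${TUF_REPO}/targets.json"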
Update the Rekor signer key with the old public key:
Example
echo $OLD_PUBLIC_KEY | base64 -d > $ACTIVE_KEY_NAME
Expire the old Rekor signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${ACTIVE_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --rekor-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new Rekor signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${NEW_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Upload these changes to the TUF server:
Example
oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
Delete the working directory:
Example
rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
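For example, a hypothetical round trip against a container image, where IMAGE_NAME, SIGNING_IDENTITY, and OIDC_ISSUER_URL are placeholders for your environment:
Example
cosign sign -y IMAGE_NAME
cosign verify --certificate-identity=SIGNING_IDENTITY --certificate-oidc-issuer=OIDC_ISSUER_URL IMAGE_NAME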
1.3.2. Rotating the Certificate Transparency log signer key
You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the current log tree and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
oc project trusted-artifact-signer
Make a backup of the current CT log configuration and keys:
Example
export SERVER_CONFIG_NAME=$(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}')
oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb
oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem
oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem
oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem
Capture the current tree identifier:
Example
export OLD_TREE_ID=$(oc get ctlog -o jsonpath='{.items[0].status.treeID}')
Set the log tree to the DRAINING state:
Example
oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING
While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
Once the queue has been fully drained, freeze the log:
Example
oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN
Create a new Merkle tree, and capture the new tree identifier:
Example
export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver:8091 --display_name=ctlog-tree)
Generate a new certificate, along with new public and private keys:
Example
openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"
Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
Update the CT log configuration.
- Open the config.txtpb file for editing. For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old path to the private key with ctfe-keys/private-0:
Example
...
log_configs:{
  # frozen log
  config:{
    log_id:2066075212146181968
    prefix:"trusted-artifact-signer-0"
    roots_pem_file:"/ctfe-keys/fulcio-0"
    private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private-0" password:"Example123"}}
    public_key:{der:"0Y0\x13\x06\x07*\x86H\xce=\x02\x01\x06\x08*\x86H\xce=\x03\x01\x07\x03B\x00\x04)'.\xffUJ\xe2s)\xefR\x8a\xfcO\xdcewȶy\xa7\x9d<\x13\xb0\x1c\x99\x96\xe4'\xe3v\x07:\xc8I+\x08J\x9d\x8a\xed\x06\xe4\xaeI:q\x98\xf4\xbc<o4VD\x0cr\xf9\x9c\xecxT\x84"}
    not_after_limit:{seconds:1728056285 nanos:012111000}
    ext_key_usages:"CodeSigning"
    log_backend_name:"trillian"
  }
Note: You can get the current time values for seconds and nanoseconds by running the date +%s and date +%N commands.
Important: The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.
- Copy and paste the frozen log config block, appending it to the configuration file to create a new entry.
- Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to ctfe-keys/private, remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range:
Example
...
log_configs:{
  # frozen log
  ...
  # new active log
  config:{
    log_id: NEW_TREE_ID
    prefix:"trusted-artifact-signer"
    roots_pem_file:"/ctfe-keys/fulcio-0"
    private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"ctfe-keys/private" password:"CHANGE_ME"}}
    ext_key_usages:"CodeSigning"
    not_after_start:{seconds:1713201754 nanos:155663000}
    log_backend_name:"trillian"
  }
Add the NEW_TREE_ID, and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.
Important: The not_after_start field defines the beginning of the timestamp range inclusively. This means the log starts accepting certificates at this point in time.
Create a new secret resource:
Example
oc create secret generic ctlog-config \
  --from-file=config=config.txtpb \
  --from-file=private=new-ctlog.pass.pem \
  --from-file=public=new-ctlog-public.pem \
  --from-file=fulcio-0=fulcio-0.pem \
  --from-file=private-0=private.pem \
  --from-file=public-0=public.pem \
  --from-literal=password=CHANGE_ME
Replace CHANGE_ME with the new private key password.
Configure The Update Framework (TUF) service to use the new CT log public key.
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
oc extract --to "${KEYDIR}/" secret/tuf-root-keys
oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub. Set an environment variable with this active CT log public key file name:
Example
export ACTIVE_CTFE_NAME=ctfe.pub
Extract the active CT log public key from OpenShift:
Example
oc get secret $(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > $ACTIVE_CTFE_NAME
Expire the old CT log signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "$ACTIVE_CTFE_NAME" \
  --ctlog-uri "https://ctlog.rhtas" \
  --ctlog-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new CT log signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "new-ctlog-public.pem" \
  --ctlog-uri "https://ctlog.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Upload these changes to the TUF server:
Example
oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
Update the Securesign CT log configuration with the new tree identifier:
Example
read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/ctlog/serverConfigRef",
    "value": {"name": "ctlog-config"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/treeID",
    "value": $NEW_TREE_ID
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/privateKeyRef",
    "value": {"name": "ctlog-config", "key": "private"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/privateKeyPasswordRef",
    "value": {"name": "ctlog-config", "key": "password"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/publicKeyRef",
    "value": {"name": "ctlog-config", "key": "public"}
  }
]
EOF
Patch the Securesign instance:
Example
oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the CT log server to redeploy:
Example
oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
Delete the working directory:
Example
rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new CT log signer key.
1.3.3. Rotating the Fulcio certificate
You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
oc project trusted-artifact-signer
Generate a new certificate, along with new public and private keys:
Example
openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem
Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
Create a new secret:
Example
oc create secret generic fulcio-config \
  --from-file=private=new-fulcio.pass.pem \
  --from-file=cert=new-fulcio.cert.pem \
  --from-literal=password=CHANGE_ME
Replace CHANGE_ME with a new password.
Note: The password here must match the password used for generating the new private and public keys.
Configure The Update Framework (TUF) service to use the new Fulcio certificate.
Set up your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
oc extract --to "${KEYDIR}/" secret/tuf-root-keys
oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:
Example
export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
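If you prefer to read the target names from the command line instead of opening the file in an editor, you can list them with jq (a convenience sketch, assuming jq is installed and 1.targets.json is the latest target file):
Example
jq -r '.signed.targets | keys[]' "${TUF_REPO}/1.targets.json"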
Extract the active Fulcio certificate from OpenShift:
Example
oc get secret $(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > $ACTIVE_CERT_NAME
Expire the old certificate:
Example
Copy to Clipboard Copied! Toggle word wrap Toggle overflow tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-fulcio-target "$ACTIVE_CERT_NAME" \ --fulcio-uri "https://fulcio.rhtas" \ --fulcio-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"
tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-fulcio-target "$ACTIVE_CERT_NAME" \ --fulcio-uri "https://fulcio.rhtas" \ --fulcio-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"
Add the new Fulcio certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-fulcio-target "new-fulcio.cert.pem" \
  --fulcio-uri "https://fulcio.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Upload these changes to the TUF server:
Example
oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
Delete the working directory:
Example
rm -r $WORK
Update the Securesign Fulcio configuration:
Example
read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/privateKeyRef",
    "value": {"name": "fulcio-config", "key": "private"}
  },
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/privateKeyPasswordRef",
    "value": {"name": "fulcio-config", "key": "password"}
  },
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/caRef",
    "value": {"name": "fulcio-config", "key": "cert"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/rootCertificates",
    "value": [{"name": "fulcio-config", "key": "cert"}]
  }
]
EOF
Patch the Securesign instance:
Example
oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the Fulcio and CT log servers to redeploy:
Example
oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready
oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
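As a quick smoke test of the rotated certificate, you can sign and verify a test artifact with cosign. This is a sketch only: the FULCIO_URL, REKOR_URL, OIDC_ISSUER, SIGNING_EMAIL, and IMAGE values are placeholders for your environment, not values defined by this procedure.
Example
cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$OIDC_ISSUER $IMAGE
cosign verify --certificate-identity=$SIGNING_EMAIL --certificate-oidc-issuer=$OIDC_ISSUER $IMAGE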
1.3.4. Rotating the Timestamp Authority signer key and certificate chain
You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc and openssl binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
oc project trusted-artifact-signer
Generate a new certificate chain, and a new signer key.
Important: The new certificate and keys must have unique file names.
Create a temporary working directory:
Example
mkdir certs && cd certs
Create the root certificate authority (CA) private key, and set a password:
Example
openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
  -keyout rootCA.key.pem -out rootCA.crt.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
  -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"
Replace CHANGE_ME with a new password.
Create the intermediate CA private key and certificate signing request (CSR), and set a password:
Example
openssl req -newkey rsa:2048 -sha256 \
  -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"
Replace CHANGE_ME with a new password.
Sign the intermediate CA certificate with the root CA:
Example
openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
  -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"
Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.
Create the leaf CA private key and CSR, and set a password:
Example
openssl req -newkey rsa:2048 -sha256 \
  -keyout leafCA.key.pem -out leafCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"
Sign the leaf CA certificate with the intermediate CA:
Example
openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
  -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"
Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.
Create the certificate chain by combining the newly created certificates together:
Example
cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
Create a new secret resource with the signer key:
Example
oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem
Create a new secret resource with the new certificate chain:
Example
oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-tsa.certchain.pem
Create a new secret resource for the password:
Example
oc create secret generic rotated-password --from-literal=rotated-password=CHANGE_ME
Replace CHANGE_ME with the intermediate CA private key password.
Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:
Example
export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
export TSA_URL=$(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp
curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME
Update the Securesign TSA configuration:
Example
read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/tsa/signer/certificateChain",
    "value": {
      "certificateChainRef": {"name": "rotated-cert-chain", "key": "rotated-cert-chain"}
    }
  },
  {
    "op": "replace",
    "path": "/spec/tsa/signer/file",
    "value": {
      "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"},
      "passwordRef": {"name": "rotated-password", "key": "rotated-password"}
    }
  }
]
EOF
Patch the Securesign instance:
Example
oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the TSA server to redeploy with the new signer key and certificate chain:
Example
oc get pods -w -l app.kubernetes.io/name=tsa-server
Get the new certificate chain:
Example
export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pem
curl $TSA_URL/certchain -o $NEW_CERT_CHAIN_NAME
Configure The Update Framework (TUF) service to use the new TSA certificate chain.
Set up your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
oc extract --to "${KEYDIR}/" secret/tuf-root-keys
oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Expire the old TSA certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --tsa-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new TSA certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Upload these changes to the TUF server:
Example
oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
Delete the working directory:
Example
rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate.
1.4. Using your own certificate authority bundle
You can bring your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- Your CA root certificate.
- A workstation with the oc binary installed.
Procedure
Log in to OpenShift from the command line:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
Example
oc project trusted-artifact-signer
Create a new ConfigMap by using your organization’s CA root certificate bundle:
Example
oc create configmap custom-ca-bundle --from-file=ca-bundle.crt
Important: The certificate filename must be ca-bundle.crt.
Open the Securesign resource for editing:
Example
Example
oc edit Securesign securesign-sample
Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:
Example
apiVersion: rhtas.redhat.com/v1alpha1
kind: Securesign
metadata:
  name: example-instance
  annotations:
    rhtas.redhat.com/trusted-ca: custom-ca-bundle
spec:
...
- Save, and quit the editor.
Open the Fulcio resource for editing:
Example
oc edit Fulcio securesign-sample
Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:
Example
apiVersion: rhtas.redhat.com/v1alpha1
kind: Fulcio
metadata:
  name: example-instance
  annotations:
    rhtas.redhat.com/trusted-ca: custom-ca-bundle
spec:
...
- Save, and quit the editor.
- Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.
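One way to watch the pods restart while the operator reconfigures is to follow the pod list (a simple sketch; add a label selector, as in the earlier examples, to narrow the output):
Example
oc get pods -w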
Chapter 2. Red Hat Enterprise Linux
2.1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical when there is data loss due to hardware failure or accidental data deletion.
For Red Hat Trusted Artifact Signer (RHTAS) deployments on Red Hat Enterprise Linux, you can create encrypted backups of your signing data on a local file system.
2.1.1. Backing up your Trusted Artifact Signer data
You can schedule automatic backups of your Red Hat Trusted Artifact Signer (RHTAS) data to a mounted file system. Data backups are encrypted with SSL, and compressed.
The RHTAS service does not support concurrent manual backup and restore operations.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Open the RHTAS Ansible Playbook for editing.
Under the tas_single_node_backup_restore.backup section, set the enabled variable to true:
Example
tas_single_node_backup_restore:
  backup:
    enabled: true
By default, a daily backup job runs at midnight every day. You can change this to better fit your schedule.
Example
tas_single_node_backup_restore:
  backup:
    enabled: true
    schedule: "*-*-* 00:00:00"
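The schedule value appears to use the systemd calendar event syntax (an assumption worth confirming for your release). For example, a weekly backup every Sunday at 02:00 might look like this:
Example
tas_single_node_backup_restore:
  backup:
    enabled: true
    # Runs every Sunday at 02:00 (systemd calendar syntax)
    schedule: "Sun *-*-* 02:00:00"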
Set a passphrase, and specify the local backup directory:
Example
tas_single_node_backup_restore:
  backup:
    enabled: true
    schedule: "*-*-* 00:00:00"
    force_run: false
    passphrase: "example123"
    directory: /root/tas_backups
- Optional: To start an immediate backup job, set the force_run variable to true.
- Save the changes, and quit the editor.
Run the RHTAS Ansible Playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
After the backup finishes, the resulting encrypted and compressed file has the name format BACKUP-<date-and-time>-UTC.tar.gz.enc.
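To confirm that a backup job produced an archive, you can list the configured backup directory (a quick check; the path below matches the directory value from the earlier example):
Example
ls -l /root/tas_backups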
2.1.2. Restoring your Trusted Artifact Signer data
You can restore snapshots of your Red Hat Trusted Artifact Signer (RHTAS) data from a backup source.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- An SSH connection to the managed node, with root-level privileges on the managed node.
- The backup source file is available.
- Know the passphrase used for the backup source.
Procedure
- Copy the backup data file to a directory on the Ansible control node.
- Open the RHTAS Ansible Playbook for editing.
Under the tas_single_node_backup_restore.restore section, set the enabled variable to true:
Example
tas_single_node_backup_restore:
  ...
  restore:
    enabled: true
Specify the source location of the backup file, and give the correct passphrase:
Example
tas_single_node_backup_restore:
  ...
  restore:
    enabled: true
    source: "PATH_TO_BACKUP_FILE"
    passphrase: "example123"
- Under the tas_single_node_backup_restore.backup section, verify that the force_run variable is set to false. If force_run is set to true, change it to false.
Run the RHTAS Ansible Playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
The restoration process starts, and re-runs all tasks to validate the integrity of the RHTAS service.
2.2. The Update Framework
As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important for maintaining a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.
2.2.1. Trusted Artifact Signer’s implementation of The Update Framework
Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store public keys, and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, and this makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This simple HTTP server hosts the TUF repository of the individual targets.
By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
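For example, to see when the metadata in a local copy of the TUF repository expires, you can read the expires field from each metadata file (a convenience sketch, assuming jq is installed and the metadata files are in the current directory):
Example
jq -r '.signed.expires' root.json snapshot.json targets.json timestamp.json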
2.2.2. Updating The Update Framework metadata files
By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.
This procedure walks you through refreshing the root, and non-root metadata files.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux (RHEL) managed by Ansible.
- A workstation with the rsync, and podman binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.
Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. An example URL address would be https://cli-server.example.com, given the tas_single_node_base_hostname value of example.com.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
export TIMESTAMP_EXPIRATION="in 10 days"
export SNAPSHOT_EXPIRATION="in 26 weeks"
export TARGETS_EXPIRATION="in 26 weeks"
export ROOT_EXPIRATION="in 26 weeks"
Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with your relevant values.
Set the expiration durations according to your requirements.
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
You can update the timestamp, snapshot, and targets metadata all in one command:
Example
tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
  --targets-expires "${TARGETS_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Note: You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can just update the timestamp metadata file by running the following command:
tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Only update the root expiration date if it is about to expire:
Example
tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"
Note: You can skip this step if the root file is not close to expiring.
Update the root version:
Example
tuftool root bump-version "${ROOT}"
Sign the root metadata file again:
Example
tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"
Set the new root version, and copy the root metadata file in place:
Example
export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
cp "${ROOT}" "${TUF_REPO}/root.json"
cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"
Upload these changes to the TUF server.
Create a compressed archive of the TUF repository:
Example
tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
Update the RHTAS Ansible Playbook with these two lines:
Example
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
Run the RHTAS Ansible Playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
2.3. Rotate your certificates and keys
As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat Enterprise Linux. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:
- Rekor
- Certificate Transparency log
- Fulcio
- Timestamp Authority
2.3.1. Rotating the Rekor signer key
You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.
This procedure requires downtime for the Rekor service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the rekor-cli binary from the local command-line interface (CLI) tool download page to your workstation. Open a web browser, and go to the CLI server web page.
Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. An example URL address would be https://cli-server.example.com, given that the value of tas_single_node_base_hostname is example.com.
- From the download page, go to the rekor-cli download section, and click the link for your platform.
From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip rekor-cli-amd64.gz
chmod +x rekor-cli-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the download page, go to the tuftool download section, and click the link for your platform.
From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Assign shell variables to the base hostname, and the Rekor URL:
Example
export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
export REKOR_URL=https://rekor.${BASE_HOSTNAME}
Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the tas_single_node_base_hostname variable.
Get the log tree identifier for the active shard:
Example
export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
Configure your shell environment:
Example
export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with values for your environment.
Set the log tree to the DRAINING state:
Example
ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --admin_server=trillian-logserver-pod:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING"
While draining, the log tree will not accept any new entries. Watch and wait for the queue to empty.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
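One way to watch the queue drain is to poll the active tree size until it stops changing between polls (a sketch, assuming the watch and jq utilities are installed on your workstation):
Example
watch -n 10 "rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize"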
Freeze the log tree:
Example
ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"
Get the length of the frozen log tree:
Example
export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
Get Rekor’s public key for the old shard:
Example
export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
Create a new log tree:
Example
export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=rekor-tree | tr -d '[:punct:][:blank:][:cntrl:]'")
Now you have two log trees, one frozen tree, and a new tree that will become the active shard.
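Before continuing, you can confirm that both tree identifiers were captured correctly (a quick sanity check; the output should show two distinct numeric identifiers):
Example
echo "frozen tree: ${OLD_TREE_ID}, new tree: ${NEW_TREE_ID}"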
Create a new private key and an associated public key:
Example
openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem
openssl ec -in new-rekor.pem -pubout -out new-rekor.pub
export NEW_KEY_NAME=new-rekor.pub
Important: The new key must have a unique file name.
Get the active Rekor signing key, and save the key to a file:
Example
rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/rekor-signer0.key ./rekor-signer0.key
echo "$OLD_PUBLIC_KEY" | base64 -d > rekor.pub
Update the Rekor configuration in the RHTAS Ansible playbook:
Example
tas_single_node_rekor:
  active_signer_id: "new-rekor-key"
  active_tree_id: NEW_TREE_ID
  private_keys:
    - id: "new-rekor-key"
      key: |
        {{ lookup('file', 'new-rekor.pem') }}
    - id: "private-0"
      key: |
        {{ lookup('file', 'rekor-signer0.key') }}
  public_keys:
    - id: "new-rekor-pubkey"
      key: |
        {{ lookup('file', 'new-rekor.pub') }}
    - id: "public-0"
      key: |
        {{ lookup('file', 'rekor.pub') }}
  sharding_config:
    - tree_id: OLD_TREE_ID
      tree_length: OLD_SHARD_LENGTH
      pem_pub_key: "public-0"
Replace NEW_TREE_ID, OLD_TREE_ID, and OLD_SHARD_LENGTH with the values of the $NEW_TREE_ID, $OLD_TREE_ID, and $OLD_SHARD_LENGTH environment variables set earlier.
Configure The Update Framework (TUF) service to use the new Rekor public key.
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_URL="https://tuf.${BASE_HOSTNAME}"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Assign an environment variable to the active Rekor signer key file name:
Example
export ACTIVE_KEY_NAME=rekor.pub
Expire the old Rekor signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${ACTIVE_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --rekor-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new Rekor signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${NEW_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Create a compressed archive file of the updated TUF repository:
Example
tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:
Example
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
Delete the working directory:
Example
rm -r $WORK
Run the RHTAS Ansible Playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
2.3.2. Rotating the Certificate Transparency log signer key
You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.
Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. An example URL address would be https://cli-server.example.com, given the tas_single_node_base_hostname value of example.com.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Configure your shell environment:
Example
export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the tas_single_node_base_hostname variable.
Download the CTlog configuration map, the CTlog keys, and the Fulcio root certificate to your workstation:
Example
rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/configs/ctlog-config.yaml ./ctlog-config.yaml
rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.key ./ctfe.key
rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.pub ./ctfe.pub
rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem ./fulcio-0.pem
Capture the current tree identifier:
Example
export OLD_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo cat /etc/rhtas/configs/ctlog-treeid-config.yaml | grep 'tree_id:' | awk '{print \$2}'" | tr -d '[:punct:][:blank:][:cntrl:]')
Set the log tree to the DRAINING state:
Example
ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=DRAINING"
While draining, the log tree will not accept any new entries. Watch and wait for the queue to empty.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
Once the queue has been fully drained, freeze the log:
Example
ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"
Create a new Merkle tree, and capture the new tree identifier:
Example
export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=ctlog-tree" | tr -d '[:punct:][:blank:][:cntrl:]')
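Optionally, confirm that both tree identifiers were captured and differ before continuing; each should print as a long numeric value:
Example
echo "Old tree ID: ${OLD_TREE_ID}"
echo "New tree ID: ${NEW_TREE_ID}"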
Generate a new certificate, along with new public and private keys:
Example
openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"
Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
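Optionally, you can verify that the password-protected key is readable with the password you set, and that its public key matches the one you exported. This is a quick sanity check that uses only the files created above:
Example
# Confirm the encrypted private key loads with the password
openssl ec -in new-ctlog.pass.pem -passin pass:"CHANGE_ME" -noout
# Confirm the derived public key matches new-ctlog-public.pem (no output means they match)
openssl ec -in new-ctlog.pass.pem -passin pass:"CHANGE_ME" -pubout 2>/dev/null | diff - new-ctlog-public.pem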
Update the CT log configuration.
- Open the RHTAS Ansible playbook for editing.
If you are configuring CTlog signer key rotation for the first time, add the following to the tas_single_node_ctlog.sharding_config section:
Example
tas_single_node_ctlog:
  sharding_config:
    - treeid: OLD_TREE_ID # frozen log
      prefix: "rhtasansible"
      private_key: "private-0"
      password: "rhtas"
      root_pem_file: "/ctfe-keys/fulcio-0"
      not_after_limit:
        seconds: 1728056285
        nanos: 012111000
Replace OLD_TREE_ID with the value of the $OLD_TREE_ID environment variable.
Note: You can get the current time values for seconds and nanoseconds by running the date +%s and date +%N commands, respectively.
Important: The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.
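For example, running both commands back to back gives the two values to record in the timestamp:
Example
date +%s # seconds since the epoch, for example 1728056285
date +%N # nanoseconds within the current second, for example 012111000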
- Copy and paste the frozen log block, appending it to the tas_single_node_ctlog.sharding_config section to create a new entry. In the new log block, set treeid to the new tree identifier, change prefix to trusted-artifact-signer, change the private_key path to private-1, change not_after_limit to not_after_start, set the timestamp range, and update tas_single_node_fulcio.ct_log_prefix so that Fulcio makes use of the new log:
Example
tas_single_node_ctlog:
  sharding_config:
    ... # frozen log
    - treeid: NEW_TREE_ID # new active log
      prefix: "trusted-artifact-signer"
      private_key: "private-1"
      password: "CHANGE_ME"
      root_pem_file: "/ctfe-keys/fulcio-0"
      not_after_start:
        seconds: 1713201754
        nanos: 155663000
tas_single_node_fulcio:
  ct_log_prefix: "trusted-artifact-signer"
Replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.
Important: The not_after_start field defines the beginning of the timestamp range, inclusive. The log starts accepting certificates at this point in time.
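If you want the range to begin at a specific human-readable time rather than now, GNU date (assumed here) can convert a date string into the seconds value; the date shown is only an illustration:
Example
date -d "2024-04-15 17:22:34 UTC" +%s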
Update the tas_single_node_ctlog section for CTlog to distribute the new keys to the managed node:
Example
tas_single_node_ctlog:
  ...
  private_keys:
    - id: private-0
      key: |
        {{ lookup('file', 'ctfe.key') }}
    - id: private-1
      key: |
        {{ lookup('file', 'new-ctlog.pass.pem') }}
  public_keys:
    - id: public-0
      key: |
        {{ lookup('file', 'ctfe.pub') }}
    - id: public-1
      key: |
        {{ lookup('file', 'new-ctlog-public.pem') }}
Configure The Update Framework (TUF) service to use the new CT log public key.
Configure your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_URL="https://tuf.${BASE_HOSTNAME}"
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Assign an environment variable to the active CT log signer key file name:
Example
export ACTIVE_CTFE_NAME=ctfe.pub
Expire the old CT log signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "$ACTIVE_CTFE_NAME" \
  --ctlog-uri "https://ctlog.rhtas" \
  --ctlog-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new CT log signer key:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "new-ctlog-public.pem" \
  --ctlog-uri "https://ctlog.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Create a compressed archive file of the updated TUF repository:
Example
tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
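Optionally, list the archive contents to confirm the tuf-repo directory and its metadata files were captured:
Example
tar -tzf repository.tar.gz | head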
Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:
Example
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
- Save the changes to the playbook, and close your text editor.
Run the RHTAS Ansible playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
Delete the working directory:
Example
rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new CT log signer key.
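As a hedged smoke test of the rotation, you can run a keyless signing of a test image. The Fulcio and Rekor hostnames below assume the same hostname pattern used for the TUF URL in this procedure, and IMAGE_NAME and OIDC_ISSUER_URL are placeholders you must supply:
Example
cosign sign -y \
  --fulcio-url=https://fulcio.${BASE_HOSTNAME} \
  --rekor-url=https://rekor.${BASE_HOSTNAME} \
  --oidc-issuer=OIDC_ISSUER_URL \
  IMAGE_NAME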
2.3.3. Rotating the Fulcio certificate
You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.
Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the tas_single_node_base_hostname value is example.com, then the URL address is https://cli-server.example.com.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Generate a new certificate, along with new public and private keys:
Example
openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem
Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
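Optionally, inspect the new self-signed certificate to confirm its subject and validity window before distributing it:
Example
openssl x509 -in new-fulcio.cert.pem -noout -subject -dates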
Update the RHTAS Ansible playbook by adding the new private key file name, the new certificate content, and the password to the tas_single_node_fulcio variable:
Example
tas_single_node_fulcio:
  root_ca: "{{ lookup('file', 'new-fulcio.cert.pem') }}"
  private_key: "{{ lookup('file', 'new-fulcio.pass.pem') }}"
  ca_passphrase: CHANGE_ME
Replace CHANGE_ME with the new private key password.
Note: The password here must match the password used for generating the new private and public keys.
Note: Red Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.
Configure The Update Framework (TUF) service to use the new Fulcio certificate.
Set up your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_URL="https://tuf.${BASE_HOSTNAME}"
export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:
Example
export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
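If you have jq installed, a hedged shortcut for listing the target file names instead of reading the JSON by hand is shown below; the metadata version prefix (1 here) can differ in your repository:
Example
jq -r '.signed.targets | keys[]' "${TUF_REPO}/1.targets.json"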
Get the active Fulcio certificate from the managed node:
Example
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem "${ACTIVE_CERT_NAME}"
Expire the old certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-fulcio-target "$ACTIVE_CERT_NAME" \
  --fulcio-uri "https://fulcio.rhtas" \
  --fulcio-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new Fulcio certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-fulcio-target "new-fulcio.cert.pem" \
  --fulcio-uri "https://fulcio.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Create a compressed archive file of the updated TUF repository:
Example
tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
Update the RHTAS Ansible playbook by adding the new compressed archive file content to the tas_single_node_trust_root variable:
Example
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
Delete the working directory:
Example
rm -r $WORK
Run the RHTAS Ansible playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
2.3.4. Rotating the Timestamp Authority signer key and certificate chain
You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.
Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the value of tas_single_node_base_hostname is example.com, then the URL address is https://cli-server.example.com.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
Example
gunzip tuftool-amd64.gz
chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
Example
sudo mv tuftool-amd64 /usr/local/bin/tuftool
Generate a new certificate chain, and a new signer key.
Important: The new certificate and keys must have unique file names.
Create a temporary working directory:
Example
mkdir certs && cd certs
Create the root certificate authority (CA) private key, and set a password:
Example
openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
  -keyout rootCA.key.pem -out rootCA.crt.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
  -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"
Replace CHANGE_ME with a new password.
Create the intermediate CA private key and certificate signing request (CSR), and set a password:
Example
openssl req -newkey rsa:2048 -sha256 \
  -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"
Replace CHANGE_ME with a new password.
Sign the intermediate CA certificate with the root CA:
Example
openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
  -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"
Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.
Create the leaf CA private key and CSR, and set a password:
Example
openssl req -newkey rsa:2048 -sha256 \
  -keyout leafCA.key.pem -out leafCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"
Sign the leaf CA certificate with the intermediate CA:
Example
openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
  -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"
Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.
Create the certificate chain by combining the newly created certificates together:
Example
cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
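Optionally, verify that the chain links correctly from the leaf up to the root before handing it to RHTAS; a successful check prints leafCA.crt.pem: OK:
Example
openssl verify -CAfile rootCA.crt.pem -untrusted intermediateCA.crt.pem leafCA.crt.pem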
Update the RHTAS playbook with the new certificate chain, private key, and password:
Example
tas_single_node_tsa:
  certificate_chain: "{{ lookup('file', 'new-tsa.certchain.pem') }}"
  signer_private_key: "{{ lookup('file', 'leafCA.key.pem') }}"
  ca_passphrase: CHANGE_ME
Replace CHANGE_ME with the leaf CA private key password.
Note: Red Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.
Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:
Example
export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
export TSA_URL=https://tsa.${BASE_HOSTNAME}/api/v1/timestamp
curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME
Configure The Update Framework (TUF) service to use the new TSA certificate chain.
Set up your shell environment:
Example
export WORK="${HOME}/trustroot-example"
export ROOT="${WORK}/root/root.json"
export KEYDIR="${WORK}/keys"
export INPUT="${WORK}/input"
export TUF_REPO="${WORK}/tuf-repo"
export TUF_URL="https://tuf.${BASE_HOSTNAME}"
export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pem
Create a temporary TUF directory structure:
Example
mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
Download the TUF contents to the temporary TUF directory structure:
Example
export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
cp "${TUF_REPO}/root.json" "${ROOT}"
Expire the old TSA certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --tsa-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Add the new TSA certificate:
Example
tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"
Create a compressed archive file of the updated TUF repository:
Example
tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:
Example
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
Delete the working directory:
Example
rm -r $WORK
Run the RHTAS Ansible playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
Update the cosign configuration with the updated TUF configuration:
Example
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate.
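As a hedged end-to-end check of the new chain, you can request and inspect a timestamp over any local file by using the standard RFC 3161 client in OpenSSL; artifact.txt is a placeholder for a file of your choosing:
Example
openssl ts -query -data artifact.txt -sha256 -cert -out request.tsq
curl -s -H "Content-Type: application/timestamp-query" --data-binary @request.tsq "$TSA_URL" -o response.tsr
openssl ts -reply -in response.tsr -text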
2.4. Using your own certificate authority bundle
You can bring your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- Your CA root certificate.
Procedure
- Open the RHTAS Ansible playbook for editing.
Under the tas_single_node_fulcio section, update the trusted_ca value with your custom CA bundle file:
Example
...
tas_single_node_fulcio:
  trusted_ca: "{{ lookup('file', 'ca-bundle.crt') }}"
...
Important: The certificate file name must be ca-bundle.crt.
- Save, and quit the editor.
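Before applying the change, you can optionally list the certificates inside the bundle to confirm it contains what you expect:
Example
openssl crl2pkcs7 -nocrl -certfile ca-bundle.crt | openssl pkcs7 -print_certs -noout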
Run the RHTAS Ansible playbook to apply the changes:
Example
ansible-playbook -i inventory play.yml
Appendix A. Restore owner references script
This Bash script restores the ownerReferences property when restoring Red Hat Trusted Artifact Signer (RHTAS) data to a different OpenShift cluster.
#!/bin/bash
# List of resources to check
RESOURCES=("Fulcio" "Rekor" "Trillian" "TimestampAuthority" "CTlog" "Tuf")
function validate_owner() {
    local RESOURCE=$1
    local ITEM=$2
    local OWNER_NAME=$3

    # Check all the labels exist and are the same
    LABELS=("app.kubernetes.io/instance" "app.kubernetes.io/part-of" "velero.io/backup-name" "velero.io/restore-name")
    for LABEL in "${LABELS[@]}"; do
        PARENT_LABEL=$(oc get Securesign "$OWNER_NAME" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
        CHILD_LABEL=$(oc get $RESOURCE "$ITEM" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
        if [[ -z "$CHILD_LABEL" || $CHILD_LABEL == "null" ]]; then
            echo " $LABEL label missing in $RESOURCE"
            return 1
        elif [[ -z "$PARENT_LABEL" || $PARENT_LABEL == "null" ]]; then
            echo " $LABEL label missing in Securesign"
            return 1
        elif [[ "$CHILD_LABEL" != "$PARENT_LABEL" ]]; then
            echo " $LABEL labels not matching: $CHILD_LABEL != $PARENT_LABEL"
            return 1
        fi
    done
    return 0
}

for RESOURCE in "${RESOURCES[@]}"; do
    echo "Checking $RESOURCE ..."
    # Get all resources missing ownerReferences
    MISSING_REFS=$(oc get $RESOURCE -o json | jq -r '.items[] | select(.metadata.ownerReferences == null) | .metadata.name')
    for ITEM in $MISSING_REFS; do
        echo " Missing ownerReferences in $RESOURCE/$ITEM"
        # Find the expected owner based on labels
        OWNER_NAME=$(oc get $RESOURCE "$ITEM" -o json | jq -r '.metadata.labels["app.kubernetes.io/name"]')
        if [[ -z "$OWNER_NAME" || "$OWNER_NAME" == "null" ]]; then
            echo " Skipping $RESOURCE/$ITEM: name not found in labels"
            continue
        fi
        if ! validate_owner $RESOURCE $ITEM $OWNER_NAME; then
            echo " Skipping ..."
            continue
        fi
        # Try to get the owner's UID from Securesign
        OWNER_UID=$(oc get Securesign "$OWNER_NAME" -o jsonpath='{.metadata.uid}' 2>/dev/null)
        if [[ -z "$OWNER_UID" || "$OWNER_UID" == "null" ]]; then
            echo " Failed to find Securesign/$OWNER_NAME UID, skipping ..."
            continue
        fi
        echo " Found owner: Securesign/$OWNER_NAME (UID: $OWNER_UID)"
        # Patch the object with the restored ownerReference
        oc patch $RESOURCE "$ITEM" --type='merge' -p "{
            \"metadata\": {
                \"ownerReferences\": [
                    {
                        \"apiVersion\": \"rhtas.redhat.com/v1alpha1\",
                        \"kind\": \"Securesign\",
                        \"name\": \"$OWNER_NAME\",
                        \"uid\": \"$OWNER_UID\",
                        \"controller\": true,
                        \"blockOwnerDeletion\": true
                    }
                ]
            }
        }"
        echo "Restored ownerReferences for $RESOURCE/$ITEM"
    done
done
echo "Done"