Chapter 2. Red Hat Enterprise Linux
2.1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical in the event of data loss caused by hardware failure or accidental deletion.
For Red Hat Trusted Artifact Signer (RHTAS) deployments on Red Hat Enterprise Linux, you can create encrypted backups of your signing data on a local file system.
2.1.1. Backing up your Trusted Artifact Signer data
You can schedule automatic backups of your Red Hat Trusted Artifact Signer (RHTAS) data to a mounted file system. Data backups are compressed, and encrypted with SSL.
The RHTAS service does not support concurrent manual backup and restore operations.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Open the RHTAS Ansible Playbook for editing.
- Under the tas_single_node_backup_restore.backup section, set the enabled variable to true:

  tas_single_node_backup_restore:
    backup:
      enabled: true

  By default, a daily backup job runs at midnight. You can change this schedule to better fit your needs:
  tas_single_node_backup_restore:
    backup:
      enabled: true
      schedule: "*-*-* 00:00:00"

- Set a passphrase, and specify the local backup directory:
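  The following is a minimal sketch of this configuration. The passphrase and directory key names are assumptions; check the tas_single_node role documentation for the exact variable names in your release:

  tas_single_node_backup_restore:
    backup:
      enabled: true
      schedule: "*-*-* 00:00:00"
      passphrase: "CHANGE_ME"               # encryption passphrase for the backup archives (assumed key name)
      directory: /var/local/rhtas-backups   # local file system path for the backup files (assumed key name)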
- Optional: To start an immediate backup job, set the force_run variable to true.
- Save the changes, and quit the editor.
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

  After the backup finishes, the resulting compressed and encrypted file has the name format BACKUP-<date-and-time>-UTC.tar.gz.enc.
2.1.2. Restoring your Trusted Artifact Signer data
You can restore snapshots of your Red Hat Trusted Artifact Signer (RHTAS) data from a backup source.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- An SSH connection to the managed node, with root-level privileges on the managed node.
- The backup source file is available.
- The passphrase used for the backup source.
Procedure
- Copy the backup data file to a directory on the Ansible control node.
- Open the RHTAS Ansible Playbook for editing.
- Under the tas_single_node_backup_restore.restore section, set the enabled variable to true:

  tas_single_node_backup_restore:
    ...
    restore:
      enabled: true

- Specify the source location of the backup file, and give the correct passphrase:
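  The following is a minimal sketch of this configuration. The source and passphrase key names are assumptions; check the tas_single_node role documentation for the exact variable names in your release:

  tas_single_node_backup_restore:
    restore:
      enabled: true
      source: /var/local/rhtas-backups/BACKUP-<date-and-time>-UTC.tar.gz.enc   # path to the backup archive (assumed key name)
      passphrase: "CHANGE_ME"   # must match the passphrase used when the backup was created (assumed key name)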
- Under the tas_single_node_backup_restore.backup section, verify that the force_run variable is set to false. If force_run is set to true, then set it to false.
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

  The restoration process starts, and re-runs all tasks to validate the integrity of the RHTAS service.
2.2. The Update Framework
As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) helps you maintain a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.
2.2.1. Trusted Artifact Signer’s implementation of The Update Framework
Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This HTTP server hosts the TUF repository of the individual targets.
By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
2.2.2. Updating The Update Framework metadata files
By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.
This procedure walks you through refreshing the root and non-root metadata files.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux (RHEL) managed by Ansible.
- A workstation with the rsync and podman binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.

  Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the tas_single_node_base_hostname value is example.com, the URL address is https://cli-server.example.com.

  Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

  - From the download page, go to the tuftool download section, and click the link for your platform.
  - Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

    $ gunzip tuftool-amd64.gz
    $ chmod +x tuftool-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
- Configure your shell environment. Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with your relevant values, and set the expiration durations according to your requirements.
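  The following is a minimal sketch of the required exports, based on the variable names used later in this procedure. The working directory path and the expiration durations are assumptions that you should adjust:

  $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
  $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
  $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
  $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
  $ export WORK="${HOME}/tuf-update"        # assumed working directory
  $ export ROOT="${WORK}/root/root.json"
  $ export KEYDIR="${WORK}/keys"
  $ export INPUT="${WORK}/input"
  $ export TUF_REPO="${WORK}/tuf-repo"
  $ export ROOT_EXPIRATION="in 52 weeks"    # assumed example durations
  $ export TARGETS_EXPIRATION="in 26 weeks"
  $ export SNAPSHOT_EXPIRATION="in 26 weeks"
  $ export TIMESTAMP_EXPIRATION="in 1 week"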
- Create a temporary TUF directory structure:

  $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

- Download the TUF contents to the temporary TUF directory structure:

  $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
  $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
  $ cp "${TUF_REPO}/root.json" "${ROOT}"

- You can update the timestamp, snapshot, and targets metadata all in one command:
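  The following is a minimal sketch of the combined update, using the upstream tuftool update subcommand. The key file names under ${KEYDIR} and the use of the current epoch time as version numbers are assumptions; verify the flags with tuftool update --help before running:

  $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --key "${KEYDIR}/snapshot.pem" \
      --key "${KEYDIR}/targets.pem" \
      --targets-expires "${TARGETS_EXPIRATION}" \
      --targets-version "$(date +%s)" \
      --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
      --snapshot-version "$(date +%s)" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --timestamp-version "$(date +%s)" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"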
  Note: You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can update just the timestamp metadata file by running the following command:
- Only update the root expiration date if it is about to expire:

  $ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"

  Note: You can skip this step if the root file is not close to expiring.
- Update the root version:

  $ tuftool root bump-version "${ROOT}"

- Sign the root metadata file again:

  $ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"

- Set the new root version, and copy the root metadata file into place:

  $ export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
  $ cp "${ROOT}" "${TUF_REPO}/root.json"
  $ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"

- Upload these changes to the TUF server.
- Create a compressed archive of the TUF repository:

  $ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo

- Update the RHTAS Ansible Playbook with these two lines:

  tas_single_node_trust_root:
    full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"

- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml
2.3. Rotate your certificates and keys
As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat Enterprise Linux. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:
- Rekor
- Certificate Transparency log
- Fulcio
- Timestamp Authority
2.3.1. Rotating the Rekor signer key
You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.
This procedure requires downtime to the Rekor service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Download the rekor-cli binary from the local command-line interface (CLI) tool download page to your workstation.
  - Open a web browser, and go to the CLI server web page.

    Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the value of tas_single_node_base_hostname is example.com, the URL address is https://cli-server.example.com.

  - From the download page, go to the rekor-cli download section, and click the link for your platform.
  - From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

    $ gunzip rekor-cli-amd64.gz
    $ chmod +x rekor-cli-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
- Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.

  Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

  - From the download page, go to the tuftool download section, and click the link for your platform.
  - From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

    $ gunzip tuftool-amd64.gz
    $ chmod +x tuftool-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
- Assign shell variables to the base hostname, and the Rekor URL:

  $ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
  $ export REKOR_URL=https://rekor.${BASE_HOSTNAME}

  Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the tas_single_node_base_hostname variable.

- Get the log tree identifier for the active shard:

  $ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)

- Configure your shell environment:

  $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
  $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
  $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
  $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')

  Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with values for your environment.
- Set the log tree to the DRAINING state:

  $ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --admin_server=trillian-logserver-pod:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING"

  While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.

  Important: You must wait for the queue to be empty before proceeding to the next step. If leaves are still integrating while the log is draining, freezing the log tree can cause the log path to exceed the maximum merge delay (MMD) threshold.
- Freeze the log tree:

  $ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"

- Get the length of the frozen log tree:

  $ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
- Get Rekor’s public key for the old shard:

  $ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')

- Create a new log tree:

  $ export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=rekor-tree | tr -d '[:punct:][:blank:][:cntrl:]'")

  Now you have two log trees: a frozen tree, and a new tree that will become the active shard.
- Create a new private key and an associated public key:

  $ openssl ecparam -genkey -name secp384r1 -noout -out new-rekor.pem
  $ openssl ec -in new-rekor.pem -pubout -out new-rekor.pub
  $ export NEW_KEY_NAME=new-rekor.pub

  Important: The new key must have a unique file name.
- Get the active Rekor signing key, and save the old public key to a file:

  $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/rekor-signer0.key ./rekor-signer0.key
  $ echo "$OLD_PUBLIC_KEY" | base64 -d > rekor.pub

- Update the Rekor configuration in the RHTAS Ansible playbook, as shown in the sketch after this step.
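  The following is a minimal sketch of the idea: point the active shard at the new tree and key, and keep the frozen shard with its length and old public key. The key names (active_tree_id, sharding_config, and so on) are assumptions; check the tas_single_node role documentation for the exact variable names in your release:

  tas_single_node_rekor:
    active_tree_id: <NEW_TREE_ID>                 # value of $NEW_TREE_ID (assumed key name)
    private_key: "{{ lookup('file', 'new-rekor.pem') }}"
    sharding_config:
      - tree_id: <OLD_TREE_ID>                    # value of $OLD_TREE_ID
        tree_length: <OLD_SHARD_LENGTH>           # value of $OLD_SHARD_LENGTH
        encoded_public_key: <OLD_PUBLIC_KEY>      # value of $OLD_PUBLIC_KEY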
- Configure The Update Framework (TUF) service to use the new Rekor public key.
  - Configure your shell environment:
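    A minimal sketch of the required exports, assuming a local working directory whose layout matches the variables used in the following steps:

    $ export WORK="${HOME}/rekor-rotation"   # assumed working directory
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"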
  - Create a temporary TUF directory structure:

    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

  - Download the TUF contents to the temporary TUF directory structure:

    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"

  - Assign an environment variable to the active Rekor signer key file name:

    $ export ACTIVE_KEY_NAME=rekor.pub
  - Expire the old Rekor signer key:
  - Add the new Rekor signer key:
  - Create a compressed archive file of the updated TUF repository:

    $ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo

  - Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:

    tas_single_node_trust_root:
      full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"

  - Delete the working directory:

    $ rm -r $WORK
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

- Update the cosign configuration with the updated TUF configuration:

  $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
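  The TUF_URL variable is not set earlier in this procedure. A hedged assumption, following the base-hostname pattern used for the other services, is:

  $ export TUF_URL=https://tuf.${BASE_HOSTNAME}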
Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
2.3.2. Rotating the Certificate Transparency log signer key
You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.

  Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the tas_single_node_base_hostname value is example.com, the URL address is https://cli-server.example.com.

  Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

  - From the download page, go to the tuftool download section, and click the link for your platform.
  - Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

    $ gunzip tuftool-amd64.gz
    $ chmod +x tuftool-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
- Configure your shell environment:

  $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
  $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
  $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
  $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
  $ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE

  Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the tas_single_node_base_hostname variable.

- Download the CTlog configuration map, the CTlog keys, and the Fulcio root certificate to your workstation:
  $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/configs/ctlog-config.yaml ./ctlog-config.yaml
  $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.key ./ctfe.key
  $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.pub ./ctfe.pub
  $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem ./fulcio-0.pem

- Capture the current tree identifier:
  $ export OLD_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo cat /etc/rhtas/configs/ctlog-treeid-config.yaml | grep 'tree_id:' | awk '{print \$2}'" | tr -d '[:punct:][:blank:][:cntrl:]')
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Set the log tree to the
DRAINING
state:ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=DRAINING"
$ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=DRAINING"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow While draining, the tree log will not accept any new entries. Watch and wait for the queue to empty.
ImportantYou must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
- Once the queue has been fully drained, freeze the log:

  $ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"

- Create a new Merkle tree, and capture the new tree identifier:

  $ export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=ctlog-tree" | tr -d '[:punct:][:blank:][:cntrl:]')
- Generate a new certificate, along with new public and private keys:

  $ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
  $ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
  $ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"

  Replace CHANGE_ME with a new password.

  Important: The certificate and new keys must have unique file names.
Update the CT log configuration.
- Open the RHTAS Ansible playbook for editing.
  - If you are configuring CTlog signer key rotation for the first time, add the following frozen log entry to the tas_single_node_ctlog.sharding_config section:
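    A minimal sketch of a frozen log entry. The treeid, prefix, private_key, not_after_limit, and not_after_start field names come from this procedure; the remaining key names, paths, and values are assumptions to check against the Ansible collection documentation for your release:

    tas_single_node_ctlog:
      sharding_config:
        - treeid: <OLD_TREE_ID>
          prefix: trusted-artifact-signer-0        # assumed prefix for the frozen shard
          private_key: /ctfe-keys/private-0        # assumed path to the old CTlog private key
          password: CHANGE_ME                      # password of the old CTlog private key (assumed key name)
          not_after_limit:
            seconds: 1713201600                    # example value from date +%s
            nanos: 0                               # example value from date +%N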
    Replace OLD_TREE_ID with the contents of the $OLD_TREE_ID environment variable.

    Note: You can get the current time values for seconds and nanoseconds by running the date +%s and date +%N commands.

    Important: The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.
  - Copy and paste the frozen log block, appending it to the tas_single_node_ctlog.sharding_config section to create a new entry. In the new log block, set the treeid to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to private-1, change not_after_limit to not_after_start, set the timestamp range, and update tas_single_node_fulcio.ct_log_prefix so that Fulcio uses the new log:
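    A minimal sketch of the resulting configuration, with the same caveats about assumed key names and paths as the previous example:

    tas_single_node_ctlog:
      sharding_config:
        - treeid: <OLD_TREE_ID>
          prefix: trusted-artifact-signer-0
          private_key: /ctfe-keys/private-0
          password: CHANGE_ME
          not_after_limit:
            seconds: 1713201600
            nanos: 0
        - treeid: <NEW_TREE_ID>
          prefix: trusted-artifact-signer
          private_key: /ctfe-keys/private-1
          password: CHANGE_ME                      # password of the new CTlog private key
          not_after_start:
            seconds: 1713201600                    # example value from date +%s
            nanos: 0                               # example value from date +%N
    tas_single_node_fulcio:
      ct_log_prefix: trusted-artifact-signer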
    Replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.

    Important: The not_after_start field defines the beginning of the timestamp range inclusively. This means the log starts accepting certificates at this point in time.
  - Update the tas_single_node_ctlog section for CTlog to distribute the new keys to the managed node:
- Configure The Update Framework (TUF) service to use the new CT log public key.
  - Configure your shell environment:
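    A minimal sketch of the required exports, assuming a local working directory whose layout matches the variables used in the following steps:

    $ export WORK="${HOME}/ctlog-rotation"   # assumed working directory
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"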
  - Create a temporary TUF directory structure:
    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

  - Download the TUF contents to the temporary TUF directory structure:

    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"

  - Assign an environment variable to the active CT log signer key file name:

    $ export ACTIVE_CTFE_NAME=ctfe.pub

  - Expire the old CT log signer key:
  - Add the new CT log signer key:
  - Create a compressed archive file of the updated TUF repository:

    $ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo

  - Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:

    tas_single_node_trust_root:
      full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"

  - Save the changes to the playbook, and close your text editor.
- Run the RHTAS Ansible playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

- Delete the working directory:

  $ rm -r $WORK

- Update the cosign configuration with the updated TUF configuration:

  $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

Now, you are ready to sign and verify your artifacts with the new CT log signer key.
2.3.3. Rotating the Fulcio certificate
You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.

  Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the tas_single_node_base_hostname value is example.com, the URL address is https://cli-server.example.com.

  Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

  - From the download page, go to the tuftool download section, and click the link for your platform.
  - Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

    $ gunzip tuftool-amd64.gz
    $ chmod +x tuftool-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
- Generate a new certificate, along with new public and private keys:

  $ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
  $ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
  $ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
  $ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem

  Replace CHANGE_ME with a new password.

  Important: The certificate and new keys must have unique file names.
- Update the RHTAS Ansible playbook by adding the new private key file name, the new certificate content, and the password to the tas_single_node_fulcio variable:

  tas_single_node_fulcio:
    root_ca: "{{ lookup('file', 'new-fulcio.cert.pem') }}"
    private_key: "{{ lookup('file', 'new-fulcio.pass.pem') }}"
    ca_passphrase: CHANGE_ME

  Replace CHANGE_ME with the password used for generating the new private and public keys.

  Note: Red Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.

- Configure The Update Framework (TUF) service to use the new Fulcio certificate.
  - Set up your shell environment:
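    A minimal sketch of the required exports, assuming the variable names used in the following steps. Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with your values, and adjust the working directory path:

    $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
    $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
    $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
    $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
    $ export WORK="${HOME}/fulcio-rotation"   # assumed working directory
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"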
  - Create a temporary TUF directory structure:
    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

  - Download the TUF contents to the temporary TUF directory structure:

    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"

  - Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:

    $ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem

  - Get the active Fulcio certificate from the managed node:

    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem "${ACTIVE_CERT_NAME}"
  - Expire the old certificate:
  - Add the new Fulcio certificate:
  - Create a compressed archive file of the updated TUF repository:

    $ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo

  - Update the RHTAS Ansible playbook by adding the new compressed archive file content to the tas_single_node_trust_root variable:

    tas_single_node_trust_root:
      full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"

  - Delete the working directory:

    $ rm -r $WORK
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

- Update the cosign configuration with the updated TUF configuration:

  $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
2.3.4. Rotating the Timestamp Authority signer key and certificate chain
You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A workstation with the rsync, openssl, and cosign binaries installed.
- An SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Download the tuftool binary from the local command-line interface (CLI) tool download page to your workstation.

  Note: The URL address is the configured node as defined by the tas_single_node_base_hostname variable. For example, if the value of tas_single_node_base_hostname is example.com, the URL address is https://cli-server.example.com.

  Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

  - From the download page, go to the tuftool download section, and click the link for your platform.
  - Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

    $ gunzip tuftool-amd64.gz
    $ chmod +x tuftool-amd64

  - Move and rename the binary to a location within your $PATH environment:

    $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
- Generate a new certificate chain, and a new signer key.

  Important: The new certificate and keys must have unique file names.

  - Create a temporary working directory:

    $ mkdir certs && cd certs

  - Create the root certificate authority (CA) private key, and set a password:
    $ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
      -keyout rootCA.key.pem -out rootCA.crt.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
      -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"

    Replace CHANGE_ME with a new password.

  - Create the intermediate CA private key and certificate signing request (CSR), and set a password:
    $ openssl req -newkey rsa:2048 -sha256 \
      -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"

    Replace CHANGE_ME with a new password.

  - Sign the intermediate CA certificate with the root CA:
    $ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
      -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
      -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
      -passin pass:"CHANGE_ME"

    Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.

  - Create the leaf CA private key and CSR, and set a password:
    $ openssl req -newkey rsa:2048 -sha256 \
      -keyout leafCA.key.pem -out leafCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"

  - Sign the leaf CA certificate with the intermediate CA:
    $ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
      -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
      -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
      -passin pass:"CHANGE_ME"

    Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.

  - Create the certificate chain by combining the newly created certificates together:
    $ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
- Update the RHTAS playbook with the new certificate chain, private key, and password:

  tas_single_node_tsa:
    certificate_chain: "{{ lookup('file', 'new-tsa.certchain.pem') }}"
    signer_private_key: "{{ lookup('file', 'leafCA.key.pem') }}"
    ca_passphrase: CHANGE_ME

  Replace CHANGE_ME with the leaf CA private key password.

  Note: Red Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.
- Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:

  $ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
  $ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
  $ export TSA_URL=https://tsa.${BASE_HOSTNAME}/api/v1/timestamp
  $ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME

- Configure The Update Framework (TUF) service to use the new TSA certificate chain.
  - Set up your shell environment:
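    A minimal sketch of the required exports, assuming the variable names used in the following steps. Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with your values, and adjust the working directory path:

    $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE
    $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE
    $ export WORK="${HOME}/tsa-rotation"   # assumed working directory
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"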
  - Create a temporary TUF directory structure:
    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

  - Download the TUF contents to the temporary TUF directory structure:

    $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]')
    $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}"
    $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"

  - Expire the old TSA certificate:
  - Add the new TSA certificate:
  - Create a compressed archive file of the updated TUF repository:

    $ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo

  - Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:

    tas_single_node_trust_root:
      full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"

  - Delete the working directory:

    $ rm -r $WORK
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml

- Update the cosign configuration with the updated TUF configuration:

  $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate.
2.4. Using your own certificate authority bundle
You can bring your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- Your CA root certificate.
Procedure
- Open the RHTAS Ansible Playbook for editing.
- Under the tas_single_node_fulcio section, update the trusted_ca variable with your custom CA bundle file:

  ...
  tas_single_node_fulcio:
    trusted_ca: "{{ lookup('file', 'ca-bundle.crt') }}"
  ...

  Important: The certificate file name must be ca-bundle.crt.

- Save, and quit the editor.
- Run the RHTAS Ansible Playbook to apply the changes:

  $ ansible-playbook -i inventory play.yml