Chapter 1. Red Hat OpenShift Container Platform


1.1. Protect your signing data

As a systems administrator, protecting the signing data of your software supply chain is critical in case of data loss caused by hardware failure or accidental deletion.

The OpenShift API Data Protection (OADP) product provides data protection to applications running on Red Hat OpenShift Container Platform, and helps get software developers back to signing and verifying code as quickly as possible. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.

1.1.1. Installing and configuring the OADP operator

The OpenShift API Data Protection (OADP) Operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP Operator to back up and restore your Trusted Artifact Signer data.

Important

This procedure uses Amazon Web Services (AWS) Simple Storage Service (S3) to create a bucket for illustrating how to configure the OADP operator. You can choose to use a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.

Prerequisites

  • Red Hat OpenShift Container Platform 4.16 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • The ability to create an S3-compatible bucket.
  • A workstation with the oc and aws binaries installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  2. Create a new bucket:

    export BUCKET=NEW_BUCKET_NAME
    export REGION=AWS_REGION_ID
    export USER=OADP_USER_NAME
    
    aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
    $ export BUCKET=example-bucket-name
    $ export REGION=us-east-1
    $ export USER=velero
    $
    $ aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
  3. Create a new user:

    $ aws iam create-user --user-name $USER
  4. Create a new policy:

    $ cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
  5. Associate this policy to the new user:

    $ aws iam put-user-policy \
    --user-name $USER \
    --policy-name velero \
    --policy-document file://velero-policy.json
  6. Create an access key:

    $ aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'
  7. Create a credentials file with your AWS secret key information:

    $ cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=$AWS_ACCESS_KEY_ID
    aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
    EOF
  8. Log in to the OpenShift web console with a user that has the cluster-admin role.
  9. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  10. In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
  11. Click the Install button to show the operator details.
  12. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  13. After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
  14. From the OpenShift web console, click the View Operator button.
  15. Click Create instance on the DataProtectionApplication (DPA) tile.
  16. On the Create DataProtectionApplication page, select YAML view.
  17. Edit the following values in the resource file, as shown in the sketch after this list:

    1. Under the metadata section, replace velero-sample with velero.
    2. Under the spec.configuration.nodeAgent section, replace restic with kopia.
    3. Under the spec.configuration.velero section, add resourceTimeout: 10m.
    4. Under the spec.configuration.velero.defaultPlugins section, add - csi.
    5. Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
    6. Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
    7. Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix.
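
    For reference, here is a minimal sketch of what the edited DataProtectionApplication resource might look like after these changes. The field layout follows the operator's default sample resource; the bucket name, prefix, and regions shown are the example values used earlier in this procedure, and your values will differ:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: velero
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          resourceTimeout: 10m
          defaultPlugins:
          - openshift
          - aws
          - csi
      snapshotLocations:
      - velero:
          provider: aws
          config:
            region: us-east-1
            profile: default
      backupLocations:
      - velero:
          provider: aws
          default: true
          credential:
            name: cloud-credentials
            key: cloud
          objectStorage:
            bucket: example-bucket-name
            prefix: velero
          config:
            region: us-east-1
            profile: default
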
  18. Click the Create button.

1.1.2. Backing up your Trusted Artifact Signer data

With the OpenShift API Data Protection (OADP) operator installed and an instance deployed, you can create a volume snapshot resource and a backup resource to back up your Red Hat Trusted Artifact Signer (RHTAS) data.

Prerequisites

  • Red Hat OpenShift Container Platform 4.16 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of the OADP operator.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  2. Find and edit the VolumeSnapshotClass resource:

    $ oc get VolumeSnapshotClass -n openshift-adp
    $ oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp
  3. Update the following values in the resource file; an equivalent one-line command is shown after these sub-steps:

    1. Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
    2. Save your changes, and quit the editor.
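
    Alternatively, you can apply the same label without opening an editor. This one-line command assumes the VolumeSnapshotClass name csi-aws-vsc from the previous step:

    $ oc label volumesnapshotclass csi-aws-vsc velero.io/csi-volumesnapshot-class="true" --overwrite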
  4. Create a one-time, initial Backup job resource:

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: rhtas-backup
      labels:
        velero.io/storage-location: velero-1
      namespace: openshift-adp
    spec:
      hooks: {}
      includedNamespaces:
      - trusted-artifact-signer
      includedResources: []
      excludedResources: []
      snapshotMoveData: true
      storageLocation: velero-1
      ttl: 720h0m0s
    EOF

    By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify which resources to include or exclude by using the includedResources or excludedResources properties, respectively.
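
    For example, a backup limited to secrets and persistent volume claims would set the property like this (a hypothetical fragment of the Backup spec; adjust the resource types to your needs):

    spec:
      includedResources:
      - secrets
      - persistentvolumeclaims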

    Important

    Depending on the storage class of the backup target, persistent volumes must not be actively in use for the backup to succeed.
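
    After the backup job runs, you can confirm that it finished by checking the phase reported by the Backup resource. This check assumes the standard Velero status fields; a successful backup reports Completed:

    $ oc get backup rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'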

  5. Create a new schedule for regular backups to occur:

    $ cat << EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Schedule
    metadata:
      name: BACKUP_JOB_NAME
      namespace: openshift-adp
    spec:
      schedule: USER_DEFINED_SCHEDULE
      template:
        hooks: {}
        includedNamespaces:
        - trusted-artifact-signer
        storageLocation: velero-1
        defaultVolumesToFsBackup: true
        ttl: 720h0m0s
    EOF

    Replace BACKUP_JOB_NAME with a job name, and replace USER_DEFINED_SCHEDULE with a cron-formatted expression for the schedule. For example, a cron-formatted schedule of */10 * * * * backs up the trusted-artifact-signer namespace and its resources every 10 minutes.

    1. You can verify if this schedule is enabled, and when the last backup job ran. For example:

      $ oc get schedule -n openshift-adp
      
      NAME            STATUS    SCHEDULE       LASTBACKUP   AGE   PAUSED
      rhtas-backups   Enabled   0/10 * * * *   3m11s        16m

1.1.3. Restoring your Trusted Artifact Signer data

With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup resource for the RHTAS namespace, you can restore your data to an OpenShift cluster.

Prerequisites

Procedure

  1. Disable the RHTAS operator:

    $ oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators
  2. Create the Restore resource:

    $ cat <<EOF | oc apply -f -
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: rhtas-restore
      namespace: openshift-adp
    spec:
      backupName: rhtas-backup
      includedResources: []
      restoreStatus:
        includedResources:
          - securesign.rhtas.redhat.com
          - trillian.rhtas.redhat.com
          - ctlog.rhtas.redhat.com
          - fulcio.rhtas.redhat.com
          - rekor.rhtas.redhat.com
          - tuf.rhtas.redhat.com
          - timestampauthority.rhtas.redhat.com
      excludedResources:
      - pod
      - deployment
      - nodes
      - route
      - service
      - replicaset
      - events
      - cronjob
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      - pods
      - deployments
      restorePVs: true
      existingResourcePolicy: update
    EOF
  3. If you are restoring your RHTAS data to a different OpenShift cluster, do the following steps.

    1. Delete the secret for the Trillian database:

      $ oc delete secret securesign-sample-trillian-db-tls
      $ oc delete pod trillian-db-xxx
      Note

      The RHTAS operator recreates the secret and restarts the pod.

    2. Run the restoreOwnerReferences.sh script.
  4. Enable the RHTAS operator:

    $ oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators
    Important

    Starting the RHTAS operator immediately after starting the restore ensures that the persistent volume is claimed.
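
    To confirm that the restore finished, you can check the phase reported by the Restore resource. This check assumes the standard Velero status fields; a successful restore reports Completed:

    $ oc get restore rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'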

1.2. The Update Framework

As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important in helping you maintain a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.

Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This HTTP server hosts the TUF repository of the individual targets.

By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.

Starting with RHTAS version 1.4, we introduced a signing configuration URL mode. This mode gives you the ability to use either an external URL address, such as an ingress or route, or an internal URL address, such as a Kubernetes service. This is controlled by the signingConfigURLMode field for a Custom Resource (CR), or by the tas_single_node_signing_config_url_mode option in an Ansible Playbook. For deployments on Red Hat OpenShift and Red Hat Enterprise Linux, by default, the signing configuration URL mode is set to external.

Red Hat OpenShift

...
spec:
  tuf:
    signingConfigURLMode: external
...

Red Hat Enterprise Linux

...
tas_single_node_signing_config_url_mode: external
...

By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files at least once every 52 weeks, before they expire. Red Hat recommends updating the metadata files more often than once a year.

This procedure walks you through refreshing the root and non-root metadata files.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Configure your shell environment:

    $ export WORK="${HOME}/trustroot-example"
    $ export ROOT="${WORK}/root/root.json"
    $ export KEYDIR="${WORK}/keys"
    $ export INPUT="${WORK}/input"
    $ export TUF_REPO="${WORK}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
    $ export TIMESTAMP_EXPIRATION="in 10 days"
    $ export SNAPSHOT_EXPIRATION="in 26 weeks"
    $ export TARGETS_EXPIRATION="in 26 weeks"
    $ export ROOT_EXPIRATION="in 26 weeks"

    Set the expiration durations according to your requirements.

  5. Create a temporary TUF directory structure:

    $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
  6. Download the TUF contents to the temporary TUF directory structure:

    $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
    $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
    $ cp "${TUF_REPO}/root.json" "${ROOT}"
  7. You can update the timestamp, snapshot, and targets metadata all in one command:

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --key "${KEYDIR}/snapshot.pem" \
      --key "${KEYDIR}/targets.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
      --targets-expires "${TARGETS_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"
    Note

    You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can just update the timestamp metadata file by running the following command:

    $ tuftool update \
      --root "${ROOT}" \
      --key "${KEYDIR}/timestamp.pem" \
      --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
      --outdir "${TUF_REPO}" \
      --metadata-url "file://${TUF_REPO}"
  8. Only update the root expiration date if it is about to expire:

    $ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"
    Note

    You can skip this step if the root file is not close to expiring.

  9. Update the root version:

    $ tuftool root bump-version "${ROOT}"
  10. Sign the root metadata file again:

    $ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"
  11. Set the new root version, and copy the root metadata file in place:

    $ export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
    $ cp "${ROOT}" "${TUF_REPO}/root.json"
    $ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"
  12. Upload these changes to the TUF server:

    $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
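
    Optionally, you can confirm the new expiration dates by reading the expires field from the refreshed metadata files in your local copy of the repository. This check assumes the jq binary is installed on your workstation:

    $ jq -r '.signed.expires' "${TUF_REPO}/timestamp.json" "${TUF_REPO}/snapshot.json" "${TUF_REPO}/targets.json"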

After upgrading Red Hat Trusted Artifact Signer (RHTAS) from version 1.3.x to version 1.4, an automated job runs to update The Update Framework (TUF) repository. This auto-update job fails if you store your signing keys outside of Red Hat OpenShift. In that case, you must update and migrate the TUF repository manually with this procedure.

Important

This update process leaves all the cryptographic keys untouched, preserving the expired signer keys. Artifacts signed before and after upgrading RHTAS still remain verifiable. This procedure only modifies the TUF distribution metadata, and the trusted_root.json file.

Important

This procedure requires downtime for the TUF service, and any signed artifact verification will fail during this downtime.

Prerequisites

  • The RHTAS Operator running version 1.4.
  • Access to your Red Hat OpenShift cluster with a user that has the cluster-admin role.
  • A workstation with the oc, cosign 3.04 or later, and tuftool binaries installed.
  • A full backup of your TUF repository, including the metadata and the cryptographic keys.
  • Access to your signing keys.

Procedure

  1. Verify if you need to update and migrate the TUF repository manually.

    1. Check the TUF resource status, and the tuf-version annotation:

      $ oc get tuf securesign-sample

      If you see a status of Ready, then this means no manual update is needed. If you see a status of Failure, then a manual update is needed.

      $ oc get tuf securesign-sample -o jsonpath='{.metadata.annotations}'

      If you see "rhtas.redhat.com/tuf-version":"v1" in the output, then no manual update is needed. If the annotation is missing, or has a different value, then a manual update is needed.

    2. Check for TUF error messages:

      $ oc get tuf securesign-sample -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

      If you see error messages, such as the following, then you must update the TUF repository manually:

      cannot migrate TUF: root key secret not found

      or

      cannot migrate TUF: root key secret not specified
      Note

      Seeing these error messages does not mean a system outage. The TUF server continues to operate normally, and still handles requests. These error messages mean that the TUF resource is not automatically reconciled by the RHTAS Operator until the manual update and migration is done.

  2. Configure your shell environment:

    $ export TUF_URL=$(oc get tuf -o jsonpath='{.items[0].status.url}')
    $ export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')
    $ export FULCIO_URL=$(oc get fulcio -o jsonpath='{.items[0].status.url}')
    $ export TSA_URL=$(oc get timestampauthority -o jsonpath='{.items[0].status.url}')/api/v1/timestamp
    $ export CTLOG_URL=$(oc get svc ctlog -o jsonpath='{"https://"}{.metadata.name}{"."}{.metadata.namespace}{".svc/"}')"$(oc get securesign securesign-sample  -o jsonpath='{.spec.tuf.ctlog.prefix}')"
    $ export OIDC_ISSUERS=$(oc get securesign securesign-sample -o go-template='{{range $i, $val := .spec.fulcio.config.OIDCIssuers}}{{if $i}},{{end}}{{$val.Issuer}}{{end}}')
    $ export OPERATOR_NAME="rhtas.redhat.com"
    $ export WORKDIR="${HOME}/trustroot-migration"
    $ export TUF_REPO="${WORKDIR}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"
    $ export KEYDIR="PATH_TO_YOUR_KEYS"

    Replace PATH_TO_YOUR_KEYS with the path to your signing keys.

  3. Make a backup of your TUF repository:

    $ mkdir -p /tmp/tuf-backup
    $ oc rsync $TUF_SERVER_POD:/var/www/html/ /tmp/tuf-backup/
  4. Download the TUF repository.

    1. Create a temporary TUF directory structure:

      $ mkdir -p "${TUF_REPO}"
    2. Copy the TUF contents to the temporary TUF directory structure:

      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
  5. Create the update and migration script.

    1. Create an empty file:

      $ touch tuf_migration.sh
    2. Open the file for editing.
    3. Copy the script contents and paste them into the tuf_migration.sh file.
    4. Save and close the file.
    5. Make the script executable:

      $ chmod +x tuf_migration.sh
    6. Run the script to update the TUF repository:

      $ ./tuf_migration.sh

      Wait for the script to complete. Once it finishes, you should see the message Re-signing and upload complete.

  6. Upload the changes back to the TUF server:

    $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
  7. Add an annotation to the TUF resource:

    $ oc annotate tuf securesign-sample rhtas.redhat.com/tuf-version=v1

    Now the RHTAS Operator can automatically reconcile the TUF deployment.

  8. Verify that the TUF resource is in a Ready status:

    $ oc get tuf securesign-sample

1.3. Rotate your certificates and keys

As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with new ones for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:

  • Rekor
  • Certificate Transparency log
  • Fulcio
  • Timestamp Authority

1.3.1. Rotating the Rekor signer key

You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Rekor signer key still allows you to verify artifacts signed by the old key.

Important

This procedure requires downtime to the Rekor service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the rekor-cli binary from the OpenShift cluster to your workstation.

    1. Login to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the rekor-cli download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      $ gunzip rekor-cli-amd64.gz
      $ chmod +x rekor-cli-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
  2. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    The tuftool binary is only available for Linux operating systems.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  3. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  4. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  5. Get the Rekor URL:

    $ export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')
  6. Get the log tree identifier for the active shard:

    $ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
  7. Set the log tree to the DRAINING state:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

    While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.

  8. Freeze the log tree:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  9. Get the length of the frozen log tree:

    $ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
  10. Get Rekor’s public key for the old shard:

    $ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
  11. Create a new log tree:

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --display_name=rekor-tree --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt)

    Now you have two log trees, one frozen tree, and a new tree that will become the active shard.

  12. Create a new private key:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-rekor.pem
    Important

    The new key must have a unique file name.

  13. Create a new secret resource with the new signer key:

    $ oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem
  14. Update the Securesign Rekor configuration with the new tree identifier and the old sharding information:

    $ read -r -d '' SECURESIGN_PATCH_1 <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/rekor/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "add",
            "path": "/spec/rekor/sharding/-",
            "value": {
                "treeID": $OLD_TREE_ID,
                "treeLength": $OLD_SHARD_LENGTH,
                "encodedPublicKey": "$OLD_PUBLIC_KEY"
            }
        },
        {
            "op": "replace",
            "path": "/spec/rekor/signer/keyRef",
            "value": {"name": "rekor-signer-key", "key": "private"}
        }
    ]
    EOF
    Note

    If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a new separate update to remove it:

    $ read -r -d '' SECURESIGN_PATCH_2 <<EOF
    [
        {
            "op": "remove",
            "path": "/spec/rekor/signer/keyPasswordRef"
        }
    ]
    EOF

    Apply this update after applying the first update.

  15. Update the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_1"
  16. Wait for the Rekor server to redeploy with the new signer key:

    $ oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready
  17. Get the new public key:

    $ export NEW_KEY_NAME=new-rekor.pub
    $ curl $(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o $NEW_KEY_NAME
  18. Configure The Update Framework (TUF) service to use the new Rekor public key.

    1. Configure your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active Rekor signer key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Rekor signer key file name, for example, rekor.pub. Set an environment variable with this active Rekor signer key file name:

      $ export ACTIVE_KEY_NAME=rekor.pub
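
      If you prefer to list the target file names from the command line instead of opening the file, you can query the targets map in the latest targets metadata. This is a sketch that assumes the jq binary is installed and that the latest targets file is 1.targets.json:

      $ jq -r '.signed.targets | keys[]' "${TUF_REPO}/1.targets.json"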
    5. Update the Rekor signer key with the old public key:

      $ echo $OLD_PUBLIC_KEY | base64 -d > $ACTIVE_KEY_NAME
    6. Expire the old Rekor signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${ACTIVE_KEY_NAME}" \
        --rekor-uri "${REKOR_URL}" \
        --rekor-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new Rekor signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-rekor-target "${NEW_KEY_NAME}" \
        --rekor-uri "${REKOR_URL}" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    9. Delete the working directory:

      $ rm -r $WORK
  19. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
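
    As an optional sanity check, you can confirm that the active shard now reports the new tree identifier by rerunning the loginfo query used earlier in this procedure:

    $ rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID

    The output should match the value of $NEW_TREE_ID.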

1.3.2. Rotating the Certificate Transparency log signer key

You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Make a backup of the current CT log configuration, and keys:

    $ export SERVER_CONFIG_NAME=$(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}')
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem
    $ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem
  5. Capture the current tree identifier:

    $ export OLD_TREE_ID=$(oc get ctlog -o jsonpath='{.items[0].status.treeID}')
  6. Set the log tree to the DRAINING state:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

    While draining, the log tree does not accept any new entries. Watch and wait for the queue to empty.

    Important

    You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.

  7. Once the queue has been fully drained, freeze the log:

    $ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  8. Create a new Merkle tree, and capture the new tree identifier:

    $ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --display_name=ctlog-tree --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt)
  9. Generate a new certificate, along with new public and private keys:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
    $ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
    $ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.

  10. Update the CT log configuration.

    1. Open the config.txtpb file for editing.
    2. For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old path to the private key with /ctfe-keys/private-0:

      ...
      log_configs:{
        # frozen log
        config:{
          log_id:2066075212146181968
          prefix:"trusted-artifact-signer-0"
          roots_pem_file:"/ctfe-keys/fulcio-0"
          private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private-0" password:"Example123"}}
          public_key:{der:"0Y0\x13\x06\x07*\x86H\xce=\x02\x01\x06\x08*\x86H\xce=\x03\x01\x07\x03B\x00\x04)'.\xffUJ\xe2s)\xefR\x8a\xfcO\xdcewȶy\xa7\x9d<\x13\xb0\x1c\x99\x96\xe4'\xe3v\x07:\xc8I+\x08J\x9d\x8a\xed\x06\xe4\xaeI:q\x98\xf4\xbc<o4VD\x0cr\xf9\x9c\xecxT\x84"}
          not_after_limit:{seconds:1728056285 nanos:012111000}
          ext_key_usages:"CodeSigning"
          log_backend_name:"trillian"
        }
      Note

      You can get the current time value for seconds and nanoseconds by running the following commands: date +%s and date +%N.

      Important

      The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.

    3. Copy and paste the frozen log config block, appending it to the configuration file to create a new entry.
    4. Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to /ctfe-keys/private, remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range:

      ...
      log_configs:{
        # frozen log
        ...
        # new active log
        config:{
      	  log_id: NEW_TREE_ID
      	  prefix:"trusted-artifact-signer"
      	  roots_pem_file:"/ctfe-keys/fulcio-0"
      	  private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private" password:"CHANGE_ME"}}
      	  ext_key_usages:"CodeSigning"
      	  not_after_start:{seconds:1713201754 nanos:155663000}
      	  log_backend_name:"trillian"
        }

      Add the NEW_TREE_ID, and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.

      Important

      The not_after_start field defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time.

  11. Create a new secret resource:

    $ oc create secret generic ctlog-config \
    --from-file=config=config.txtpb \
    --from-file=private=new-ctlog.pass.pem \
    --from-file=public=new-ctlog-public.pem \
    --from-file=fulcio-0=fulcio-0.pem \
    --from-file=private-0=private.pem \
    --from-file=public-0=public.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with the new private key password.

  12. Configure The Update Framework (TUF) service to use the new CT log public key.

    1. Configure your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub. Set an environment variable with this active CT log public key file name:

      $ export ACTIVE_CTFE_NAME=ctfe.pub
    5. Extract the active CT log public key from OpenShift:

      $ oc get secret $(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > $ACTIVE_CTFE_NAME
    6. Expire the old CT log signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "$ACTIVE_CTFE_NAME" \
        --ctlog-uri "https://ctlog.rhtas" \
        --ctlog-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new CT log signer key:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-ctlog-target "new-ctlog-public.pem" \
        --ctlog-uri "https://ctlog.rhtas" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
  13. Update the Securesign CT log configuration with the new tree identifier:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/ctlog/serverConfigRef",
            "value": {"name": "ctlog-config"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/treeID",
            "value": $NEW_TREE_ID
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyRef",
            "value": {"name": "ctlog-config", "key": "private"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/privateKeyPasswordRef",
            "value": {"name": "ctlog-config", "key": "password"}
        },
        {
            "op": "replace",
            "path": "/spec/ctlog/publicKeyRef",
            "value": {"name": "ctlog-config", "key": "public"}
        }
    ]
    EOF
  14. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  15. Wait for the CT log server to redeploy:

    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
  16. Delete the working directory:

    $ rm -r $WORK
  17. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new CT log signer key.
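
    As an optional sanity check, you can confirm that the CT log resource now reports the new tree identifier. This reuses the jsonpath query from the earlier capture step; the status can take a moment to update after the patch:

    $ oc get ctlog -o jsonpath='{.items[0].status.treeID}'

    The output should match the value of $NEW_TREE_ID.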

1.3.3. Rotating the Fulcio certificate

You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc, openssl, and cosign binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Generate a new certificate, along with new public and private keys:

    $ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
    $ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
    $ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
    $ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem

    Replace CHANGE_ME with a new password.

    Important

    The certificate and new keys must have unique file names.

  5. Create a new secret:

    $ oc create secret generic fulcio-config \
    --from-file=private=new-fulcio.pass.pem \
    --from-file=cert=new-fulcio.cert.pem \
    --from-literal=password=CHANGE_ME

    Replace CHANGE_ME with a new password.

    Note

    The password here must match the password used for generating the new private and public keys.

  6. Configure The Update Framework (TUF) service to use the new Fulcio certificate.

    1. Set up your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export FULCIO_URL=$(oc get fulcio securesign-sample -o jsonpath='{.status.url}')
      $ export OIDC_URI=OIDC_ISSUER_URL
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"

      Replace OIDC_ISSUER_URL with the OIDC issuer URL configured for Fulcio. The OIDC_URI variable is referenced later when adding the new Fulcio certificate.
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:

      $ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
    5. Extract the active Fulcio certificate from OpenShift:

      $ oc get secret $(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > $ACTIVE_CERT_NAME
    6. Expire the old certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "$ACTIVE_CERT_NAME" \
        --fulcio-uri "${FULCIO_URL}" \
        --fulcio-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    7. Add the new Fulcio certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-fulcio-target "new-fulcio.cert.pem" \
        --fulcio-uri "${FULCIO_URL}" \
        --oidc-uri "${OIDC_URI}" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    8. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    9. Delete the working directory:

      $ rm -r $WORK
  7. Update the Securesign Fulcio configuration:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyRef",
        "value": {"name": "fulcio-config", "key": "private"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/privateKeyPasswordRef",
        "value": {"name": "fulcio-config", "key": "password"}
    },
    {
        "op": "replace",
        "path": "/spec/fulcio/certificate/caRef",
        "value": {"name": "fulcio-config", "key": "cert"}
    },
    {
        "op": "replace",
        "path": "/spec/ctlog/rootCertificates",
        "value": [{"name": "fulcio-config", "key": "cert"}]
    }
    ]
    EOF
  8. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  9. Wait for the Fulcio server to redeploy:

    $ oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready
    $ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
  10. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
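
    Optionally, you can inspect the subject and expiration date of the new certificate before signing with it by running openssl locally:

    $ openssl x509 -in new-fulcio.cert.pem -noout -subject -enddate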

1.3.4. Rotating the Timestamp Authority signer key and certificate chain

You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc and openssl binaries installed.

Procedure

  1. Download the tuftool binary from the OpenShift cluster to your workstation.

    Important

    Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.

    1. From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
    2. Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:

      $ gunzip tuftool-amd64.gz
      $ chmod +x tuftool-amd64
    3. Move and rename the binary to a location within your $PATH environment:

      $ sudo mv tuftool-amd64 /usr/local/bin/tuftool
  2. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.

  3. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  4. Generate a new certificate chain, and a new signer key.

    Important

    The new certificate and keys must have unique file names.

    1. Create a temporary working directory:

      $ mkdir certs && cd certs
    2. Create the root certificate authority (CA) private key, and set a password:

      $ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
      -keyout rootCA.key.pem -out rootCA.crt.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
      -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"

      Replace CHANGE_ME with a new password.

    3. Create the intermediate CA private key and certificate signing request (CSR), and set a password:

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"

      Replace CHANGE_ME with a new password.

    4. Sign the intermediate CA certificate with the root CA:

      $ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
      -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
      -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
      -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.

    5. Create the leaf CA private key and CSR, and set a password:

      $ openssl req -newkey rsa:2048 -sha256 \
      -keyout leafCA.key.pem -out leafCA.csr.pem \
      -passout pass:"CHANGE_ME" \
      -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"
    6. Sign the leaf CA certificate with the intermediate CA:

      $ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
        -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
        -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
        -passin pass:"CHANGE_ME"

      Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.

    7. Create the certificate chain by combining the newly created certificates together:

      $ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
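
      Optionally, you can list the subject and issuer of each certificate in the new chain to confirm its order (leaf, then intermediate, then root) by running openssl locally:

      $ openssl crl2pkcs7 -nocrl -certfile new-tsa.certchain.pem | openssl pkcs7 -print_certs -noout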
  5. Create a new secret resource with the signer key:

    $ oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem
  6. Create a new secret resource with the new certificate chain:

    $ oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-tsa.certchain.pem
  7. Create a new secret resource for the password:

    $ oc create secret generic rotated-password --from-literal=rotated-password=CHANGE_ME

    Replace CHANGE_ME with the intermediate CA private key password.

  8. Find your active TSA certificate chain file name and the TSA URL, and configure your shell environment with these values:

    $ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
    $ export TSA_URL=$(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp
    $ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME
  9. Update the Securesign TSA configuration:

    $ read -r -d '' SECURESIGN_PATCH <<EOF
    [
        {
            "op": "replace",
            "path": "/spec/tsa/signer/certificateChain",
            "value": {
                "certificateChainRef" : {"name": "rotated-cert-chain", "key": "rotated-cert-chain"}
            }
        },
        {
            "op": "replace",
            "path": "/spec/tsa/signer/file",
            "value": {
                    "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"},
                    "passwordRef": {"name": "rotated-password", "key": "rotated-password"}
                }
        }
    ]
    EOF
  10. Patch the Securesign instance:

    $ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
  11. Wait for the TSA server to redeploy with the new signer key and certificate chain:

    $ oc get pods -w -l app.kubernetes.io/name=tsa-server
  12. Get the new certificate chain:

    $ export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pem
    $ curl $TSA_URL/certchain -o $NEW_CERT_CHAIN_NAME
  13. Configure The Update Framework (TUF) service to use the new TSA certificate chain.

    1. Set up your shell environment:

      $ export WORK="${HOME}/trustroot-example"
      $ export ROOT="${WORK}/root/root.json"
      $ export KEYDIR="${WORK}/keys"
      $ export INPUT="${WORK}/input"
      $ export TUF_REPO="${WORK}/tuf-repo"
      $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"
    2. Create a temporary TUF directory structure:

      $ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"
    3. Download the TUF contents to the temporary TUF directory structure:

      $ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
      $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
      $ cp "${TUF_REPO}/root.json" "${ROOT}"
    4. Expire the old TSA certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --tsa-status "Expired" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    5. Add the new TSA certificate:

      $ tuftool rhtas \
        --root "${ROOT}" \
        --key "${KEYDIR}/snapshot.pem" \
        --key "${KEYDIR}/targets.pem" \
        --key "${KEYDIR}/timestamp.pem" \
        --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
        --tsa-uri "$TSA_URL" \
        --outdir "${TUF_REPO}" \
        --metadata-url "file://${TUF_REPO}"
    6. Upload these changes to the TUF server:

      $ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
    7. Delete the working directory:

      $ rm -r $WORK
  14. Update the cosign configuration with the updated TUF configuration:

    $ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

    Now, you are ready to sign and verify your artifacts by using the new TSA signer key and certificate.
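
    For example, you can pass the TSA URL defined earlier in this procedure to cosign when signing a container image. This is a minimal sketch; IMAGE_REFERENCE is a placeholder for your own container image, and the command assumes you are already authenticated to your OIDC provider and container registry:

    $ cosign sign --timestamp-server-url=$TSA_URL IMAGE_REFERENCE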

1.4. The Policy Controller

As a systems administrator, it is important to control how and when objects get created within your OpenShift Container Platform environment, and within your software supply chain. Starting with Red Hat Trusted Artifact Signer (RHTAS) 1.3, you can run the Policy Controller admission controller to enforce policies by using verifiable supply-chain metadata. Once you install the Policy Controller Operator and create the required resources, you can start enforcing your security policies across your software supply chain.

The Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator is a Red Hat OpenShift Container Platform admission controller designed to enforce policies by using supply-chain metadata. Essentially, the RHTAS Policy Controller acts as a gatekeeper for your Red Hat OpenShift cluster by making deployed workloads adhere to your security policies.

The RHTAS Policy Controller has these key features:

Easy integration with the RHTAS service
The Policy Controller Operator uses the established, trusted, and transparent services provided by RHTAS, such as Rekor’s transparency log, and Fulcio’s short-lived certificates for stronger signature validation. You can also take advantage of Trusted Artifact Signer’s secure Trust Root as a source of public keys and certificates used in artifact verification, along with auditing Rekor’s transparency log.
Verification of container image signatures
The RHTAS Policy Controller resolves container image tags to validate that the container image being run does not differ from what was signed by the RHTAS service. You can automatically verify signatures and attestations for container images, enforce these checks on a per-namespace basis, and create multiple policies to fit your security needs. You can create custom resources, such as ClusterImagePolicy, to define the rules for validating container images.
Defining and enforcing workload policies
You can define and enforce policies to restrict what container images can run in your Red Hat OpenShift cluster. For example, you can allow only images that match a certain signing key to run, and require verified attestations. You can choose to enforce strict policies, or use warning mode to better understand how a policy will impact your environment, as shown in the sketch after this list. You can also define and enforce policies based on other supply chain metadata.
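
The following is a minimal sketch of a warning-mode policy, assuming the Policy Controller's mode field; the resource name and placeholder values are illustrative only, and a complete keyless policy is shown later in this chapter:

apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: warn-only-policy-example
spec:
  mode: warn
  images:
    - glob: "**"
  authorities:
    - keyless:
        url: FULCIO_URL
        identities:
          - issuer: OIDC_ISSUER_URL
            subject: SUBJECT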

1.4.2. Installing the Policy Controller Operator

Before you can start creating and enforcing policies, you need to install the Policy Controller Operator by using the Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to the OpenShift web console with the cluster-admin role.

Procedure

  1. Log in to the OpenShift web console with a user that has the cluster-admin role.
  2. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  3. In the search field, type policy-controller, and click the Policy Controller Operator tile provided by Red Hat.
  4. Click the Install button to show the operator details.
  5. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  6. Once the installation finishes, you can create the Policy Controller resources.

1.4.3. Creating the Policy Controller resources

After installing the Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator, you need to create three new resources. These resources are: the base Policy Controller resource, the cluster image policy resource, and the Trust Root resource. This procedure guides you on creating a basic set of these resources.

Note

By default the Policy Controller resyncs the cluster image policies every 10 hours.

Prerequisites

  • Installation of the RHTAS Policy Controller Operator.
  • A workstation with the oc, curl, and tuftool binaries installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Create and switch to the policy-controller-operator namespace:

    $ oc new-project policy-controller-operator ; oc project policy-controller-operator
  3. Create a basic Policy Controller resource.

    1. Configure the Policy Controller to watch your namespaces that match the defined label selector under spec.policy-controller.webhook.namespaceSelector.matchExpressions:

      ...
      spec:
        policy-controller:
          ...
          webhook:
            ...
            namespaceSelector:
              matchExpressions:
                - key: policy.rhtas.com/include
                  operator: In
                  values: ["true"]
      ...
      $ cat <<EOF | oc apply -f -
      apiVersion: rhtas.charts.redhat.com/v1alpha1
      kind: PolicyController
      metadata:
        name: policycontroller-sample
      spec:
        policy-controller:
          cosign:
            webhookName: "policy.rhtas.com"
          webhook:
            name: webhook
            extraArgs:
              webhook-name: policy.rhtas.com
              mutating-webhook-name: defaulting.clusterimagepolicy.rhtas.com
              validating-webhook-name: validating.clusterimagepolicy.rhtas.com
            failurePolicy: Fail
            namespaceSelector:
              matchExpressions:
                - key: policy.rhtas.com/include
                  operator: In
                  values: ["true"]
            webhookNames:
              defaulting: "defaulting.clusterimagepolicy.rhtas.com"
              validating: "validating.clusterimagepolicy.rhtas.com"
      EOF
      Important

      You must create this resource in the policy-controller-operator namespace.

    2. Add the policy.rhtas.com/include: "true" label to the namespace that you want watched by the Policy Controller:

      apiVersion: v1
      kind: Namespace
      metadata:
        labels:
          policy.rhtas.com/include: "true"
        name: example-namespace
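
      Alternatively, you can add the same label from the command line instead of editing the namespace manifest; this is an equivalent, optional step:

      $ oc label namespace example-namespace policy.rhtas.com/include=true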
    3. If you have a custom Certificate Authority (CA) bundle or self-signed certificates, then you can add your ConfigMap name and key under the spec.policy-controller.webhook.registryCaBundle section of the Policy Controller resource:

      ...
      spec:
        policy-controller:
          ...
          webhook:
            registryCaBundle:
              name: CONFIGMAP_NAME
              key: CA_BUNDLE_KEY
      ...
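
      If you do not already have such a ConfigMap, you can create one from a PEM bundle file. This is a minimal sketch; the ConfigMap name registry-ca-bundle and the file name ca-bundle.pem are placeholders that you would then reference as the name and key values:

      $ oc create configmap registry-ca-bundle --from-file=ca-bundle.pem -n policy-controller-operator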
  4. Create a Trust Root resource. You have three options for creating the Trust Root resource: a custom TUF repository, using your own keys, or using a serialized TUF root.

    1. Configure these environment variables from the RHTAS services:

      $ export TUF_URL="$(oc -n trusted-artifact-signer get tuf -o jsonpath='{.items[0].status.url}')"
      $ export BASE64_TUF_ROOT="$(curl -fsSL "$TUF_URL/root.json" | base64 -w0)"
      $ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')"
      $ export CTLOG_URL="http://ctlog.trusted-artifact-signer.svc.cluster.local"
      $ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')"
      $ export TSA_URL="$(oc -n trusted-artifact-signer get timestampAuthorities -o jsonpath='{.items[0].status.url}')"
    2. Option 1. Create the TrustRoot resource for a custom TUF repository:

      $ cat <<EOF | oc apply -f -
      apiVersion: policy.sigstore.dev/v1alpha1
      kind: TrustRoot
      metadata:
        name: trust-root
      spec:
        remote:
          mirror: $TUF_URL
          root: |
            $BASE64_TUF_ROOT
      EOF
    3. Option 2. Create a Trust Root with your own keys.

      1. Create and apply the TrustRoot resource using this template:

        apiVersion: policy.sigstore.dev/v1alpha1
        kind: TrustRoot
        metadata:
          name: trust-root
        spec:
          sigstoreKeys:
            certificateAuthorities:
            - subject:
                organization: fulcio-organization
                commonName: fulcio-common-name
              uri: $FULCIO_URL
              certChain: |-
                FULCIO_CERT_CHAIN
            ctLogs:
            - baseURL: $CTLOG_URL
              hashAlgorithm: sha-256
              publicKey: |-
                CTFE_PUBLIC_KEY
            tLogs:
            - baseURL: $REKOR_URL
              hashAlgorithm: sha-256
              publicKey: |-
                REKOR_PUBLIC_KEY
            timestampAuthorities:
            - subject:
                organization: tsa-organization
                commonName: tsa-common-name
              uri: $TSA_URL
              certChain: |-
                TSA_CERT_CHAIN
        Note

        Substitute the public keys and certificate chain values with your specific values for your RHTAS environment.
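
        For example, you can retrieve the Rekor public key from the Rekor API, and the TSA certificate chain from the Timestamp Authority. These commands are a sketch, and assume the service URLs set earlier are reachable from your workstation:

        $ curl -o rekor-pubkey.pem $REKOR_URL/api/v1/log/publicKey
        $ curl -o tsa-certchain.pem $TSA_URL/api/v1/timestamp/certchain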

    4. Option 3. Create a Trust Root for a serialized TUF root.

      1. Create a temporary directory to contain a clone of your TUF root:

        $ mkdir -p tuf-repo
      2. Download and clone the TUF repository:

        $ curl -s $TUF_URL/root.json > root.json
        $ tuftool clone --metadata-url=$TUF_URL --metadata-dir=tuf-repo --targets-url=$TUF_URL/targets --targets-dir=tuf-repo/targets --root=root.json
      3. Archive and encode the TUF repository:

        $ tar -C ./tuf-repo -czf tuf-repo.tgz .
        $ export MIRROR_FS=$(base64 -w0 tuf-repo.tgz)
      4. Create the TrustRoot resource:

        $ cat <<EOF | oc apply -f -
        apiVersion: policy.sigstore.dev/v1alpha1
        kind: TrustRoot
        metadata:
          name: trust-root
        spec:
          repository:
            root: |-
              $BASE64_TUF_ROOT
            mirrorFS: |-
              $MIRROR_FS
        EOF
  5. Create a basic Policy Controller cluster image policy resource.

    1. Configure these environment variables for Fulcio, Rekor, the Trust Root, and the OpenID Connect (OIDC) issuer and subject:

      $ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')"
      $ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')"
      $ export TRUST_ROOT_RESOURCE="trust-root"
      $ export OIDC_ISSUER_URL="https://ISSUER_URL"
      $ export OIDC_SUBJECT="SUBJECT"
    2. Create the ClusterImagePolicy resource:

      $ cat <<EOF | oc apply -f -
      apiVersion: policy.sigstore.dev/v1beta1
      kind: ClusterImagePolicy
      metadata:
        name: cluster-image-policy
      spec:
        images:
          - glob: "**"
        authorities:
          - keyless:
              url: $FULCIO_URL
              trustRootRef: $TRUST_ROOT_RESOURCE
              identities:
                - issuer: $OIDC_ISSUER_URL
                  subject: $OIDC_SUBJECT
            ctlog:
              url: $REKOR_URL
              trustRootRef: $TRUST_ROOT_RESOURCE
            rfc3161timestamp:
              trustRootRef: $TRUST_ROOT_RESOURCE
      EOF
      Note

      The glob value of ** evaluates all container images.
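
      To see the policy in action, you can try running an image in a namespace labeled with policy.rhtas.com/include: "true". This is an optional check, and the image reference is only an example; admission is denied unless the image satisfies the policy:

      $ oc run policy-test --image=registry.access.redhat.com/ubi9/ubi-minimal -n example-namespace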

1.5. Signing and verifying AI/ML models

As a systems administrator, you can use Red Hat Trusted Artifact Signer (RHTAS) to sign and verify artificial intelligence (AI) and machine learning (ML) models. You can integrate AI/ML model signing and verification into your Continuous Integration and Continuous Deployment (CI/CD) pipelines, or use the command-line interface (CLI). Doing this enhances the security of your software supply chain workloads running on Red Hat OpenShift by ensuring that only valid AI/ML models are used.

Important

Signing and verifying AI/ML models by using the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

Before signing artificial intelligence (AI) and machine learning (ML) models with Red Hat Trusted Artifact Signer (RHTAS), you need to generate a client trust configuration that uses The Update Framework (TUF) Trust Root for your RHTAS environment.

Important

On RHTAS 1.2 and below, the Rekor key is configured to use the SHA384 hash algorithm. You must rotate the Rekor signer key to use SHA256. If you do not change the hash algorithm for Rekor, then verifying artifacts causes mismatch errors.

For more information about this issue, see the RHTAS Release Notes.

Prerequisites

  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Configure your shell environment:

    $ export WORK="${HOME}/trustroot-example"
    $ export SIGNED_TRUST_ROOT="${WORK}/root/trusted_root.json"
    $ export TUF_REPO="${WORK}/tuf-repo"
    $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}' -n trusted-artifact-signer)"
    $ export CA_URL=$(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)
    $ export TLOG_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)
    $ export OIDC_URL="OIDC_ISSUER_URL"

    Replace OIDC_ISSUER_URL with your OIDC provider’s URL address.

  3. Create the temporary TUF directories:

    $ mkdir -p "${WORK}/root/" "${TUF_REPO}"
  4. Download the signed target trust root file to the temporary TUF directories:

    $ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}" -n trusted-artifact-signer
    $ cp "${TUF_REPO}/targets/DIGEST.trusted_root.json" "${SIGNED_TRUST_ROOT}"

    An example signed target trust root file name looks similar to this format, c03afd04e353889093e5b16b019656b23a57.trusted_root.json, where your DIGEST value would be different.
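
    Because the DIGEST value differs in every deployment, you can optionally let the shell resolve the file name for you. This assumes a single trusted_root.json target exists in the repository:

    $ cp "$(ls "${TUF_REPO}"/targets/*.trusted_root.json | head -n 1)" "${SIGNED_TRUST_ROOT}"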

  5. Create a script for making the client trust configuration used by the CLI:

    $ cat > make_trust_config.sh <<'EOF'
    #!/bin/bash
    
    # Usage: ./make_trust_config.sh <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl]
    
    if [ "$#" -lt 2 ]; then
        echo "Usage: $0 <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl]"
        exit 1
    fi
    
    INPUT_FILE="$1"
    OUTPUT_FILE="$2"
    CA_URL=${3:-${CA_URL:-""}}
    OIDC_URL=${4:-${OIDC_URL:-""}}
    TLOG_URL=${5:-${TLOG_URL:-""}}
    
    # Check for jq
    if ! command -v jq &> /dev/null; then
        echo "Error: 'jq' is required but not installed."
        exit 1
    fi
    
    jq -n \
      --argjson trustedRoot "$(cat $INPUT_FILE)" \
      --arg caUrl "$CA_URL" \
      --arg oidcUrl "$OIDC_URL" \
      --arg tlogUrl "$TLOG_URL" \
      '{
        mediaType: "application/vnd.dev.sigstore.clienttrustconfig.v0.1+json",
        trustedRoot: $trustedRoot,
        signingConfig: {
          caUrl: $caUrl,
          oidcUrl: $oidcUrl,
          tlogUrls: [$tlogUrl]
        }
      }' > "$OUTPUT_FILE"
    EOF
  6. Make the script executable:

    $ chmod u+x make_trust_config.sh
  7. Run the make_trust_config.sh script:

    $ ./make_trust_config.sh $SIGNED_TRUST_ROOT trust_config.json

    A new trust_config.json file is created in the current working directory.
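
    Optionally, you can inspect the generated file with jq to confirm that the media type and signing configuration were assembled as expected:

    $ jq '.mediaType, .signingConfig' trust_config.json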

  8. You can now start signing and verifying AI/ML models by using the command-line interface.

The Model Validation Operator gives you the ability to verify signed artificial intelligence (AI) and machine learning (ML) models at runtime in Red Hat OpenShift environments. This Operator allows you to create a ModelValidation custom resource (CR) in a project namespace, and then add a label to your pod for validation. A mutating admission webhook injects a short-lived step that validates the AI/ML model and its signature by using Red Hat Trusted Artifact Signer (RHTAS) and The Update Framework (TUF) to verify the signature, identity, and issuer. If the validation process succeeds, then the pod proceeds, but if validation fails, then the pod admission is denied.

Prerequisites

  • Red Hat OpenShift Container Platform 4.16 or later.
  • Access to the OpenShift web console with the cluster-admin role.
  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • A workstation with the oc binary installed.

Procedure

  1. Log in to the OpenShift web console with a user that has the cluster-admin role.
  2. From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  3. In the search field, type Model Validation Operator, and click the tile that is displayed.
  4. Click the Install button to show the operator details.
  5. Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
  6. Once the installation finishes, click View Operator.
  7. Add the AI/ML model, its signature, and your signed Trust Root configuration to the namespace where you want validation to run. This is typically done by creating a Persistent Volume Claim (PVC) on the OpenShift cluster, and copying these files to the PVC.
  8. Create a ModelValidation CR in the namespace where you want validation to run.

    1. Click the modelvalidation tab, and click the Create modelvalidation button.
    2. On the Create modelvalidation page, select YAML view. Update the YAML file accordingly:

      apiVersion: ml.sigstore.dev/v1alpha1
      kind: ModelValidation
      metadata:
        name: model-validation-example
        namespace: NAMESPACE
      spec:
        config:
          sigstoreConfig:
            certificateIdentity: "IDENTITY"
            certificateOidcIssuer: "OIDC_ISSUER"
          clientTrustConfig:
            trustConfigPath: SIGNED_TRUST_ROOT
        model:
          path: PATH_TO_MODEL
          signaturePath: PATH_TO_MODEL_SIGNATURE
        continuousValidation:
          enabled: true
          interval: "10m"

      Replace NAMESPACE with the same namespace where your workloads run.

      Replace IDENTITY with the signer’s email address.

      Replace OIDC_ISSUER with your OIDC provider’s issuer URL address.

      Replace SIGNED_TRUST_ROOT with the signed Trust Root target file, for example, /data/trust-config.json.

      Replace PATH_TO_MODEL with the path to the model file, for example, /data/model.onnx.

      Replace PATH_TO_MODEL_SIGNATURE with the path to the model’s signature file, for example, /data/model.sig.

      The interval field supports a duration format in seconds (s), minutes (m), or hours (h). In this example, the validation check runs every 10 minutes.

    3. Click the Create button.
  9. From your terminal session, create a new pod CR in the namespace where you want to trigger a validation check. Update this example YAML file with your specific information:

    apiVersion: v1
    kind: Pod
    metadata:
      name: model-validation-pod-example
      namespace: NAMESPACE
      labels:
        validation.ml.sigstore.dev/ml: "model-validation-example"
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: model-storage-example
          mountPath: PATH_TO_WORKLOAD_VOLUME
      volumes:
      - name: model-storage-example
        persistentVolumeClaim:
          claimName: PVC_NAME

    You configure the webhook by using this label key, validation.ml.sigstore.dev/ml, with the value of the ModelValidation CR name created earlier, surrounded by double quotes.

    Replace NAMESPACE, PATH_TO_WORKLOAD_VOLUME, and PVC_NAME with values appropriate to your environment.

    1. Create the new pod by applying the CR file:

      oc apply -f PATH_TO_CR_FILE
  10. Now the webhook intercepts pod create and update requests. Next, the Model Validation Operator injects validation steps that read the AI/ML model and its signature, and check them against your Trust Root for RHTAS. If the validation check succeeds, then the pod creation or modification proceeds.
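
    You can optionally confirm the outcome from the command line. If validation succeeded, the pod is created; if validation failed, the earlier oc apply command is rejected by the webhook:

    $ oc get pod model-validation-pod-example -n NAMESPACE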

With Red Hat Trusted Artifact Signer (RHTAS), you can sign and verify signatures on artificial intelligence (AI) and machine learning (ML) models by using the model-transparency command-line interface (CLI). For the CLI to sign and verify the AI and ML models, it must know about your Trust Root. The signing and verifying commands run inside a container image, and do not require a locally installed binary.

Important

Signing and verifying AI/ML models by using the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

Important

On RHTAS 1.2 and below, the Rekor key is configured to use the SHA384 hash algorithm. You must rotate the Rekor signer key to use SHA256. If you do not change the hash algorithm for Rekor, then verifying artifacts causes mismatch errors.

For more information about this issue, see the RHTAS Release Notes.

Prerequisites

  • Installation of RHTAS running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • An OpenID Connect (OIDC) identity for retrieving tokens or client credentials.
  • A workstation with the podman binary installed.

Procedure

  1. Configure your shell environment:

    $ export OIDC_ISSUER="OIDC_ISSUER_URL"
    $ export MODEL_IMAGE="registry.redhat.io/rhtas/model-transparency-rhel9@sha256:6db7fa2b956875a6f507811166b47b164d463dea78ab4403c6d7648d838b8acb"
    $ export MODEL_DIR="PATH_TO_MODEL_DIRECTORY"
    $ export TRUST_CFG="$(pwd)/trust_config.json"
    $ export SIG_PATH="$MODEL_DIR/model.sig"

    Replace OIDC_ISSUER_URL with your OIDC provider URL address.

    Replace PATH_TO_MODEL_DIRECTORY with the absolute path to the directory containing the AI/ML models.

  2. There are two options for signing a model by using the CLI. You can use an identity token and a client identifier, or just the client identifier. Using an identity token is the non-interactive way, whereas using a client identifier is the interactive way.

    Note

    When using self-signed certificates or a custom Certificate Authority (CA), you have to pass those certificates to the container to successfully sign an AI/ML model.

    1. Option 1. Signing AI/ML models with an identity token:

      $ podman run --rm \
      --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
      -v "$MODEL_DIR":/model:Z,U \
      -v "$TRUST_CFG":/trust_config.json:Z,ro \
      -w /model "$MODEL_IMAGE" \
      sign sigstore \
      --trust_config /trust_config.json \
      --signature /model/model.sig \
      --identity_token "OIDC_TOKEN" \
      --client_id CLIENT_ID \
      /model

      Replace OIDC_TOKEN with your OIDC authentication token.

      Replace CLIENT_ID with your OIDC client identifier.

    2. Option 2. Signing AI/ML models by using a client identifier:

      $ podman run --rm -it \
      --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
      -v "$MODEL_DIR":/model:Z,U \
      -v "$TRUST_CFG":/trust_config.json:Z,ro \
      -w /model "$MODEL_IMAGE" \
      sign sigstore \
      --trust_config "/trust_config.json" \
      --signature "/model/model.sig" \
      --client_id CLIENT_ID \
      /model

      Replace CLIENT_ID with your OIDC client identifier.

  3. Verify a model signature by using the CLI:

    $ podman run --rm -it \
    --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \
    -v "$MODEL_DIR":/model:Z,U \
    -v "$TRUST_CFG":/trust_config.json:Z,ro \
    -w /model "$MODEL_IMAGE" \
    verify sigstore \
    --trust_config "/trust_config.json" \
    --signature "/model/model.sig" \
    --identity IDENTITY \
    --identity_provider "$OIDC_ISSUER" \
    /model

    Replace IDENTITY with an email address or with a SPIFFE or URI subject.

You can sign and verify artificial intelligence and machine learning (AI/ML) models by using the model transparency command-line interface (CLI) to ensure the integrity and authenticity of your models. This procedure outlines the steps necessary to perform signing and verification operations in a variety of ways. You can sign and verify your ML models in a local directory or in an Open Container Initiative (OCI) image registry, such as Quay.io.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • Your OpenID Connect (OIDC) issuer URL, and user credentials for signing and verifying your ML models.
  • Python 3.10 or later installed on your workstation.
  • A workstation with the oc, pip, and curl binaries installed.

Procedure

  1. Open a terminal, and install the model transparency CLI Python package to your workstation:

    $ pip install rh-model-signing
  2. Download a copy of the trust root from your OpenShift cluster to your workstation:

    $ export TUF_URL=$(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)
    $ curl -o root.json $TUF_URL/root.json
  3. Bootstrap the trust root before signing or verifying any models:

    $ rh_model_signing trust-instance $TUF_URL root.json
  4. Sign your AI/ML models in one of the following ways:

    Sign a local ML model directory
    $ rh_model_signing sign sigstore PATH_TO_MODEL --instance $TUF_URL --client-id trusted-artifact-signer

    Replace PATH_TO_MODEL with the path to your local ML model directory.

    Sign an OCI model artifact or image with a registry attachment
    $ rh_model_signing sign sigstore PATH_TO_REGISTRY:TAG --instance $TUF_URL --client-id trusted-artifact-signer

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image.

    Sign an OCI model artifact or image and output the signature as a separate file
    $ rh_model_signing sign sigstore PATH_TO_REGISTRY:TAG --output-mode file --signature PATH_TO_MODEL_SIGNATURE --instance $TUF_URL --client-id trusted-artifact-signer

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image. Replace PATH_TO_MODEL_SIGNATURE with the path to the file containing the signature for the ML model.

    Sign and attach by using the tag-based method
    $ rh_model_signing sign sigstore PATH_TO_REGISTRY:TAG --attachment-mode tag --instance $TUF_URL --client-id trusted-artifact-signer

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image.

  5. Verify your AI/ML models in one of the following ways:

    Verify a local AI/ML model directory
    $ rh_model_signing verify sigstore PATH_TO_MODEL --signature model.sig --identity "jdoe@example.com" --identity-provider "OIDC_ISSUER_URL" --instance $TUF_URL

    Replace PATH_TO_MODEL with the path to your local ML model directory. Replace OIDC_ISSUER_URL with the URL to your OpenID Connect (OIDC) issuer.

    Verify an OCI image from a registry
    $ rh_model_signing verify sigstore PATH_TO_REGISTRY:TAG --identity "jdoe@example.com" --identity-provider "OIDC_ISSUER_URL" --instance $TUF_URL

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image. Replace OIDC_ISSUER_URL with the URL to your OIDC issuer.

    Verify an OCI image with a local signature file
    $ rh_model_signing verify sigstore PATH_TO_REGISTRY:TAG --signature PATH_TO_MODEL_SIGNATURE --identity "jdoe@example.com" --identity-provider "OIDC_ISSUER_URL" --instance $TUF_URL

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image. Replace PATH_TO_MODEL_SIGNATURE with the path to the file containing the signature for the ML model. Replace OIDC_ISSUER_URL with the URL to your OIDC issuer.

    Verify an OCI image by using the tag-based attachment method
    $ rh_model_signing verify sigstore PATH_TO_REGISTRY:TAG --attachment-mode tag --identity "jdoe@example.com" --identity-provider "OIDC_ISSUER_URL" --instance $TUF_URL

    Replace PATH_TO_REGISTRY:TAG with the URL to your OCI image and the tag for the image. Replace OIDC_ISSUER_URL with the URL to your OIDC issuer.

1.6. Using your own certificate authority bundle

You can bring your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.

Prerequisites

  • Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
  • A running Securesign instance.
  • Your CA root certificate.
  • A workstation with the oc binary installed.

Procedure

  1. Log in to OpenShift from the command line:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Switch to the RHTAS project:

    $ oc project trusted-artifact-signer
  3. Create a new ConfigMap by using your organization’s CA root certificate bundle:

    $ oc create configmap custom-ca-bundle --from-file=ca-bundle.crt
    Important

    The certificate filename must be ca-bundle.crt.

  4. Open the Securesign resource for editing:

    $ oc edit Securesign securesign-sample
    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Securesign
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...
    2. Save, and quit the editor.
  5. Open the Fulcio resource for editing:

    $ oc edit Fulcio securesign-sample
    1. Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section:

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Fulcio
      metadata:
        name: example-instance
        annotations:
          rhtas.redhat.com/trusted-ca: custom-ca-bundle
      spec:
      ...
    2. Save, and quit the editor.
  6. Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.

1.7. High availability

As a systems administrator, you might want to configure Red Hat Trusted Artifact Signer (RHTAS) for a high availability (HA) environment to take advantage of improved reliability and uptime even when components fail.

1.7.1. Prerequisites

  • A Red Hat OpenShift cluster version 4.17 or higher, with a minimum of three worker nodes.
  • Access to the OpenShift web console, and command-line tools with the cluster-admin role.
  • Production-ready storage solutions available.
  • Production-ready database solutions available.

1.7.2. High availability overview

Before you deploy Red Hat Trusted Artifact Signer (RHTAS) in a high availability (HA) environment, there are some requirements for you to consider. The base requirements for running an HA deployment of RHTAS are Red Hat OpenShift, and access to production-ready storage and database solutions.

Your OpenShift cluster must have a minimum of three worker nodes, and Red Hat recommends having a minimum of three replicas for each RHTAS component. The RHTAS components are: Rekor, Fulcio, Certificate Transparency Log (CTLog), Timestamp Authority (TSA), The Update Framework (TUF), and Trillian.

For HA RHTAS environments, Red Hat recommends using production-ready storage solutions for running external databases, storing objects, and for a Redis instance. Examples of production-ready storage solutions would be Amazon Simple Storage Service (S3), Google Cloud Storage, or Microsoft Azure Blob. You can also consider using Persistent Volume Claims (PVC) with the ReadWriteMany access mode.

It is important to distribute the RHTAS pods to different nodes in your OpenShift cluster to minimize single points of failure. For HA RHTAS environments, Red Hat recommends using the soft anti-affinity option (affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution) to control pod scheduling across your OpenShift cluster for each RHTAS component. Soft anti-affinity still allows pods to be scheduled when the anti-affinity preference cannot be satisfied. You can set resource requirements for each RHTAS component by defining their limits for CPU and memory usage. Along with scheduling pods to run on nodes with specific tolerations and taints, you can dedicate nodes to running RHTAS-only workloads.
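
The following snippet is a minimal sketch of a soft anti-affinity rule, shown under a Rekor section of the resource specification; the component placement and pod label are illustrative, so align them with the components and labels in your own deployment:

spec:
  rekor:
    ...
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: rekor-server
              topologyKey: kubernetes.io/hostname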

For an example of what the HA RHTAS resource specifications file looks like, see the appendix.

1.7.3. Storage requirements for high availability

When deploying Red Hat Trusted Artifact Signer (RHTAS) in a high availability (HA) environment, you can use file-based storage or cloud-based object storage solutions. Using file-based storage requires the ReadWriteMany (RWX) storage access mode for Persistent Volume Claims (PVC). The Update Framework (TUF) must use the RWX storage access mode. Rekor attestation storage can use the RWX storage access mode, or cloud-based object storage solutions, such as Amazon Simple Storage Service (S3) or Google Cloud Storage.

The Update Framework

The Update Framework stores cryptographic metadata, and must be accessible by all replicas when running more than one TUF replica. The RHTAS Operator validates that the access mode is RWX when the replicas are greater than one in an HA environment.

Important

Running a single replica of TUF is not recommended in production environments.

Rekor attestation
Rekor can use file-based storage for attestations when running more than one Rekor replica. The RHTAS Operator validates that the access mode is RWX when the replicas are greater than one in an HA environment. The recommended approach is to use cloud-based object storage, such as Amazon S3 or Google Cloud Storage, for Rekor's attestation storage. Using cloud-based object storage removes the requirement for the RWX storage access mode, because the RHTAS Operator does not need to validate the access mode.
RWX configuration example

In this example, you can append the HA storage configuration to the Securesign custom resource (CR), under the spec.tuf and spec.rekor sections.

apiVersion: rhtas.redhat.com/v1alpha1
kind: Securesign
metadata:
  name: securesign-ha
  namespace: trusted-artifact-signer
spec:
  tuf:
    ...
    replicas: 3
    pvc:
      name: tuf-pvc
      size: "100Mi"
      retain: true
      accessModes:
        - ReadWriteMany
      storageClass: "ocs-storagecluster-cephfs"

  rekor:
    ...
    replicas: 3
    attestations:
      enabled: true
      url: "file:///var/run/attestations?no_tmp_dir=true"
    pvc:
      size: "5Gi"
      retain: true
      accessModes:
        - ReadWriteMany
      storageClass: "ocs-storagecluster-cephfs"
Understanding how the PVC name and retain fields work

When specifying a PVC name in the name field, the RHTAS Operator does not create or manage the PVC. The RHTAS Operator uses the specified PVC name directly. You must create and configure the PVC before deploying RHTAS in an HA environment. The other PVC fields, such as size, accessModes, and storageClass, are ignored by the RHTAS Operator.

When not specifying a PVC name, the RHTAS Operator generates a default name. If this PVC name is already in use, then the RHTAS Operator discovers and uses it, overwriting any existing data on that PVC. If no PVC exists, then the RHTAS Operator creates the PVC with the specified size, accessModes, and storageClass.

The retain field controls the deletion of the PVC when the Securesign custom resource (CR) is deleted. When retain is set to true, the RHTAS Operator does not delete the PVC, and you must manage the lifecycle of the PVC separately from the Securesign CR. When retain is set to false, the RHTAS Operator deletes the PVC when the Securesign CR is deleted.
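
For example, if you set name: tuf-pvc with retain: true, you can pre-create the claim yourself before deploying RHTAS. This is a minimal sketch based on the values from the earlier RWX configuration example; adjust the size and storage class for your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tuf-pvc
  namespace: trusted-artifact-signer
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: ocs-storagecluster-cephfs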

With this procedure, you can replace the Red Hat Trusted Artifact Signer (RHTAS) default database for Trillian with a MySQL or PostgreSQL database instance managed by Amazon’s Relational Database Service (RDS).

Prerequisites

  • An Amazon Web Service (AWS) account with access to the Amazon RDS console.

    • MySQL version 8 or later.
    • PostgreSQL version 13 or later.
  • A workstation with the oc, curl, and the mysql or psql binaries installed.
  • Access to the OpenShift web console with the cluster-admin role.
  • Command-line access with privileges to create a database and populate the database instance.

Procedure

  1. Open the Amazon RDS console, and create a new MariaDB instance.

    1. Wait for the MariaDB instance to be deployed and available.
  2. From your workstation, log in to the new database by providing the regional URL address, the port, and the user name:

    mysql -h FQDN_FOR_RDS_HOSTNAME -P 3306 -u USER_NAME -p
    $ mysql -h exampledb.1234.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
  3. Create a new database named trillian:

    create database trillian;
  4. Switch to the newly created database:

    use trillian;
  5. Create a new database user named trillian, and set a PASSWORD for the newly created user:

    CREATE USER trillian@'%' IDENTIFIED BY 'PASSWORD';
    GRANT ALL PRIVILEGES ON trillian.* TO 'trillian'@'%';
    FLUSH PRIVILEGES;
  6. Disconnect from the database:

    EXIT
  7. Download the database configuration file:

    $ curl -o dbconfig.sql https://raw.githubusercontent.com/securesign/trillian/main/storage/mysql/schema/storage.sql
  8. Apply the database configuration to the new database:

    mysql -h FQDN_FOR_RDS_HOSTNAME -P 3306 -u USER_NAME -p -D DB_NAME < PATH_TO_CONFIG_FILE
    $ mysql -h exampledb.1234.us-east-1.rds.amazonaws.com -P 3306 -u trillian -p -D trillian < dbconfig.sql
  9. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  10. Create a new Secret containing the credentials for the Trillian database within the MariaDB instance which was created previously:

    oc create secret generic OBJECT_NAME \
    --from-literal=mysql-database=trillian \
    --from-literal=mysql-host=FQDN_FOR_RDS_HOSTNAME \
    --from-literal=mysql-password=PASSWORD \
    --from-literal=mysql-port=PORT_NUMBER \
    --from-literal=mysql-root-password=PASSWORD \
    --from-literal=mysql-user=USER_NAME
    $ oc create secret generic trillian-db \
    --from-literal=mysql-database=trillian \
    --from-literal=mysql-host=exampledb.1234.us-east-1.rds.amazonaws.com \
    --from-literal=mysql-password=mypassword123 \
    --from-literal=mysql-port=3306 \
    --from-literal=mysql-root-password=myrootpassword123 \
    --from-literal=mysql-user=trillian
    Important

    Verify that your security group rules allow access to the database from the OpenShift cluster.

  11. You can now deploy the Trusted Artifact Signer service to use this database. If you were following the Trusted Artifact Signer installation procedure, then you can proceed to the next step.

With this procedure, you can replace the Red Hat Trusted Artifact Signer (RHTAS) default database for Trillian with an operator-managed MySQL instance running on Red Hat OpenShift.

Important

Red Hat recommends using a highly available MariaDB database for production workloads.

Prerequisites

  • Permissions to create an OpenShift project, and deploy a database instance from the OpenShift samples catalog.
  • A workstation with the oc, curl, and the mysql binaries installed.
  • Access to the OpenShift web console with the cluster-admin role.
  • Command-line access with privileges to create a database and populate the database instance.

Procedure

  1. Log in to the OpenShift web console where you are deploying the RHTAS service.
  2. Select the trusted-artifact-signer project if it already exists; otherwise, create a new project for the database:

    1. To create a new project, click the drop-down project menu, and click the Create Project button.
    2. Name the new project trusted-artifact-signer, and click the Create button.
  3. From the navigation menu, expand Ecosystem, click Software Catalog, then click Databases from the software catalog menu.
  4. Select MariaDB, and click the Instantiate Template button.

    Important

    Do not select MariaDB (Ephemeral).

  5. On the Instantiate Template page, configure the following fields:

    1. In the MariaDB Database Name field, enter trillian.
    2. In the Volume Capacity field, enter 5Gi.
    3. Click the Create button.
  6. Begin a remote shell session:

    1. On the Topology page, select the MariaDB pod to open a side panel, and click the Resources tab.
    2. Under the Pods section, click on the MariaDB pod name.
    3. Click the Terminal tab to start a remote shell session to the MariaDB pod.
  7. In the remote shell session, verify that you can connect to the Trillian database:

    $ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -D$MYSQL_DATABASE
    Note

    Credentials are stored in a secret object with the service name (mariadb), and contain the name of the database and the user name, along with the database root password. Make a note of these credentials, because they are used later when creating the database secret object.
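
    Optionally, you can print these values from your workstation instead of copying them from the remote shell session. This assumes the template created a secret named mariadb in the trusted-artifact-signer project:

    $ oc extract secret/mariadb --to=- -n trusted-artifact-signer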

  8. Disconnect from the database:

    EXIT
  9. Download the database configuration file:

    $ curl -o dbconfig.sql https://raw.githubusercontent.com/securesign/trillian/main/storage/mysql/schema/storage.sql
  10. Apply the database configuration to the new database:

    $ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -D$MYSQL_DATABASE < dbconfig.sql
  11. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  12. Create a new Secret containing the credentials for the Trillian database within the MariaDB instance which was created previously:

    oc create secret generic OBJECT_NAME \
    --from-literal=mysql-database=trillian \
    --from-literal=mysql-host=FQDN_or_SERVICE_ADDR \
    --from-literal=mysql-password=PASSWORD \
    --from-literal=mysql-port=PORT_NUMBER \
    --from-literal=mysql-root-password=PASSWORD \
    --from-literal=mysql-user=USER_NAME
    $ oc create secret generic trillian-db \
    --from-literal=mysql-database=trillian \
    --from-literal=mysql-host=mariadb.trusted-artifact-signer.svc.cluster.local \
    --from-literal=mysql-password=mypassword123 \
    --from-literal=mysql-port=3306 \
    --from-literal=mysql-root-password=myrootpassword123 \
    --from-literal=mysql-user=trillian

    You can use an OpenShift internal service name for the MariaDB instance.

  13. You can now deploy the Trusted Artifact Signer service to use this database. If you were following the Trusted Artifact Signer installation procedure, then you can proceed to the next step.

With this procedure, you can replace the Red Hat Trusted Artifact Signer (RHTAS) default database for Trillian with a production-ready PostgreSQL database instance.

Prerequisites

  • A workstation with the oc, curl, and the psql binaries installed.
  • Access to the OpenShift web console with the cluster-admin role.
  • Command-line access with privileges to create a database and populate the database instance.

Procedure

  1. From your workstation, log in to PostgreSQL by providing the hostname, the port, and the user name:

    psql -h HOSTNAME -p 5432 -U USER_NAME
    $ psql -h db.example.com -p 5432 -U postgres
  2. Create a new database named trillian:

    CREATE DATABASE trillian;
  3. Create a new database user named trillian, and set a PASSWORD for the newly created user:

    CREATE USER trillian WITH PASSWORD 'PASSWORD';
    ALTER DATABASE trillian OWNER TO trillian;
    GRANT ALL ON SCHEMA public TO trillian;
  4. Disconnect from the database:

    exit
  5. Download the database configuration file:

    $ curl -o dbconfig.sql https://raw.githubusercontent.com/securesign/trillian/main/storage/postgresql/schema/storage.sql
  6. Apply the database configuration to the new database:

    psql -h FQDN_or_SERVICE_ADDR -p 5432 -U USER_NAME -d DB_NAME -f PATH_TO_CONFIG_FILE
    $ psql -h db.example.com -p 5432 -U trillian -d trillian -f dbconfig.sql
  7. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  8. Create a new Secret containing the credentials for the Trillian database within the PostgreSQL database instance which was created previously:

    oc create secret generic trillian-db-credentials \
      --from-literal=postgresql-host=FQDN \
      --from-literal=postgresql-port=5432 \
      --from-literal=postgresql-user=trillian \
      --from-literal=postgresql-password=PASSWORD \
      --from-literal=postgresql-database=trillian \
      -n trusted-artifact-signer
    $ oc create secret generic trillian-db-credentials \
      --from-literal=postgresql-host=db.example.com \
      --from-literal=postgresql-port=5432 \
      --from-literal=postgresql-user=trillian \
      --from-literal=postgresql-password=mypassword123 \
      --from-literal=postgresql-database=trillian \
      -n trusted-artifact-signer
  9. Before deploying the Trusted Artifact Signer service, update the Trillian resource accordingly:

    apiVersion: rhtas.redhat.com/v1alpha1
    kind: Trillian
    metadata:
      name: trillian
      namespace: trusted-artifact-signer
    spec:
      database:
        create: false
        provider: postgresql
        uri: "postgresql://$(DB_USER):$(DB_PASSWORD)@$(DB_HOST):$(DB_PORT)/$(DB_NAME)"
      auth:
        env:
          - name: DB_HOST
            valueFrom:
              secretKeyRef:
                name: trillian-db-credentials
                key: postgresql-host
          - name: DB_PORT
            valueFrom:
              secretKeyRef:
                name: trillian-db-credentials
                key: postgresql-port
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                name: trillian-db-credentials
                key: postgresql-user
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: trillian-db-credentials
                key: postgresql-password
          - name: DB_NAME
            valueFrom:
              secretKeyRef:
                name: trillian-db-credentials
                key: postgresql-database

    If you were following the Trusted Artifact Signer installation procedure, then you can proceed to the next step.

1.7.7. Configuring attestation storage for Rekor

Rekor attestation storage gives you more details about the attestations for your software supply chain, and is essential for Supply-chain Levels for Software Artifacts (SLSA) compliance. You can configure Rekor attestations by using either file-based storage or cloud-based object storage, such as Amazon Simple Storage Service (S3). Attestation storage is an optional feature, but gives you full attestation data separately from the transparency log entries, and enables you to retrieve the original attestation data when querying the transparency log.

Only specific entry types are stored in the attestation payload; these are the intoto and cose entry types. By default, attestations have a maximum size of 100 KB, but you can customize the size by using the maxSize field in the attestations configuration. If you set a custom size, it must not exceed the Rekor server's maximum request body size (maxRequestBodySize), which is 10 MB by default. Any attestations larger than the configured maximum size are skipped.

Important

Red Hat recommends using Amazon S3 object storage for attestation storage in production environments.

Important

Once you enable attestation storage, you cannot disable it.

Prerequisites

  • An Amazon Web Services (AWS) account with access to create an S3 bucket.

    • Proper credentials with permissions to read and write to the S3 bucket.
  • A network connection from Red Hat OpenShift to the S3 bucket endpoint.
  • A workstation with the oc binary installed.

Procedure

  1. Open a terminal on your workstation, and log in to OpenShift:

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT
    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
    Note

    You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.

  2. Create a secret for your Amazon S3 credentials:

    $ oc create secret generic rekor-s3-credentials \
      --from-literal=AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID" \
      --from-literal=AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY" \
      -n trusted-artifact-signer

    Replace YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY with your actual AWS access key ID and secret access key.

  3. Update the Rekor resource to enable attestation storage by setting spec.attestations.enabled to true, configuring the attestation storage url, and setting the appropriate environment variables for AWS authentication:

    apiVersion: rhtas.redhat.com/v1alpha1
    kind: Rekor
    metadata:
      name: rekor
      namespace: trusted-artifact-signer
    spec:
      replicas: 3
      attestations:
        enabled: true
        url: "s3://ATTESTATION_BUCKET_NAME?region=AWS_REGION_VALUE"
        maxSize: "100Ki"
      auth:
        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: rekor-s3-credentials
                key: AWS_ACCESS_KEY_ID
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: rekor-s3-credentials
                key: AWS_SECRET_ACCESS_KEY

    Replace ATTESTATION_BUCKET_NAME and AWS_REGION_VALUE with your actual S3 bucket name and AWS region.

Red Hat recommends using an external Redis instance for running a highly available (HA) Red Hat Trusted Artifact Signer (RHTAS) service. You can use an external Redis instance from a major cloud provider, such as Amazon Web Services (AWS) ElastiCache, Google Cloud Memorystore, or Azure Cache for Redis.

An external Redis instance for Rekor gives you more control over the search index and can provide better performance and scalability for your RHTAS deployment. Rekor communicates with the Redis instance to store and retrieve search index data, which allows for faster search queries and improved performance compared to using an operator-managed Redis instance.

Important

Using an operator-managed Redis instance is not recommended for production environments. An operator-managed Redis instance runs a single replica without persistence guarantees.

Important

Red Hat recommends using a PostgreSQL database for the Rekor backend in production environments. Using a MySQL database for Redis with RHTAS HA is a Technology Preview feature, and it is not recommended for production environments.

Prerequisites

  • A production-ready Redis instance available.
  • Redis credentials for authenticating.
  • Access to the OpenShift web console with the cluster-admin role.

Procedure

  1. Create a secret to store your Redis credentials:

    $ oc create secret generic redis-credentials --from-literal=password=PASSWORD -n trusted-artifact-signer

    Replace PASSWORD with the password for your Redis user.

  2. Create a Rekor resource to reference the external Redis instance:

    $ cat <<EOF | oc apply -f -
    apiVersion: rhtas.redhat.com/v1alpha1
    kind: Rekor
    metadata:
      name: rekor
      namespace: trusted-artifact-signer
    spec:
      searchIndex:
        create: false
        provider: redis
        url: "redis://:$(REDIS_PASSWORD)@redis.example.com:6379"
      auth:
        env:
          - name: REDIS_PASSWORD
            valueFrom:
              secretKeyRef:
                name: redis-credentials
                key: password
    EOF

    Update the url field to match the connection details of your Redis instance, including the hostname and port. The URL format for Redis is redis://[username:password@]host:port[/database].

    1. Optional. If you are using Transport Layer Security (TLS) with Redis, then you must start the URL string with rediss://, and you must reference your cloud provider’s certificate authority (CA) certificate under the spec section of the Rekor resource:

      apiVersion: rhtas.redhat.com/v1alpha1
      kind: Rekor
      metadata:
        name: rekor
        namespace: trusted-artifact-signer
      spec:
        searchIndex:
          create: false
          provider: redis
          url: "rediss://:$(REDIS_PASSWORD)@redis.example.com:6379"
        auth:
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis-credentials
                  key: password
        trustedCA:
          name: redis-ca-bundle  # ConfigMap containing the CA certificate.
  3. Set the Rekor backfill Cron job under the spec section:

    apiVersion: rhtas.redhat.com/v1alpha1
    kind: Rekor
    metadata:
      name: rekor
      namespace: trusted-artifact-signer
    spec:
      backFillRedis:
        enabled: true
        schedule: "0 0 * * *"  # Runs daily at midnight.

    The Rekor backfill Cron job ensures the search index stays synchronized with the transparency log.
