Chapter 1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical in the event of data loss caused by hardware failure or accidental deletion. The OpenShift API Data Protection (OADP) product provides data protection for applications running on Red Hat OpenShift. Using OADP helps get software developers back to signing and verifying code as quickly as possible. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.
1.1. Installing and configuring the OADP operator
The OpenShift API Data Protection (OADP) operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP operator to back up and restore your Trusted Artifact Signer data.
This procedure uses an Amazon Web Services (AWS) Simple Storage Service (S3) bucket to illustrate how to configure the OADP operator. You can use a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.
Prerequisites
- Red Hat OpenShift Container Platform version 4.13 or later.
- Access to the OpenShift web console with the cluster-admin role.
- The ability to create an S3-compatible bucket.
- A workstation with the oc and aws binaries installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:

Syntax

oc login --token=TOKEN --server=SERVER_URL_AND_PORT

Example

$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Create a new bucket:

Syntax

export BUCKET=NEW_BUCKET_NAME
export REGION=AWS_REGION_ID
export USER=OADP_USER_NAME
aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

Example

$ export BUCKET=example-bucket-name
$ export REGION=us-east-1
$ export USER=velero
$ aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

Create a new user:

Example

$ aws iam create-user --user-name $USER

Create a new policy:

Example

$ cat > velero-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
EOF

Associate this policy with the new user:

Example

$ aws iam put-user-policy \
  --user-name $USER \
  --policy-name velero \
  --policy-document file://velero-policy.json

Create an access key:

Example

$ aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'

Create a credentials file with your AWS secret key information:

Syntax

cat << EOF > ./credentials-velero
[default]
aws_access_key_id=$AWS_ACCESS_KEY_ID
aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
EOF

- Log in to the OpenShift web console with a user that has the cluster-admin role.
- From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
- In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
- Click the Install button to show the operator details.
- Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:

Example

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- From the OpenShift web console, click the View Operator button.
- Click Create instance on the DataProtectionApplication (DPA) tile.
- On the Create DataProtectionApplication page, select YAML view.
Edit the following values in the resource file:

- Under the metadata section, replace velero-sample with velero.
- Under the spec.configuration.nodeAgent section, replace restic with kopia.
- Under the spec.configuration.velero section, add resourceTimeout: 10m.
- Under the spec.configuration.velero.defaultPlugins section, add - csi.
- Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
- Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
- Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix.
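The edits above produce a resource similar to the following sketch. This is an illustration rather than the authoritative sample: the bucket name and regions reuse the example values from earlier in this procedure, and the resource generated by your operator version can contain additional fields.

```yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      resourceTimeout: 10m
      defaultPlugins:
        - openshift
        - aws
        - csi
  backupLocations:
    - velero:
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: example-bucket-name
          prefix: velero
        config:
          region: us-east-1
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: us-east-1
```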
- Click the Create button.
1.2. Backing up your Trusted Artifact Signer data
With the OpenShift API Data Protection (OADP) operator installed and an instance deployed, you can create a volume snapshot resource and a backup resource to back up your Red Hat Trusted Artifact Signer data.
Prerequisites
- Red Hat OpenShift Container Platform version 4.13 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the OADP operator.
- A workstation with the oc binary installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:

Syntax

oc login --token=TOKEN --server=SERVER_URL_AND_PORT

Example

$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if asked, and click Display Token to view the command.
Find and edit the VolumeSnapshotClass resource:

Example

$ oc get VolumeSnapshotClass -n openshift-adp
$ oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp

Update the following values in the resource file:

- Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
- Save your changes, and quit the editor.
Create a Backup resource:

Example

$ cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: rhtas-backup
  labels:
    velero.io/storage-location: velero-1
  namespace: openshift-adp
spec:
  schedule: 0 7 * * *
  hooks: {}
  includedNamespaces:
    - trusted-artifact-signer
  includedResources: []
  excludedResources: []
  snapshotMoveData: true
  storageLocation: velero-1
  ttl: 720h0m0s
EOF

Add the schedule property to enable cron scheduling for running this backup. In this example, the backup resource runs every day at 7:00 a.m.

By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify which resources to include or exclude by using the includedResources or excludedResources properties, respectively.

Important: Depending on the storage class of the backup target, persistent volumes must not be actively in use for the backup to succeed.
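To check on a backup from the command line, you can query the backup resource's status. A sketch, assuming the rhtas-backup name used above; the phase that Velero reports moves from InProgress to Completed on success:

```
$ oc get backup rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'
```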
1.3. Restoring your Trusted Artifact Signer data
With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup resource for the RHTAS namespace, you can restore your data to an OpenShift cluster.
Prerequisites
- Red Hat OpenShift Container Platform version 4.13 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the RHTAS operator.
- Installation of the OADP operator.
- A backup resource of the trusted-artifact-signer namespace structure.
- A workstation with the oc binary installed.
Procedure
Disable the RHTAS operator:

Example

$ oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators

Create the Restore resource:

Example

$ cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: rhtas-restore
  namespace: openshift-adp
spec:
  backupName: rhtas-backup
  includedResources: []
  restoreStatus:
    includedResources:
      - securesign.rhtas.redhat.com
      - trillian.rhtas.redhat.com
      - ctlog.rhtas.redhat.com
      - fulcio.rhtas.redhat.com
      - rekor.rhtas.redhat.com
      - tuf.rhtas.redhat.com
      - timestampauthority.rhtas.redhat.com
  excludedResources:
    - pod
    - deployment
    - nodes
    - route
    - service
    - replicaset
    - events
    - cronjob
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - pods
    - deployments
  restorePVs: true
  existingResourcePolicy: update
EOF

If you are restoring your RHTAS data to a different OpenShift cluster, complete the following steps.
Delete the secret for the Trillian database:

Example

$ oc delete secret securesign-sample-trillian-db-tls
$ oc delete pod trillian-db-xxx

Note: The RHTAS operator recreates the secret and restarts the pod.
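The pod name suffix varies per deployment. One way to look up the actual Trillian database pod name before deleting it, assuming the default trusted-artifact-signer namespace:

```
$ oc get pods -n trusted-artifact-signer | grep trillian-db
```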
- Run the restoreOwnerReferences.sh script.
Enable the RHTAS operator:

Example

$ oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators

Important: Start the RHTAS operator immediately after starting the restore to ensure the persistent volume is claimed.
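Once the operator is running again, you can confirm that the restore finished and that the restored workloads start. A hedged example, assuming the rhtas-restore name and the default trusted-artifact-signer namespace from this guide:

```
$ oc get restore rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'
$ oc get pods -n trusted-artifact-signer
```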
1.4. Restore owner references script
This Bash script restores the ownerReferences when restoring Red Hat Trusted Artifact Signer (RHTAS) data to a different OpenShift cluster.
#!/bin/bash

# List of resources to check
RESOURCES=("Fulcio" "Rekor" "Trillian" "TimestampAuthority" "CTlog" "Tuf")

function validate_owner() {
    local RESOURCE=$1
    local ITEM=$2
    local OWNER_NAME=$3

    # Check all the labels exist and are the same
    LABELS=("app.kubernetes.io/instance" "app.kubernetes.io/part-of" "velero.io/backup-name" "velero.io/restore-name")
    for LABEL in "${LABELS[@]}"; do
        PARENT_LABEL=$(oc get Securesign "$OWNER_NAME" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
        CHILD_LABEL=$(oc get "$RESOURCE" "$ITEM" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
        if [[ -z "$CHILD_LABEL" || "$CHILD_LABEL" == "null" ]]; then
            echo "  $LABEL label missing in $RESOURCE"
            return 1
        elif [[ -z "$PARENT_LABEL" || "$PARENT_LABEL" == "null" ]]; then
            echo "  $LABEL label missing in Securesign"
            return 1
        elif [[ "$CHILD_LABEL" != "$PARENT_LABEL" ]]; then
            echo "  $LABEL labels not matching: $CHILD_LABEL != $PARENT_LABEL"
            return 1
        fi
    done
    return 0
}

for RESOURCE in "${RESOURCES[@]}"; do
    echo "Checking $RESOURCE ..."

    # Get all resources missing ownerReferences
    MISSING_REFS=$(oc get "$RESOURCE" -o json | jq -r '.items[] | select(.metadata.ownerReferences == null) | .metadata.name')

    for ITEM in $MISSING_REFS; do
        echo "  Missing ownerReferences in $RESOURCE/$ITEM"

        # Find the expected owner based on labels
        OWNER_NAME=$(oc get "$RESOURCE" "$ITEM" -o json | jq -r '.metadata.labels["app.kubernetes.io/name"]')
        if [[ -z "$OWNER_NAME" || "$OWNER_NAME" == "null" ]]; then
            echo "  Skipping $RESOURCE/$ITEM: name not found in labels"
            continue
        fi
        if ! validate_owner "$RESOURCE" "$ITEM" "$OWNER_NAME"; then
            echo "  Skipping ..."
            continue
        fi

        # Try to get the owner's UID from Securesign
        OWNER_UID=$(oc get Securesign "$OWNER_NAME" -o jsonpath='{.metadata.uid}' 2>/dev/null)
        if [[ -z "$OWNER_UID" || "$OWNER_UID" == "null" ]]; then
            echo "  Failed to find Securesign/$OWNER_NAME UID, skipping ..."
            continue
        fi
        echo "  Found owner: Securesign/$OWNER_NAME (UID: $OWNER_UID)"

        # Patch the object with the restored ownerReference
        oc patch "$RESOURCE" "$ITEM" --type='merge' -p "{
          \"metadata\": {
            \"ownerReferences\": [
              {
                \"apiVersion\": \"rhtas.redhat.com/v1alpha1\",
                \"kind\": \"Securesign\",
                \"name\": \"$OWNER_NAME\",
                \"uid\": \"$OWNER_UID\",
                \"controller\": true,
                \"blockOwnerDeletion\": true
              }
            ]
          }
        }"
        echo "Restored ownerReferences for $RESOURCE/$ITEM"
    done
done
echo "Done"
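A minimal sketch of running the script, assuming you saved it as restoreOwnerReferences.sh on a workstation with the oc and jq binaries installed and an active login to the target cluster:

```
$ chmod +x restoreOwnerReferences.sh
$ ./restoreOwnerReferences.sh
```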