Deployment Guide
Deploying the Trusted Profile Analyzer service on Red Hat Enterprise Linux and Red Hat OpenShift
Preface
Welcome to the Red Hat Trusted Profile Analyzer (RHTPA) Deployment Guide!
This guide helps you with deploying the Red Hat Trusted Profile Analyzer (RHTPA) software stack on Red Hat OpenShift Container Platform or on Red Hat Enterprise Linux. For new RHTPA deployments, start by choosing your target installation platform.
If you are upgrading RHTPA to version 1.2, start with Chapter 1, Migrating your data before an upgrade.
Chapter 1. Migrating your data before an upgrade
With the release of Red Hat Trusted Profile Analyzer (RHTPA) version 1.2, we implemented a new schema for ingested software bill of materials (SBOM) and vulnerability exploitability exchange (VEX) data. Before upgrading, you must configure the RHTPA 1.2 values file to migrate your SBOM and VEX data to this new schema. This data migration happens during the upgrade process to RHTPA version 1.2.
Prerequisites
- Installation of RHTPA 1.1.2 on Red Hat OpenShift.
- A new PostgreSQL database.
- A workstation with the oc and helm binaries installed.
Procedure
On your workstation, open a terminal, and log in to OpenShift by using the command-line interface:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find the login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.
Export the RHTPA project namespace:
Syntax
export NAMESPACE=RHTPA_NAMESPACE
Example
$ export NAMESPACE=trusted-profile-analyzer
Verify that the RHTPA 1.1.2 installation is in the project namespace:
Example
$ helm list -n $NAMESPACE
Uninstall RHTPA 1.1.2:
Example
$ helm uninstall redhat-trusted-profile-analyzer -n $NAMESPACE
Open the RHTPA 1.2 values file for editing, and make the following changes:
- Reference the new PostgreSQL database instance.
- Reference the same simple storage service (S3) storage used for version 1.1.2.
- Reference the same messaging queues used for version 1.1.2.
- Set the modules.vexinationCollector.recollectVEX and modules.bombasticCollector.recollectSBOM options to true, as sketched below.
Note: See the Deployment Guide appendixes for values file templates used with RHTPA deployments on OpenShift.
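For reference, a minimal sketch of the corresponding values file fragment follows. The nesting is inferred from the dotted option names above and is not taken verbatim from the appendix templates, so merge it into your values file accordingly:
Example
modules:
  vexinationCollector:
    recollectVEX: true
  bombasticCollector:
    recollectSBOM: true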
Start the upgrade by using the updated RHTPA 1.2 Helm chart for OpenShift:
Syntax
helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL
Example
$ helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa.yaml --set-string appDomain=$APP_DOMAIN_URL
Note: You can run this Helm chart many times to apply the currently configured state from the values file.
Verify that the data migration was successful.
View the SBOM and VEX indexer logs, looking for the Reindexing all documents and Reindexing finished messages:
Example
$ oc logs bombastic-indexer -n $NAMESPACE
$ oc logs vexination-indexer -n $NAMESPACE
You will also see the following error messages:
Error syncing index: Open("Schema error: 'An index exists but the schema does not match.'"), keeping old
Error loading initial index: Open("Schema error: 'An index exists but the schema does not match.'")
Because of this schema mismatch, the bombastic-collector and vexination-collector pods start the recollect containers to gather all the existing SBOM and VEX data. Both the recollect-sbom and recollect-vex init containers should complete and stop successfully. Once the migration finishes, you can see all your existing SBOM and VEX data in the RHTPA console.
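To confirm that the init containers finished, you can inspect the collector pods. This is a generic sketch, not part of the original procedure; replace POD_NAME with the actual name of a bombastic-collector or vexination-collector pod reported by the first command:
Example
$ oc -n $NAMESPACE get pods
$ oc -n $NAMESPACE get pod POD_NAME -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}{"\n"}'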
Chapter 2. Select your installation platform
As a systems administrator, you can choose between two different installation platforms for running Red Hat Trusted Profile Analyzer (RHTPA). You can deploy RHTPA to Red Hat OpenShift Container Platform using Amazon Web Services (AWS) or other service providers with a Helm chart from Red Hat. You can also deploy RHTPA to Red Hat Enterprise Linux by using Ansible.
Deploying RHTPA to Red Hat Enterprise Linux is currently a Technology Preview feature.
Select your target installation platform:
2.1. Installing Trusted Profile Analyzer by using Ansible
You can install Red Hat Trusted Profile Analyzer (RHTPA) on Red Hat Enterprise Linux by using a Red Hat-provided Ansible Playbook. This Ansible deployment of RHTPA allows you to specify your own PostgreSQL database, OpenID Connect (OIDC) provider, Simple Storage Service (S3), and Simple Queue Service (SQS) infrastructure.
Deploying RHTPA on Red Hat Enterprise Linux by using Ansible is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
Prerequisites
- Red Hat Enterprise Linux version 9.3 or later.
- A Red Hat user account to access the Red Hat Hybrid Cloud Console.
Procedure
- Log in to the Red Hat Hybrid Cloud Console with your Red Hat credentials.
- From the home page, click the Services drop-down menu, and click Red Hat Ansible Automation Platform.
- From the navigational menu, expand Automation Hub, and click Collections.
- In the search field, type rhtpa and press Enter.
- Click the trusted_profile_analyzer link on the Red Hat Trusted Profile Analyzer tile.
Click the Documentation tab, and follow the steps there to complete the installation of RHTPA on Red Hat Enterprise Linux.
Note: For a detailed overview of all the configuration parameters, click the tpa_single_node link under the Roles section.
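If you want to pull the collection onto your workstation from the command line after configuring Automation Hub as a Galaxy server, you can use ansible-galaxy. The collection namespace shown here is an assumption; confirm the exact name on the collection page:
Example
$ ansible-galaxy collection install redhat.trusted_profile_analyzer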
2.2. Installing Trusted Profile Analyzer by using Helm with Amazon Web Services
You can install Red Hat’s Trusted Profile Analyzer (RHTPA) service on OpenShift by using a Helm chart from Red Hat. This procedure guides you through integrating Amazon Web Services (AWS) with RHTPA by using a customized values file for Helm.
If the secret values change after the installation, OpenShift redeploys RHTPA.
Prerequisites
A Red Hat OpenShift Container Platform cluster running version 4.14 or later.
- Support for the Ingress resource to serve publicly trusted certificates that use HTTPS.
- The ability to provision Transport Layer Security (TLS) certificates for Helm.
An AWS account with access to the following services:
- Simple Storage Service (S3)
- Simple Queue Service (SQS)
- Relational Database Service (RDS) using a PostgreSQL database instance.
- Cognito with an existing Cognito domain.
Have the following unversioned S3 bucket names created (see the AWS CLI sketch after these prerequisites for one way to create the buckets and queues):
- bombastic-UNIQUE_ID
- vexination-UNIQUE_ID
- v11y-UNIQUE_ID
Important: These bucket names must be unique across all AWS accounts in all AWS regions within the same partition. See Amazon’s S3 documentation for more information on bucket naming rules.
Have the following standard SQS queue names created:
- bombastic-failed-default
- bombastic-indexed-default
- bombastic-stored-default
- vexination-failed-default
- vexination-indexed-default
- vexination-stored-default
- v11y-failed-default
- v11y-indexed-default
- v11y-stored-default
- Access to the OpenShift web console with the cluster-admin role.
- A workstation with the oc and helm binaries installed.
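If you prefer the AWS CLI to the AWS Management Console, the following sketch shows one way to create a bucket and a queue. Repeat the commands for every bucket and queue name listed above; the region is an example value, and for regions other than us-east-1 the create-bucket command also needs a --create-bucket-configuration LocationConstraint setting:
Example
$ aws s3api create-bucket --bucket bombastic-UNIQUE_ID --region us-east-1
$ aws sqs create-queue --queue-name bombastic-failed-default --region us-east-1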
Procedure
On your workstation, open a terminal, and log in to OpenShift by using the command-line interface:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find the login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.
Create a new project for the RHTPA deployment:
Syntax
oc new-project PROJECT_NAME
Example
$ oc new-project trusted-profile-analyzer
Open a new file for editing:
Example
$ vi values-rhtpa-aws.yaml
- Copy and paste the RHTPA values file template into the new values-rhtpa-aws.yaml file.
- Update the values-rhtpa-aws.yaml file with your relevant AWS information:
- Replace REGIONAL_ENDPOINT with your Amazon S3 storage and Amazon SQS endpoint URLs.
- Replace COGNITO_DOMAIN_URL with your Amazon Cognito URL. You can find this information in the AWS Cognito Console, under the App Integration tab.
- Replace REGION, USER_POOL_ID, FRONTEND_CLIENT_ID, and WALKER_CLIENT_ID with your relevant Amazon Cognito information. You can find this information in the AWS Cognito Console, in the User pool overview section, and in the App clients and analytics section under the App Integration tab.
- Replace UNIQUE_ID with your unique bucket names for bombastic-, vexination-, and v11y-.
- Save the file, and quit the editor.
Create the S3 storage secret resource by using your AWS credentials:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: storage-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  aws_access_key_id: AWS_ACCESS_KEY
  aws_secret_access_key: AWS_SECRET_KEY
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: storage-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  aws_access_key_id: RHTPASTORAGE1EXAMPLE
  aws_secret_access_key: xBalrKUtnFEMI/K7RDENG/aPxRfzCYEXAMPLEKEY
EOF
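Kubernetes stores the values under a Secret's data field as base64-encoded strings. If oc apply rejects the plain values, encode them first and paste the encoded output into the secret; this encoding step is a general Kubernetes practice and is not part of the original procedure:
Example
$ echo -n 'RHTPASTORAGE1EXAMPLE' | base64
$ echo -n 'xBalrKUtnFEMI/K7RDENG/aPxRfzCYEXAMPLEKEY' | base64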
Create the SQS event bus secret resource by using your AWS credentials:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: event-bus-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  aws_access_key_id: AWS_ACCESS_KEY
  aws_secret_access_key: AWS_SECRET_KEY
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: event-bus-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  aws_access_key_id: RHTPAEVENTBS1EXAMPLE
  aws_secret_access_key: mBaliKUtnFEMI/K6RDENG/aPxRfzCYEXAMPLEKEY
EOF
Create an OpenID Connect (OIDC) walker client secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: oidc-walker
  namespace: PROJECT_NAME
type: Opaque
data:
  client-secret: SECRET
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: oidc-walker
  namespace: trusted-profile-analyzer
type: Opaque
data:
  client-secret: 5460cc91-4e20-4edd-881c-b15b169f8a79
EOF
Create two PostgreSQL database secret resources by using your Amazon RDS credentials.
A PostgreSQL standard user secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  db.host: DB_HOST
  db.name: DB_NAME
  db.user: USERNAME
  db.password: PASSWORD
  db.port: PORT
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  db.host: rds.us-east-1.amazonaws.com
  db.name: rhtpadb
  db.user: jdoe
  db.password: example1234
  db.port: 5432
EOF
A PostgreSQL administrator secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-admin-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  db.host: DB_HOST
  db.name: DB_NAME
  db.user: USERNAME
  db.password: PASSWORD
  db.port: PORT
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-admin-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  db.host: rds.us-east-1.amazonaws.com
  db.name: rhtpadb
  db.user: admin
  db.password: example1234
  db.port: 5432
EOF
- From the AWS Management Console, configure the Amazon Virtual Private Cloud (VPC) security group to allow port 5432.
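If you prefer the AWS CLI for this step, the following sketch adds an inbound rule for port 5432 to a security group. The security group ID and CIDR range are placeholders that you must replace with your own values:
Example
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 10.0.0.0/16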
Set up your shell environment:
Syntax
export NAMESPACE=PROJECT_NAME
export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')
Example
$ export NAMESPACE=trusted-profile-analyzer
$ export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')
Add the OpenShift Helm chart repository:
Example
$ helm repo add openshift-helm-charts https://charts.openshift.io/
Get the latest chart information from the Helm chart repositories:
Example
$ helm repo update
Run the Helm chart:
Syntax
helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL
Example
$ helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa-aws.yaml --set-string appDomain=$APP_DOMAIN_URL
Note: You can run this Helm chart many times to apply the currently configured state from the values file.
Once the installation finishes, you can log in to the RHTPA console by using a user’s credentials from the Cognito user pool. You can find the RHTPA console URL by running the following command:
Example
$ oc -n $NAMESPACE get route --selector app.kubernetes.io/name=spog-ui -o jsonpath='https://{.items[0].status.ingress[0].host}{"\n"}'
A scheduled Cron job runs each day to gather the latest Common Vulnerabilities and Exposures (CVE) data for RHTPA. Instead of waiting, you can manually start this Cron job by running the following command:
Example
$ oc -n $NAMESPACE create job --from=cronjob/v11y-walker v11y-walker-now
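To wait for the job to complete before deleting it, you can use oc wait; the 30-minute timeout is an arbitrary example value:
Example
$ oc -n $NAMESPACE wait --for=condition=complete job/v11y-walker-now --timeout=30m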
Once the job finishes, delete it:
Example
$ oc -n $NAMESPACE delete job v11y-walker-now
Additional resources
- Amazon Simple Storage Service (S3) endpoints and quota documentation.
- Amazon Simple Queue Service (SQS) documentation.
- Amazon Cognito documentation.
- Amazon Relational Database Service (RDS) documentation.
- Creating an Amazon S3 bucket.
- Creating a standard Amazon SQS queue.
2.3. Installing Trusted Profile Analyzer by using Helm with other services
You can install Red Hat’s Trusted Profile Analyzer (RHTPA) service on OpenShift by using a Helm chart from Red Hat. You need a Simple Storage Service (S3) compatible storage infrastructure, an OpenID Connect (OIDC) provider, a PostgreSQL database, and Red Hat AMQ Streams for OpenShift. This procedure guides you through integrating these various services with RHTPA by using a customized values file for Helm.
If the secret values change after the installation, OpenShift redeploys RHTPA.
Prerequisites
A Red Hat OpenShift Container Platform cluster running version 4.14 or later.
- Support for the Ingress resource to serve publicly trusted certificates that use HTTPS.
Have the following unversioned S3 bucket names created:
- bombastic-default
- vexination-default
- v11y-default
The AMQ Streams on OpenShift service with the following topic names created (see the KafkaTopic sketch after these prerequisites for one way to define them):
- bombastic-failed-default
- bombastic-indexed-default
- bombastic-stored-default
- vexination-failed-default
- vexination-indexed-default
- vexination-stored-default
- v11y-failed-default
- v11y-indexed-default
- v11y-stored-default
- An OIDC provider for authentication.
- A new PostgreSQL database.
- Access to the OpenShift web console with the cluster-admin role.
- A workstation with the oc and helm binaries installed.
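AMQ Streams manages topics through KafkaTopic custom resources. The following sketch shows one of the required topics; the my-cluster cluster name and the kafka namespace are assumptions for illustration, and the partition and replica counts are example values:
Example
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: bombastic-failed-default
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3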
Procedure
On your workstation, open a terminal, and log in to OpenShift by using the command-line interface:
Syntax
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
Example
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find the login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.
Create a new project for the RHTPA deployment:
Syntax
oc new-project PROJECT_NAME
Example
$ oc new-project trusted-profile-analyzer
Open a new file for editing:
Example
$ vi values-rhtpa.yaml
- Copy and paste the RHTPA values file template into the new values-rhtpa.yaml file.
- Update the values-rhtpa.yaml file with your information:
- Replace S3_ENDPOINT_URL with your relevant S3 storage information.
- Replace AMQ_ENDPOINT_URL and USER_NAME with your relevant AMQ Streams information.
- Replace OIDC_ISSUER_URL, FRONTEND_CLIENT_ID, and WALKER_CLIENT_ID with your relevant OIDC information.
- Save the file, and quit the editor.
Create the S3 storage secret resource with your credentials:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  user: USER_NAME
  password: PASSWORD
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  user: root
  password: example123
EOF
Create the AMQ Streams secret resource with your credentials:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: kafka-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  client_password: PASSWORD
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kafka-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  client_password: example123
EOF
Create an OIDC walker client secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: oidc-walker
  namespace: PROJECT_NAME
type: Opaque
data:
  client-secret: SECRET
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: oidc-walker
  namespace: trusted-profile-analyzer
type: Opaque
data:
  client-secret: 5460cc91-4e20-4edd-881c-b15b169f8a79
EOF
Create the two PostgreSQL database secret resources with your database credentials.
A PostgreSQL standard user secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  db.host: DB_HOST
  db.name: DB_NAME
  db.user: USERNAME
  db.password: PASSWORD
  db.port: PORT
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  db.host: postgresql.example.com
  db.name: rhtpadb
  db.user: jdoe
  db.password: example1234
  db.port: 5432
EOF
A PostgreSQL administrator secret resource:
Syntax
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-admin-credentials
  namespace: PROJECT_NAME
type: Opaque
data:
  db.host: DB_HOST
  db.name: DB_NAME
  db.user: USERNAME
  db.password: PASSWORD
  db.port: PORT
Example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-admin-credentials
  namespace: trusted-profile-analyzer
type: Opaque
data:
  db.host: postgresql.example.com
  db.name: rhtpadb
  db.user: admin
  db.password: example1234
  db.port: 5432
EOF
Set up your shell environment:
Syntax
export NAMESPACE=PROJECT_NAME
export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')
Example
$ export NAMESPACE=trusted-profile-analyzer
$ export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')
Add the OpenShift Helm chart repository:
Example
$ helm repo add openshift-helm-charts https://charts.openshift.io/
Get the latest chart information from the Helm chart repositories:
Example
$ helm repo update
Run the Helm chart:
Syntax
helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL
Example
$ helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa.yaml --set-string appDomain=$APP_DOMAIN_URL
Note: You can run this Helm chart many times to apply the currently configured state from the values file.
Once the installation finishes, you can log in to the RHTPA console by using a user’s credentials from your OIDC provider. You can find the RHTPA console URL by running the following command:
Example
$ oc -n $NAMESPACE get route --selector app.kubernetes.io/name=spog-ui -o jsonpath='https://{.items[0].status.ingress[0].host}{"\n"}'
A scheduled Cron job runs each day to gather the latest Common Vulnerabilities and Exposures (CVE) data for RHTPA. Instead of waiting, you can manually start this Cron job by running the following command:
Example
$ oc -n $NAMESPACE create job --from=cronjob/v11y-walker v11y-walker-now
Once the job finishes, delete it:
Example
$ oc -n $NAMESPACE delete job v11y-walker-now
Appendix A. Red Hat Trusted Profile Analyzer with AWS values file template
Red Hat’s Trusted Profile Analyzer (RHTPA) with Amazon Web Services (AWS) values file template for use by the RHTPA Helm chart.
Template
appDomain: $APP_DOMAIN_URL
tracing: {}
ingress:
className: openshift-default
storage:
region: REGIONAL_ENDPOINT
accessKey:
valueFrom:
secretKeyRef:
name: storage-credentials
key: aws_access_key_id
secretKey:
valueFrom:
secretKeyRef:
name: storage-credentials
key: aws_secret_access_key
eventBus:
type: sqs
region: REGIONAL_ENDPOINT
accessKey:
valueFrom:
secretKeyRef:
name: event-bus-credentials
key: aws_access_key_id
secretKey:
valueFrom:
secretKeyRef:
name: event-bus-credentials
key: aws_secret_access_key
authenticator:
type: cognito
cognitoDomainUrl: COGNITO_DOMAIN_URL
oidc:
issuerUrl: https://cognito-idp.REGION.amazonaws.com/USER_POOL_ID
clients:
frontend:
clientId: FRONTEND_CLIENT_ID
walker:
clientId: WALKER_CLIENT_ID
clientSecret:
valueFrom:
secretKeyRef:
name: oidc-walker
key: client-secret
bombastic:
bucket: bombastic-UNIQUE_ID
topics:
failed: bombastic-failed-default
indexed: bombastic-indexed-default
stored: bombastic-stored-default
vexination:
bucket: vexination-UNIQUE_ID
topics:
failed: vexination-failed-default
indexed: vexination-indexed-default
stored: vexination-stored-default
v11y:
bucket: v11y-UNIQUE_ID
topics:
failed: v11y-failed-default
indexed: v11y-indexed-default
stored: v11y-stored-default
guac:
database:
name:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.name
host:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.host
port:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.port
username:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.user
password:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.password
initDatabase:
name:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.name
host:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.host
port:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.port
username:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.user
password:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.password
Appendix B. Red Hat Trusted Profile Analyzer with other services values file template
Red Hat’s Trusted Profile Analyzer (RHTPA) with other services values file template for use by the RHTPA Helm chart.
Template
appDomain: $APP_DOMAIN_URL
tracing: {}
ingress:
className: openshift-default
storage:
endpoint: S3_ENDPOINT_URL
accessKey:
valueFrom:
secretKeyRef:
name: s3-credentials
key: user
secretKey:
valueFrom:
secretKeyRef:
name: s3-credentials
key: password
eventBus:
type: kafka
bootstrapServers: AMQ_ENDPOINT_URL:9092
config:
securityProtocol: SASL_PLAINTEXT
username: "USER_NAME"
password:
valueFrom:
secretKeyRef:
name: kafka-credentials
key: client_password
mechanism: SCRAM-SHA-512
oidc:
issuerUrl: OIDC_ISSUER_URL
clients:
frontend:
clientId: FRONTEND_CLIENT_ID
walker:
clientId: WALKER_CLIENT_ID
clientSecret:
valueFrom:
secretKeyRef:
name: oidc-walker
key: client-secret
bombastic:
bucket: bombastic-default
topics:
failed: bombastic-failed-default
indexed: bombastic-indexed-default
stored: bombastic-stored-default
vexination:
bucket: vexination-default
topics:
failed: vexination-failed-default
indexed: vexination-indexed-default
stored: vexination-stored-default
v11y:
bucket: v11y-default
topics:
failed: v11y-failed-default
indexed: v11y-indexed-default
stored: v11y-stored-default
guac:
database:
name:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.name
host:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.host
port:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.port
username:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.user
password:
valueFrom:
secretKeyRef:
name: postgresql-credentials
key: db.password
initDatabase:
name:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.name
host:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.host
port:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.port
username:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.user
password:
valueFrom:
secretKeyRef:
name: postgresql-admin-credentials
key: db.password