Chapter 2. Select your installation platform


As a systems administrator, you can choose between two installation platforms to run Red Hat Trusted Profile Analyzer (RHTPA). You can deploy RHTPA to Red Hat OpenShift Container Platform on Amazon Web Services (AWS) or other service providers by using a Helm chart from Red Hat. You can also deploy RHTPA to Red Hat Enterprise Linux by using Ansible.

Important

Deploying RHTPA to Red Hat Enterprise Linux is currently a Technology Preview feature.

Select your target installation platform:

2.1. Installing Trusted Profile Analyzer by using Ansible

You can install Red Hat Trusted Profile Analyzer (RHTPA) on Red Hat Enterprise Linux by using an Ansible Playbook provided by Red Hat. This Ansible deployment of RHTPA allows you to specify your own PostgreSQL database, OpenID Connect (OIDC) provider, Simple Storage Service (S3), and Simple Queue Service (SQS) infrastructure.

Important

Deploying RHTPA on Red Hat Enterprise Linux by using Ansible is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

Prerequisites

  • Red Hat Enterprise Linux version 9.3 or later.
  • A Red Hat user account to access the Red Hat Hybrid Cloud Console.

Procedure

  1. Log in to the Red Hat Hybrid Cloud Console with your Red Hat credentials.
  2. From the home page, click the Services drop-down menu, and click Red Hat Ansible Automation Platform.
  3. From the navigational menu, expand Automation Hub, and click Collections.
  4. In the search field, type rhtpa, and press Enter.
  5. Click the trusted_profile_analyzer link on the Red Hat Trusted Profile Analyzer tile.
  6. Click the Documentation tab, and follow the steps there to complete the installation of RHTPA on Red Hat Enterprise Linux.

    Note

    For a detailed overview of all the configuration parameters, click the tpa_single_node link under the Roles section.
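
For orientation only, the following is a minimal sketch of what invoking the role from your own playbook might look like. It assumes the collection's fully qualified name is redhat.trusted_profile_analyzer and that your ansible.cfg already points at Automation Hub with a valid token; the inventory host name is a placeholder, and the required role variables for your PostgreSQL, OIDC, S3, and SQS infrastructure are documented with the tpa_single_node role mentioned above.

    # Install the collection from Automation Hub (requires the Automation Hub
    # server URL and token to be configured in your ansible.cfg).
    $ ansible-galaxy collection install redhat.trusted_profile_analyzer

    # rhtpa-install.yaml -- illustrative playbook skeleton
    - name: Deploy Trusted Profile Analyzer on RHEL
      hosts: rhtpa_host          # placeholder inventory host
      become: true
      roles:
        - role: redhat.trusted_profile_analyzer.tpa_single_node
          # Set the role variables for your PostgreSQL, OIDC, S3, and SQS
          # infrastructure as documented with the tpa_single_node role.

    # Run the playbook against your inventory.
    $ ansible-playbook -i inventory rhtpa-install.yaml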

2.2. Installing Trusted Profile Analyzer by using Helm with Amazon Web Services

You can install the Red Hat Trusted Profile Analyzer (RHTPA) service on OpenShift by using a Helm chart from Red Hat. This procedure guides you through integrating Amazon Web Services (AWS) with RHTPA by using a customized values file for Helm.

Important

If the secret values change after the installation, OpenShift redeploys RHTPA.

Prerequisites

  • A Red Hat OpenShift Container Platform cluster running version 4.14 or later.

    • Support for the Ingress resource to serve publicly trusted certificates that use HTTPS.
  • The ability to provision Transport Layer Security (TLS) certificates for Helm.
  • An AWS account with access to the following services:

    • Simple Storage Service (S3)
    • Simple Queue Service (SQS)
    • Relational Database Service (RDS) using a PostgreSQL database instance.
    • Cognito with an existing Cognito domain.
  • The following unversioned S3 buckets created (see the AWS CLI sketch after this list):

    • bombastic-UNIQUE_ID
    • vexination-UNIQUE_ID
    • v11y-UNIQUE_ID

      Important

      These bucket names must be unique across all AWS accounts in all AWS regions within the same partition. See Amazon’s S3 documentation for more information on bucket naming rules.

  • The following standard SQS queues created:

    • bombastic-failed-default
    • bombastic-indexed-default
    • bombastic-stored-default
    • vexination-failed-default
    • vexination-indexed-default
    • vexination-stored-default
    • v11y-failed-default
    • v11y-indexed-default
    • v11y-stored-default
  • Access to the OpenShift web console with the cluster-admin role.
  • A workstation with the oc and helm binaries installed.
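
One possible way to create the buckets and queues listed above is with the AWS CLI; the following sketch assumes the us-east-1 region and an illustrative UNIQUE_ID of example123, and it leaves bucket versioning at its default (disabled). For regions other than us-east-1, aws s3api create-bucket also requires the --create-bucket-configuration LocationConstraint option.

    $ export UNIQUE_ID=example123   # illustrative suffix; must make the bucket names globally unique

    # Create the three S3 buckets.
    $ for b in bombastic vexination v11y; do
        aws s3api create-bucket --bucket "${b}-${UNIQUE_ID}" --region us-east-1
      done

    # Create the nine standard SQS queues.
    $ for svc in bombastic vexination v11y; do
        for state in failed indexed stored; do
          aws sqs create-queue --queue-name "${svc}-${state}-default"
        done
      done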

Procedure

  1. On your workstation, open a terminal, and log in to OpenShift by using the command-line interface:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for command-line use from the OpenShift web console. Log in to the OpenShift web console, click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.

  2. Create a new project for the RHTPA deployment:

    Syntax

    oc new-project PROJECT_NAME

    Example

    $ oc new-project trusted-profile-analyzer

  3. Open a new file for editing:

    Example

    $ vi values-rhtpa-aws.yaml

  4. Copy and paste the RHTPA values file template into the new values-rhtpa-aws.yaml file.
  5. Update the values-rhtpa-aws.yaml file with your relevant AWS information.

    1. Replace REGIONAL_ENDPOINT with your Amazon S3 and Amazon SQS regional endpoint URLs.
    2. Replace COGNITO_DOMAIN_URL with your Amazon Cognito domain URL. You can find this information in the AWS Cognito Console, under the App Integration tab.
    3. Replace REGION, USER_POOL_ID, FRONTEND_CLIENT_ID, and WALKER_CLIENT_ID with your relevant Amazon Cognito information. You can find this information in the AWS Cognito Console, in the User pool overview section, and in the App clients and analytics section under the App Integration tab.
    4. Replace UNIQUE_ID with your unique bucket names for bombastic-, vexination-, and v11y-.
    5. Save the file, and quit the editor.
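
    Before continuing, you can optionally confirm that no placeholders remain in the file; this check simply searches for the placeholder names used in the previous step, so no output means every placeholder was replaced.

    $ grep -nE 'REGIONAL_ENDPOINT|COGNITO_DOMAIN_URL|REGION|USER_POOL_ID|FRONTEND_CLIENT_ID|WALKER_CLIENT_ID|UNIQUE_ID' values-rhtpa-aws.yaml
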
  6. Create the S3 storage secret resource by using your AWS credentials:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: storage-credentials
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      aws_access_key_id: AWS_ACCESS_KEY
      aws_secret_access_key: AWS_SECRET_KEY

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: storage-credentials
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      aws_access_key_id: RHTPASTORAGE1EXAMPLE
      aws_secret_access_key: xBalrKUtnFEMI/K7RDENG/aPxRfzCYEXAMPLEKEY
    EOF

  7. Create the SQS event bus secret resource by using your AWS credentials:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: event-bus-credentials
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      aws_access_key_id: AWS_ACCESS_KEY
      aws_secret_access_key: AWS_SECRET_KEY

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: event-bus-credentials
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      aws_access_key_id: RHTPAEVENTBS1EXAMPLE
      aws_secret_access_key: mBaliKUtnFEMI/K6RDENG/aPxRfzCYEXAMPLEKEY
    EOF

  8. Create an OpenID Connect (OIDC) walker client secret resource:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc-walker
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      client-secret: SECRET

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc-walker
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      client-secret: 5460cc91-4e20-4edd-881c-b15b169f8a79
    EOF
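
    The walker client secret is the app client secret from your Cognito user pool. If you prefer the command line to the console, a sketch like the following retrieves it; USER_POOL_ID and WALKER_CLIENT_ID are the same illustrative placeholders used in the values file.

    $ aws cognito-idp describe-user-pool-client \
        --user-pool-id USER_POOL_ID \
        --client-id WALKER_CLIENT_ID \
        --query 'UserPoolClient.ClientSecret' --output text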

  9. Create two PostgreSQL database secret resources by using your Amazon RDS credentials.

    1. A PostgreSQL standard user secret resource:

      Syntax

      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-credentials
        namespace: PROJECT_NAME
      type: Opaque
      stringData:
        db.host: DB_HOST
        db.name: DB_NAME
        db.user: USERNAME
        db.password: PASSWORD
        db.port: PORT

      Example

      $ cat <<EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-credentials
        namespace: trusted-profile-analyzer
      type: Opaque
      stringData:
        db.host: rds.us-east-1.amazonaws.com
        db.name: rhtpadb
        db.user: jdoe
        db.password: example1234
        db.port: "5432"
      EOF

    2. A PostgreSQL administrator secret resource:

      Syntax

      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-admin-credentials
        namespace: PROJECT_NAME
      type: Opaque
      stringData:
        db.host: DB_HOST
        db.name: DB_NAME
        db.user: USERNAME
        db.password: PASSWORD
        db.port: PORT

      Example

      $ cat <<EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-admin-credentials
        namespace: trusted-profile-analyzer
      type: Opaque
      stringData:
        db.host: rds.us-east-1.amazonaws.com
        db.name: rhtpadb
        db.user: admin
        db.password: example1234
        db.port: "5432"
      EOF

    3. From the AWS Management Console, configure the Amazon Virtual Private Cloud (VPC) security group to allow inbound traffic on port 5432.
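
    Optionally, before running the Helm chart you can verify that the RDS instance accepts connections on port 5432 from your network; the host, database, and user below are the illustrative values from the example secrets.

    $ psql "host=rds.us-east-1.amazonaws.com port=5432 dbname=rhtpadb user=jdoe sslmode=require" -c 'SELECT 1;'
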
  10. Set up your shell environment:

    Syntax

    export NAMESPACE=PROJECT_NAME
    export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')

    Example

    $ export NAMESPACE=trusted-profile-analyzer
    $ export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')

  11. Add the OpenShift Helm chart repository:

    Example

    $ helm repo add openshift-helm-charts https://charts.openshift.io/

  12. Get the latest chart information from the Helm chart repositories:

    Example

    $ helm repo update

  13. Run the Helm chart:

    Syntax

    helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL

    Example

    $ helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa-aws.yaml --set-string appDomain=$APP_DOMAIN_URL

    Note

    You can run this Helm chart many times to apply the currently configured state from the values file.
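
    If the release is already installed and you only changed the values file, one common way to reapply the configuration is helm upgrade; this sketch reuses the release name, namespace, and values file from the example above.

    $ helm upgrade redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa-aws.yaml --set-string appDomain=$APP_DOMAIN_URL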

  14. Once the installation finishes, you can log in to the RHTPA console by using a user’s credentials from the Cognito user pool. You can find the RHTPA console URL by running the following command:

    Example

    $ oc -n $NAMESPACE get route --selector app.kubernetes.io/name=spog-ui -o jsonpath='https://{.items[0].status.ingress[0].host}{"\n"}'

  15. A scheduled Cron job runs each day to gather the latest Common Vulnerabilities and Exposures (CVE) data for RHTPA. Instead of waiting, you can manually start this Cron job by running the following command:

    Example

    $ oc -n $NAMESPACE create job --from=cronjob/v11y-walker v11y-walker-now

    Once the Cron job finishes, delete this Cron job:

    Example

    $ oc -n $NAMESPACE delete job v11y-walker-now
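
    To confirm that the deployment is healthy, you can list the pods and jobs in the project; this is a generic check, and the exact pod names depend on the chart version.

    $ oc -n $NAMESPACE get pods
    $ oc -n $NAMESPACE get jobs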

2.3. Installing Trusted Profile Analyzer by using Helm with other services

You can install the Red Hat Trusted Profile Analyzer (RHTPA) service on OpenShift by using a Helm chart from Red Hat. You need a Simple Storage Service (S3)-compatible storage infrastructure, an OpenID Connect (OIDC) provider, a PostgreSQL database, and Red Hat AMQ Streams for OpenShift. This procedure guides you through integrating these services with RHTPA by using a customized values file for Helm.

Important

If the secret values change after the installation, OpenShift redeploys RHTPA.

Prerequisites

  • A Red Hat OpenShift Container Platform cluster running version 4.14 or later.

    • Support for the Ingress resource to serve publicly trusted certificates that use HTTPS.
  • The following unversioned S3 buckets created:

    • bombastic-default
    • vexination-default
    • v11y-default
  • The AMQ Streams on OpenShift service with the following topics created (see the KafkaTopic sketch after this list):

    • bombastic-failed-default
    • bombastic-indexed-default
    • bombastic-stored-default
    • vexination-failed-default
    • vexination-indexed-default
    • vexination-stored-default
    • v11y-failed-default
    • v11y-indexed-default
    • v11y-stored-default
  • An OIDC provider for authentication.
  • A new PostgreSQL database.
  • Access to the OpenShift web console with the cluster-admin role.
  • A workstation with the oc and helm binaries installed.
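
If you manage AMQ Streams declaratively, one way to satisfy the topic prerequisite above is with KafkaTopic custom resources. The following sketch assumes an AMQ Streams (Strimzi) cluster named my-cluster running in the kafka namespace; both names, and the partition and replica counts, are illustrative. Repeat the resource for each of the nine topic names.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: bombastic-failed-default
      namespace: kafka                  # namespace where your Kafka cluster runs (illustrative)
      labels:
        strimzi.io/cluster: my-cluster  # name of your Kafka custom resource (illustrative)
    spec:
      partitions: 1
      replicas: 1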

Procedure

  1. On your workstation, open a terminal, and log in to OpenShift by using the command-line interface:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443

    Note

    You can find your login token and URL for command-line use from the OpenShift web console. Log in to the OpenShift web console, click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.

  2. Create a new project for the RHTPA deployment:

    Syntax

    oc new-project PROJECT_NAME

    Example

    $ oc new-project trusted-profile-analyzer

  3. Open a new file for editing:

    Example

    $ vi values-rhtpa.yaml

  4. Copy and paste the RHTPA values file template into the new values-rhtpa.yaml file.
  5. Update the values-rhtpa.yaml file with your information.

    1. Replace S3_ENDPOINT_URL with your relevant S3 storage information.
    2. Replace AMQ_ENDPOINT_URL and USER_NAME with your relevant AMQ Streams information.
    3. Replace OIDC_ISSUER_URL, FRONTEND_CLIENT_ID, and WALKER_CLIENT_ID with your relevant OIDC information.
    4. Save the file, and quit the editor.
  6. Create the S3 storage secret resource with your credentials:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: s3-credentials
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      user: USER_NAME
      password: PASSWORD

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: s3-credentials
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      user: root
      password: example123
    EOF

  7. Create the AMQ Streams secret resource with your credentials:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-credentials
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      client_password: PASSWORD

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-credentials
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      client_password: example123
    EOF

  8. Create an OIDC walker client secret resource:

    Syntax

    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc-walker
      namespace: PROJECT_NAME
    type: Opaque
    stringData:
      client-secret: SECRET

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc-walker
      namespace: trusted-profile-analyzer
    type: Opaque
    stringData:
      client-secret: 5460cc91-4e20-4edd-881c-b15b169f8a79
    EOF

  9. Create the two PostgreSQL database secret resources with your database credentials.

    1. A PostgreSQL standard user secret resource:

      Syntax

      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-credentials
        namespace: PROJECT_NAME
      type: Opaque
      stringData:
        db.host: DB_HOST
        db.name: DB_NAME
        db.user: USERNAME
        db.password: PASSWORD
        db.port: PORT

      Example

      $ cat <<EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-credentials
        namespace: trusted-profile-analyzer
      type: Opaque
      stringData:
        db.host: postgresql.example.com
        db.name: rhtpadb
        db.user: jdoe
        db.password: example1234
        db.port: "5432"
      EOF

    2. A PostgreSQL administrator secret resource:

      Syntax

      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-admin-credentials
        namespace: PROJECT_NAME
      type: Opaque
      stringData:
        db.host: DB_HOST
        db.name: DB_NAME
        db.user: USERNAME
        db.password: PASSWORD
        db.port: PORT

      Example

      $ cat <<EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: postgresql-admin-credentials
        namespace: trusted-profile-analyzer
      type: Opaque
      stringData:
        db.host: postgresql.example.com
        db.name: rhtpadb
        db.user: admin
        db.password: example1234
        db.port: "5432"
      EOF

  10. Set up your shell environment:

    Syntax

    export NAMESPACE=PROJECT_NAME
    export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')

    Example

    $ export NAMESPACE=trusted-profile-analyzer
    $ export APP_DOMAIN_URL=-$NAMESPACE.$(oc -n openshift-ingress-operator get ingresscontrollers.operator.openshift.io default -o jsonpath='{.status.domain}')

  11. Add the OpenShift Helm chart repository:

    Example

    $ helm repo add openshift-helm-charts https://charts.openshift.io/

  12. Get the latest chart information from the Helm chart repositories:

    Example

    $ helm repo update

  13. Run the Helm chart:

    Syntax

    helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL

    Example

    $ helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values values-rhtpa.yaml --set-string appDomain=$APP_DOMAIN_URL

    Note

    You can run this Helm chart many times to apply the currently configured state from the values file.

  14. Once the installation finishes, you can log in to the RHTPA console by using a user’s credentials from your OIDC provider. You can find the RHTPA console URL by running the following command:

    Example

    $ oc -n $NAMESPACE get route --selector app.kubernetes.io/name=spog-ui -o jsonpath='https://{.items[0].status.ingress[0].host}{"\n"}'

  15. A scheduled Cron job runs each day to gather the latest Common Vulnerabilities and Exposures (CVE) data for RHTPA. Instead of waiting, you can manually start this Cron job by running the following command:

    Example

    $ oc -n $NAMESPACE create job --from=cronjob/v11y-walker v11y-walker-now

    Once the Cron job finishes, delete this Cron job:

    Example

    $ oc -n $NAMESPACE delete job v11y-walker-now
