Chapter 2. Configuring TrustyAI


To configure model monitoring with TrustyAI for data scientists to use in OpenShift AI, a cluster administrator does the following tasks:

  • Configure monitoring for the model serving platform
  • Enable the TrustyAI component in the Red Hat OpenShift AI Operator
  • Configure TrustyAI to use a database, if you want to use a database instead of a persistent volume claim (PVC) for storage
  • Install the TrustyAI service on each project that contains models that the data scientists want to monitor
  • (Optional) Configure TrustyAI and KServe RawDeployment mode integration

For deploying large models such as large language models (LLMs), use the model serving platform. To configure monitoring for this platform, see Configuring monitoring for the model serving platform.

2.2. Enabling the TrustyAI component

To allow your data scientists to use model monitoring with TrustyAI, you must enable the TrustyAI component in OpenShift AI.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You have access to the data science cluster.
  • You have installed Red Hat OpenShift AI.

Procedure

  1. In the OpenShift console, click Operators → Installed Operators.
  2. Search for the Red Hat OpenShift AI Operator, and then click the Operator name to open the Operator details page.
  3. Click the Data Science Cluster tab.
  4. Click the default instance name (for example, default-dsc) to open the instance details page.
  5. Click the YAML tab to show the instance specifications.
  6. In the spec:components section, set the managementState field for the trustyai component to Managed:

     trustyai:
        managementState: Managed
  7. Click Save.
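If you prefer the CLI, the steps above can be sketched as a single merge patch. This is a sketch only; it assumes the default DataScienceCluster instance is named default-dsc, which you should confirm in your cluster.

```shell
# Build the merge patch that sets the trustyai component to Managed.
# The instance name "default-dsc" is an assumption; check yours with:
#   oc get datasciencecluster
PATCH='{"spec":{"components":{"trustyai":{"managementState":"Managed"}}}}'
echo "$PATCH"
# Apply it (requires cluster administrator access):
#   oc patch datasciencecluster default-dsc --type merge -p "$PATCH"
```

A merge patch changes only the fields it names, so the rest of the DataScienceCluster specification is left untouched.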

Verification

Check the status of the trustyai-service-operator pod:

  1. In the OpenShift console, from the Project list, select redhat-ods-applications.
  2. Click Workloads → Deployments.
  3. Search for the trustyai-service-operator-controller-manager deployment. Check the status:

    1. Click the deployment name to open the deployment details page.
    2. Click the Pods tab.
    3. View the pod status.

      When the status of the trustyai-service-operator-controller-manager-<pod-id> pod is Running, the pod is ready to use.
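The same check can be sketched from the CLI. The pod name and columns below are illustrative sample output, not real cluster state.

```shell
# CLI equivalent of the verification above:
#   oc get pods -n redhat-ods-applications | grep trustyai-service-operator
# The parsing below runs against illustrative sample output only.
sample='trustyai-service-operator-controller-manager-abc123   1/1   Running   0   5m'
status=$(echo "$sample" | awk '{print $3}')   # third column is the pod status
echo "operator pod status: $status"
```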

2.3. Configuring TrustyAI with a database

If you have a relational database in your OpenShift cluster such as MySQL or MariaDB, you can configure TrustyAI to use your database instead of a persistent volume claim (PVC). Using a database instead of a PVC for storage can improve scalability, performance, and data management in TrustyAI. Provide TrustyAI with a database configuration secret before deployment. You can create a secret or specify the name of an existing Kubernetes secret within your project.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc), as described in the appropriate documentation for your cluster.

  • You have enabled the TrustyAI component, as described in Enabling the TrustyAI component.
  • The data scientist has created a project, as described in Creating a project, that contains the models that the data scientist wants to monitor.
  • If you are configuring the TrustyAI service with an external MySQL database, your database must already be in your cluster and use at least MySQL version 5.x. However, Red Hat recommends that you use MySQL version 8.x.
  • If you are configuring the TrustyAI service with a MariaDB database, your database must already be in your cluster and use MariaDB version 10.3 or later. However, Red Hat recommends that you use at least MariaDB version 10.5.
Note

The Transport Layer Security (TLS) protocol does not work with MariaDB operator version 0.29 or later.

The MariaDB operator for s390x is not supported at this time.

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI (oc) as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. Optional: If you want to use a TLS connection between TrustyAI and the database, create a TrustyAI service database TLS secret that uses the same certificates that you want to use for the database.

    1. Create a YAML file to contain your TLS secret and add the following code:

      apiVersion: v1
      kind: Secret
      metadata:
        name: <service_name>-db-tls
      type: kubernetes.io/tls
      data:
        tls.crt: |
          <TLS CERTIFICATE>
        tls.key: |
          <TLS KEY>
    2. Save the file with the file name <service_name>-db-tls.yaml. For example, if your service name is trustyai-service, save the file as trustyai-service-db-tls.yaml.
    3. Apply the YAML file in the project that contains the models that the data scientist wants to monitor:

      $ oc apply -f <service_name>-db-tls.yaml -n <project_name>
  3. Create a secret (or specify an existing one) that has your database credentials.

    1. Create a YAML file to contain your secret and add the following code:

      apiVersion: v1
      kind: Secret
      metadata:
        name: db-credentials
      type: Opaque
      stringData:
        databaseKind: mariadb 1
        databaseUsername: <TrustyAI_username> 2
        databasePassword: <TrustyAI_password> 3
        databaseService: mariadb-service 4
        databasePort: "3306" 5
        databaseGeneration: update 6
        databaseName: trustyai_service 7

      1
      The only currently supported databaseKind value is mariadb.
      2
      The username that you want TrustyAI to use when connecting to the database.
      3
      The password that TrustyAI must use when connecting to the database.
      4
      The Kubernetes service that TrustyAI must use when connecting to the database (the default is mariadb).
      5
      The port that TrustyAI must use when connecting to the database (the default is 3306). Because stringData values must be strings, quote the port number.
      6
      The database schema generation strategy that TrustyAI uses. This value sets the quarkus.hibernate-orm.database.generation argument, which determines how TrustyAI interacts with the database on its initial connection. Set to none, create, drop-and-create, drop, update, or validate.
      7
      The name of the individual database within the database service that the username and password authenticate to, and the specific database that TrustyAI reads from and writes to on the database server.
    2. Save the file with the file name db-credentials.yaml. You will need this name later when you install or change the TrustyAI service.
    3. Apply the YAML file in the project that contains the models that the data scientist wants to monitor:

      $ oc apply -f db-credentials.yaml -n <project_name>
  4. If you are installing TrustyAI for the first time on a project, continue to Installing the TrustyAI service for a project.

    If you already installed TrustyAI on a project, you can migrate the existing TrustyAI service from using a PVC to using a database.

    1. Create a YAML file to update the TrustyAI service custom resource (CR) and add the following code:

      apiVersion: trustyai.opendatahub.io/v1
      kind: TrustyAIService
      metadata:
        annotations:
          trustyai.opendatahub.io/db-migration: "true" 1
        name: trustyai-service 2
      spec:
        storage:
          format: "DATABASE" 3
          folder: "/inputs" 4
          size: "1Gi" 5
          databaseConfigurations: <database_secret_credentials> 6
        data:
          filename: "data.csv" 7
        metrics:
          schedule: "5s" 8

      1
      Set to true to prompt the migration from PVC to database storage.
      2
      The name of the TrustyAI service instance.
      3
      The storage format for the data. Set this field to DATABASE.
      4
      The location within the PVC where you were storing the data. This must match the value specified in the existing CR.
      5
      The size of the data to request.
      6
      The name of the secret with your database credentials that you created in an earlier step. For example, db-credentials.
      7
      The suffix for the existing stored data files. This must match the value specified in the existing CR.
      8
      The interval at which to calculate the metrics. The default is 5s. The duration is specified with the ISO-8601 format. For example, 5s for 5 seconds, 5m for 5 minutes, and 5h for 5 hours.
    2. Save the file. For example, trustyai_crd.yaml.
    3. Apply the new TrustyAI service CR to the project that contains the models that the data scientist wants to monitor:

      $ oc apply -f trustyai_crd.yaml -n <project_name>
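The db-credentials secret from step 3 can also be generated from shell variables, so the password is not hard-coded in a tracked file. This is a sketch; the user name and password values are placeholders.

```shell
# Write the database credentials secret manifest from variables.
# DB_USER and DB_PASS are placeholder values; substitute your own.
DB_USER=trustyai
DB_PASS=changeme
cat > db-credentials.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  databaseKind: mariadb
  databaseUsername: ${DB_USER}
  databasePassword: ${DB_PASS}
  databaseService: mariadb-service
  databasePort: "3306"
  databaseGeneration: update
  databaseName: trustyai_service
EOF
grep -q 'databaseKind: mariadb' db-credentials.yaml && echo "manifest written"
```

Apply the result as in step 3 with `oc apply -f db-credentials.yaml -n <project_name>`. Note that stringData values must be strings, which is why the port is quoted.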

2.4. Installing the TrustyAI service for a project

Install the TrustyAI service on a project to provide access to its features for all models deployed within that project. An instance of the TrustyAI service is required for each project, or namespace, that contains models that the data scientists want to monitor.

Note

Install only one instance of the TrustyAI service in a project. Multiple instances in the same project can result in unexpected behavior.

TrustyAI only supports models deployed with OpenVINO Model Server (OVMS). Non-OVMS models are not supported. Installing TrustyAI into a namespace where non-OVMS models are deployed can cause errors in the TrustyAI service.

You can use the OpenShift CLI (oc) to install an instance of the TrustyAI service.

Prerequisites

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Navigate to the project that contains the models that the data scientist wants to monitor.

    oc project <project_name>

    For example:

    oc project my-project
  4. Create a TrustyAIService custom resource (CR) file, for example trustyai_crd.yaml:

    Example CR file for TrustyAI using a database

    apiVersion: trustyai.opendatahub.io/v1
    kind: TrustyAIService
    metadata:
      name: trustyai-service 1
    spec:
      storage:
        format: "DATABASE" 2
        size: "1Gi" 3
        databaseConfigurations: <database_secret_credentials> 4
      metrics:
        schedule: "5s" 5

    1
    The name of the TrustyAI service instance.
    2
    The storage format for the data, either DATABASE or PVC (persistent volume claim). Red Hat recommends that you use a database setup for better scalability, performance, and data management in TrustyAI.
    3
    The size of the data to request.
    4
    The name of the secret with your database credentials that you created in Configuring TrustyAI with a database. For example, db-credentials.
    5
    The interval at which to calculate the metrics. The default is 5s. The duration is specified with the ISO-8601 format. For example, 5s for 5 seconds, 5m for 5 minutes, and 5h for 5 hours.

    Example CR file for TrustyAI using a PVC

    apiVersion: trustyai.opendatahub.io/v1
    kind: TrustyAIService
    metadata:
      name: trustyai-service 1
    spec:
      storage:
        format: "PVC" 2
        folder: "/inputs" 3
        size: "1Gi" 4
      data:
        filename: "data.csv" 5
        format: "CSV" 6
      metrics:
        schedule: "5s" 7
        batchSize: 5000 8

    1
    The name of the TrustyAI service instance.
    2
    The storage format for the data, either DATABASE or PVC (persistent volume claim).
    3
    The location within the PVC where you want to store the data.
    4
    The size of the PVC to request.
    5
    The suffix for the stored data files.
    6
    The format of the data. Currently, only comma-separated value (CSV) format is supported.
    7
    The interval at which to calculate the metrics. The default is 5s. The duration is specified with the ISO-8601 format. For example, 5s for 5 seconds, 5m for 5 minutes, and 5h for 5 hours.
    8
    (Optional) The observation’s historical window size to use for metrics calculation. The default is 5000, which means that the metrics are calculated using the 5,000 latest inferences.
  5. Add the TrustyAI service’s CR to your project:

    oc apply -f trustyai_crd.yaml

    This command returns output similar to the following:

    trustyai-service created
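The database-backed CR from step 4 can be written with a heredoc; this is a sketch in which the secret name db-credentials comes from the earlier configuration section and should be adjusted to match your cluster.

```shell
# Write the database-backed TrustyAIService CR from step 4 to a file.
# "db-credentials" is assumed to be the secret created earlier.
cat > trustyai_crd.yaml <<'EOF'
apiVersion: trustyai.opendatahub.io/v1
kind: TrustyAIService
metadata:
  name: trustyai-service
spec:
  storage:
    format: "DATABASE"
    size: "1Gi"
    databaseConfigurations: db-credentials
  metrics:
    schedule: "5s"
EOF
grep -q 'format: "DATABASE"' trustyai_crd.yaml && echo "manifest written"
```

Apply the file as in step 5 with `oc apply -f trustyai_crd.yaml`.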

Verification

Verify that you installed the TrustyAI service:

oc get pods | grep trustyai

You should see a response similar to the following:

trustyai-service-5d45b5884f-96h5z             1/1     Running

2.5. Configuring TrustyAI and KServe RawDeployment mode integration

To use the TrustyAI service with KServe RawDeployment mode, you must first update the KServe ConfigMap, and then create another ConfigMap in your model’s namespace to hold the Certificate Authority (CA) certificate.

Prerequisites

  • You have installed Red Hat OpenShift AI.
  • You have cluster administrator privileges for your OpenShift AI cluster.
  • You have access to a data science cluster that has TrustyAI enabled.
  • You have enabled the model serving platform.

Procedure

  1. Update the KServe ConfigMap (inferenceservice-config):

    1. From the OpenShift console, click Workloads → ConfigMaps.
    2. From the project drop-down list, select the redhat-ods-applications namespace.
    3. Find the inferenceservice-config ConfigMap.
    4. Click the options menu (⋮) for that ConfigMap, and then click Edit ConfigMap.
    5. Add the following parameters to the logger key:

       "caBundle": "kserve-logger-ca-bundle",
       "caCertFile": "service-ca.crt",
       "tlsSkipVerify": false
    6. Click Save.
  2. Create a ConfigMap in your model’s namespace to hold the CA certificate:

    1. Click Create Config Map.
    2. Enter the following code in the created ConfigMap:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: kserve-logger-ca-bundle
          namespace: <your-model-namespace>
          annotations:
            service.beta.openshift.io/inject-cabundle: "true"
        data: {}
  3. Click Save.
Note

The caBundle name can be any valid Kubernetes name, as long as it matches the name that you used in the KServe ConfigMap. The caCertFile value must match the certificate name available in the CA bundle.
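The ConfigMap from step 2 can also be written with a heredoc. This is a sketch; the namespace my-project is a placeholder for your model namespace.

```shell
# Write the CA-bundle ConfigMap from step 2 to a file.
# "my-project" is a placeholder model namespace.
NS=my-project
cat > kserve-logger-ca-bundle.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kserve-logger-ca-bundle
  namespace: ${NS}
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
data: {}
EOF
grep -q 'inject-cabundle' kserve-logger-ca-bundle.yaml && echo "manifest written"
```

Apply it with `oc apply -f kserve-logger-ca-bundle.yaml`. The inject-cabundle annotation causes the OpenShift service CA operator to inject the cluster CA bundle into the ConfigMap under the key service-ca.crt, which matches the caCertFile name used in the KServe logger configuration.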

Verification

When you send inferences to your KServe Raw model, TrustyAI acknowledges the data capture in the output logs.

Note

If you do not observe any data in the TrustyAI logs, complete these configuration steps and redeploy the pod.
