
Chapter 3. Setting up TrustyAI for your project


To set up model monitoring with TrustyAI for a data science project, a data scientist does the following tasks:

  • Authenticate the TrustyAI service
  • Send training data to TrustyAI for bias or data drift monitoring
  • Label your data fields (optional)

After setting up, a data scientist can create and view bias and data drift metrics for deployed models.

3.1. Authenticating the TrustyAI service

To access TrustyAI service external endpoints, you must provide OAuth proxy (oauth-proxy) authentication. You must obtain a user token, or a token from a service account with sufficient privileges, and then pass the token to the TrustyAI service when using curl commands.

Prerequisites

  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Enter the following command to set a user token variable on OpenShift:

    export TOKEN=$(oc whoami -t)

Verification

  • Enter the following command to check the user token variable:

    echo $TOKEN
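Beyond eyeballing the echoed value, a small reusable guard can catch an empty token before you start issuing authenticated requests. This is an illustrative addition, not part of the official procedure; the function name require_token is hypothetical:

```shell
# Optional guard (illustrative, not part of the official procedure):
# verify that TOKEN is non-empty before sending authenticated requests.
require_token() {
  if [ -z "${TOKEN:-}" ]; then
    echo "TOKEN is empty; run 'oc login' and export the token first."
    return 1
  fi
  echo "token is set"
}

TOKEN=example
require_token   # prints "token is set"
```

Calling require_token at the top of any script that talks to the TrustyAI service fails fast instead of producing confusing 401/403 errors later.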

Next step

When running curl commands, pass the token to the TrustyAI service using the Authorization header. For example:

curl -H "Authorization: Bearer $TOKEN" $TRUSTY_ROUTE

3.2. Uploading training data to TrustyAI

Upload training data to use with TrustyAI for bias monitoring or data drift detection.

Prerequisites

  • Your cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.
  • You have model training data to upload.
  • You authenticated the TrustyAI service as described in Authenticating the TrustyAI service.

Procedure

  1. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service in your project:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  2. Send the training data to the /data/upload endpoint:

    curl -sk $TRUSTY_ROUTE/data/upload \
      --header "Authorization: Bearer $TOKEN" \
      --header 'Content-Type: application/json' \
      -d @data/training_data.json

    The following message is displayed if the upload was successful: 1000 datapoints successfully added to gaussian-credit-model data.
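If you need to assemble the training data file yourself, the sketch below shows one plausible shape for the upload payload, modeled on the gaussian-credit-model example above. The tensor names (credit_inputs, predict), the data_tag value, and the sample numbers are illustrative assumptions; verify the exact schema and field names against the TrustyAI documentation for your version:

```shell
# Illustrative sketch of data/training_data.json for the /data/upload endpoint.
# Tensor names, data_tag, and values are hypothetical; match your model's schema.
mkdir -p data
cat > data/training_data.json <<'EOF'
{
  "model_name": "gaussian-credit-model",
  "data_tag": "TRAINING",
  "request": {
    "inputs": [
      {
        "name": "credit_inputs",
        "shape": [2, 4],
        "datatype": "FP64",
        "data": [[47.9, 24.2, 0.85, 0.36], [33.1, 11.0, 0.62, 0.40]]
      }
    ]
  },
  "response": {
    "model_name": "gaussian-credit-model",
    "outputs": [
      {
        "name": "predict",
        "shape": [2, 1],
        "datatype": "FP32",
        "data": [0.82, 0.11]
      }
    ]
  }
}
EOF
# Sanity-check that the file is well-formed JSON before uploading it.
python3 -m json.tool data/training_data.json > /dev/null && echo "valid JSON"
```

A quick local validation like this avoids a round trip to the service with a malformed payload.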

Verification

  • Verify that TrustyAI has received the data by querying the /info endpoint:

    curl -H "Authorization: Bearer $TOKEN" \
        $TRUSTY_ROUTE/info | jq ".[0].data"

    The output is a JSON object containing the following information for the model:

    • The names, data types, and positions of fields in the input and output.
    • The observed values that these fields take. This value is usually null because there are too many unique feature values to enumerate.
    • The total number of input-output pairs observed. It should be 1000.

3.3. Sending training data to TrustyAI

To use TrustyAI for bias monitoring or data drift detection, you must send training data for your model to TrustyAI.

Prerequisites

  • Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.
  • You authenticated the TrustyAI service as described in Authenticating the TrustyAI service.
  • You have uploaded model training data to TrustyAI.
  • Your deployed model is registered with TrustyAI.

    Verify that the TrustyAI service has registered your deployed model, as follows:

    1. In the OpenShift web console, navigate to Workloads → Pods.
    2. From the project list, select the project that contains your deployed model.
    3. Select the pod for your serving platform (for example, modelmesh-serving-ovms-1.x-xxxxx).
    4. On the Environment tab, verify that the MM_PAYLOAD_PROCESSORS environment variable is set.

Procedure

  1. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service in your project:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  2. Get the inference endpoints for the deployed model, as described in Accessing the inference endpoint for a deployed model.
  3. Send data to this endpoint. For more information, see the KServe v2 Inference Protocol documentation.
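As a concrete starting point, the sketch below builds a minimal request body in the KServe v2 Inference Protocol format. The tensor name credit_inputs, the shape, and the sample values are hypothetical; check your model's metadata endpoint for the real tensor name, shape, and datatype, and substitute your model's inference endpoint from step 2:

```shell
# Minimal KServe v2 inference request body (tensor name/shape/values are
# hypothetical placeholders; match them to your model's metadata).
cat > /tmp/infer_request.json <<'EOF'
{
  "inputs": [
    {
      "name": "credit_inputs",
      "shape": [1, 4],
      "datatype": "FP64",
      "data": [[47.9, 24.2, 0.85, 0.36]]
    }
  ]
}
EOF
# Then POST it to the inference endpoint obtained in step 2, for example:
#   curl -sk -H "Authorization: Bearer $TOKEN" \
#     -H "Content-Type: application/json" \
#     $INFER_ENDPOINT -d @/tmp/infer_request.json
python3 -m json.tool /tmp/infer_request.json > /dev/null && echo "valid JSON"
```

Each inference sent this way is intercepted by TrustyAI, which is what makes the metric in the verification step below start counting.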

Verification

Follow these steps to view cluster metrics and verify that TrustyAI is receiving data.

  1. Log in to the OpenShift web console.
  2. Switch to the Developer perspective.
  3. In the left menu, click Observe.
  4. On the Metrics page, click the Select query list and then select Custom query.
  5. In the Expression field, enter trustyai_model_observations_total and press Enter. Your model should be listed and reporting observed inferences.
  6. Optional: Select a time range from the list above the graph. For example, select 5m.

3.4. Labeling data fields

After you send model training data to TrustyAI, you might want to apply a set of name mappings to your inputs and outputs so that the field names are meaningful and easier to work with.

Prerequisites

  • Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.
  • You sent training data to TrustyAI as described in Sending training data to TrustyAI.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. In the OpenShift CLI (oc), get the route to the TrustyAI service:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  4. To examine TrustyAI’s model metadata, query the /info endpoint:

    curl -H "Authorization: Bearer $TOKEN" $TRUSTY_ROUTE/info | jq ".[0].data"

    The output is a JSON object containing the following information for each model:

    • The names, data types, and positions of input fields and output fields.
    • The observed field values.
    • The total number of input-output pairs observed.
  5. Use POST /info/names to apply name mappings to the fields, similar to the following example.

    Change the model-name, original-name, and Prediction values to those used in your model. Change the New name values to the labels that you want to use.

    curl -sk -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/info/names \
      -H "Content-Type: application/json"   \
      -d "{
        \"modelId\": \"model-name\",
        \"inputMapping\":
          {
            \"original-name-0\": \"New name 0\",
            \"original-name-1\": \"New name 1\",
            \"original-name-2\": \"New name 2\",
            \"original-name-3\": \"New name 3\"
          },
        \"outputMapping\": {
          \"predict-0\": \"Prediction 0\"
        }
      }"

    For another example, see https://github.com/trustyai-explainability/odh-trustyai-demos/blob/main/2-BiasMonitoring/kserve-demo/scripts/apply_name_mapping.sh.

Verification

A "Feature and output name mapping successfully applied" message is displayed.
