Chapter 4. Monitoring model bias


As a data scientist, you might want to monitor your machine learning models for bias. This means monitoring for algorithmic deficiencies that might skew the outcomes or decisions that the model produces. Importantly, this type of monitoring helps you to ensure that the model is not biased against particular protected groups or features.

Red Hat OpenShift AI provides a set of metrics that help you to monitor your models for bias. You can use the OpenShift AI interface to choose an available metric and then configure model-specific details such as a protected attribute, the privileged and unprivileged groups, the outcome you want to monitor, and a threshold for bias. You then see a chart of the calculated values for a specified number of model inferences.

For more information about the specific bias metrics, see Using bias metrics.

4.1. Creating a bias metric

To monitor a deployed model for bias, you must first create bias metrics. When you create a bias metric, you specify details relevant to your model such as a protected attribute, privileged and unprivileged groups, a model outcome and a value that you want to monitor, and the acceptable threshold for bias.
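For example, for the loan-default model that is used in the CLI examples later in this chapter, you might monitor the protected attribute Is Male-Identifying? with a privileged value of 1.0 and an unprivileged value of 0.0, the outcome Will Default? with a favorable value of 0, an acceptable bias threshold of 0.1, and a batch size of 5000.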

For information about the specific bias metrics, see Using bias metrics.

For the complete list of TrustyAI metrics, see TrustyAI service API.

You can create a bias metric for a model by using the OpenShift AI dashboard or by using the OpenShift command-line interface (CLI).

4.1.1. Creating a bias metric by using the dashboard

You can use the OpenShift AI dashboard to create a bias metric for a model.

Prerequisites

  • You are familiar with the bias metrics that you can use with OpenShift AI and how to interpret them.
  • You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs.
  • Your OpenShift cluster administrator has added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.
  • You set up TrustyAI for your data science project, as described in Setting up TrustyAI for your project.

Procedure

  1. Optional: To set the TRUSTY_ROUTE variable, follow these steps.

    1. In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed.

      oc login
    2. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service pod.

      TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  2. In the left menu of the OpenShift AI dashboard, click Models → Model deployments.
  3. On the Model deployments page, select your project from the drop-down list.
  4. Click the name of the model that you want to configure bias metrics for.
  5. On the metrics page for the model, click the Model bias tab.
  6. Click Configure.
  7. In the Configure bias metrics dialog, complete the following steps to configure bias metrics:

    1. In the Metric name field, type a unique name for your bias metric. Note that you cannot change the name of this metric later.
    2. From the Metric type list, select one of the metric types that are available in OpenShift AI.
    3. In the Protected attribute field, type the name of an attribute in your model that you want to monitor for bias.

      Tip

      You can use a curl command to query the metadata endpoint and view input attribute names and values. For example: curl -H "Authorization: Bearer $TOKEN" $TRUSTY_ROUTE/info | jq ".[0].data.inputSchema"

    4. In the Privileged value field, type the name of a privileged group for the protected attribute that you specified.
    5. In the Unprivileged value field, type the name of an unprivileged group for the protected attribute that you specified.
    6. In the Output field, type the name of the model outcome that you want to monitor for bias.

      Tip

      You can use a curl command to query the metadata endpoint and view output attribute names and values. For example: curl -H "Authorization: Bearer $TOKEN" $TRUSTY_ROUTE/info | jq ".[0].data.outputSchema"

    7. In the Output value field, type the value of the outcome that you want to monitor for bias.
    8. In the Violation threshold field, type the bias threshold for your selected metric type. This threshold defines how far the calculated metric value can deviate from the fairness value before the model is considered biased.
    9. In the Metric batch size field, type the number of model inferences that OpenShift AI includes each time it calculates the metric.
  8. Ensure that the values you entered are correct.

    Note

    You cannot edit a model bias metric configuration after you create it. Instead, you can duplicate a metric and then edit (configure) it; however, the history of the original metric is not applied to the copy.

  9. Click Configure.

Verification

  • The Bias metric configuration page shows the bias metrics that you configured for your model.

Next step

To view metrics, on the Bias metric configuration page, click View metrics in the upper-right corner.

4.1.2. Creating a bias metric by using the CLI

You can use the OpenShift command-line interface (CLI) to create a bias metric for a model.

Prerequisites

  • You are familiar with the bias metrics that you can use with OpenShift AI and how to interpret them.
  • You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs.
  • Your OpenShift cluster administrator has added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models.
  • You set up TrustyAI for your data science project, as described in Setting up TrustyAI for your project.

Procedure

  1. In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed.

    oc login
  2. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service pod.

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  3. Optional: Get the full list of TrustyAI service endpoints and payloads. The curl commands in this procedure use a TOKEN variable that contains your user token; for one way to set it, see the note after this procedure.

    curl -H "Authorization: Bearer $TOKEN" --location $TRUSTY_ROUTE/q/openapi
  4. Use POST /metrics/group/fairness/spd/request to schedule a recurring bias monitoring metric with the following syntax and payload structure:

    Syntax:

    curl -sk -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/metrics/group/fairness/spd/request  \
     --header 'Content-Type: application/json' \
     --data <payload>

    Payload structure:

    modelId
    The name of the model to query.
    protectedAttribute
    The name of the feature that distinguishes the groups that you are checking for fairness.
    privilegedAttribute
    The suspected favored (positively biased) class.
    unprivilegedAttribute
    The suspected unfavored (negatively biased) class.
    outcomeName
    The name of the model output that you are examining for fairness.
    favorableOutcome
    The value of the outcomeName output that describes the favorable or desired model prediction.
    batchSize
    The number of previous inferences to include in the calculation.

For example:

curl -sk -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/metrics/group/fairness/spd/request \
     --header 'Content-Type: application/json' \
     --data "{
                 \"modelId\": \"demo-loan-nn-onnx-alpha\",
                 \"protectedAttribute\": \"Is Male-Identifying?\",
                 \"privilegedAttribute\": 1.0,
                 \"unprivilegedAttribute\": 0.0,
                 \"outcomeName\": \"Will Default?\",
                 \"favorableOutcome\": 0,
                 \"batchSize\": 5000
             }"
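Note

The curl commands in this procedure assume that the TOKEN variable contains a user token that is authorized for the TrustyAI service, as described in Authenticating the TrustyAI service. As a minimal sketch, assuming that your current oc login session has the required permissions, you can set the variable from that session:

TOKEN=$(oc whoami -t)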

Verification

The bias metrics request should return output similar to the following:

{
   "timestamp":"2023-10-24T12:06:04.586+00:00",
   "type":"metric",
   "value":-0.0029676404469311524,
   "namedValues":null,
   "specificDefinition":"The SPD of -0.002968 indicates that the likelihood of Group:Is Male-Identifying?=1.0 receiving Outcome:Will Default?=0 was -0.296764 percentage points lower than that of Group:Is Male-Identifying?=0.0.",
   "name":"SPD",
   "id":"d2707d5b-cae9-41aa-bcd3-d950176cbbaf",
   "thresholds":{"lowerBound":-0.1,"upperBound":0.1,"outsideBounds":false}
}

The specificDefinition field helps you understand the real-world interpretation of these metric values. In this example, the model is reasonably fair with respect to the Is Male-Identifying? field: the rate of the favorable outcome differs by only about 0.3 percentage points, well within the configured thresholds of ±0.1 (outsideBounds is false).

4.1.3. Duplicating a bias metric

If you want to edit an existing metric, you can duplicate (copy) it in the OpenShift AI interface and then edit the values in the copy. However, note that the history of the original metric is not applied to the copy.

Prerequisites

  • You are familiar with the bias metrics that you can use with OpenShift AI and how to interpret them.
  • You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs.
  • There is an existing bias metric that you want to duplicate.

Procedure

  1. In the left menu of the OpenShift AI dashboard, click Models → Model deployments.
  2. On the Model deployments page, click the name of the model with the bias metric that you want to duplicate.
  3. On the metrics page for the model, click the Model bias tab.
  4. Click Configure.
  5. On the Bias metric configuration page, click the action menu (⋮) next to the metric that you want to copy and then click Duplicate.
  6. In the Configure bias metric dialog, follow these steps:

    1. In the Metric name field, type a unique name for your bias metric. Note that you cannot change the name of this metric later.
    2. Change the values of the fields as needed. For a description of these fields, see Creating a bias metric by using the dashboard.
  7. Ensure that the values you entered are correct, and then click Configure.

Verification

  • The Bias metric configuration page shows the bias metrics that you configured for your model.

Next step

To view metrics, on the Bias metric configuration page, click View metrics in the upper-right corner.

4.2. Deleting a bias metric

You can delete a bias metric for a model by using the OpenShift AI dashboard or by using the OpenShift command-line interface (CLI).

4.2.1. Deleting a bias metric by using the dashboard

You can use the OpenShift AI dashboard to delete a bias metric for a model.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • There is an existing bias metric that you want to delete.

Procedure

  1. In the left menu of the OpenShift AI dashboard, click Models → Model deployments.
  2. On the Model deployments page, click the name of the model with the bias metric that you want to delete.
  3. On the metrics page for the model, click the Model bias tab.
  4. Click Configure.
  5. Click the action menu (⋮) next to the metric that you want to delete and then click Delete.
  6. In the Delete bias metric dialog, type the metric name to confirm the deletion.

    Note

    You cannot undo deleting a bias metric.

  7. Click Delete bias metric.

Verification

  • The Bias metric configuration page does not show the bias metric that you deleted.

4.2.2. Deleting a bias metric by using the CLI

You can use the OpenShift command-line interface (CLI) to delete a bias metric for a model.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have a user token for authentication as described in Authenticating the TrustyAI service.
  • There is an existing bias metric that you want to delete.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. In the OpenShift CLI, get the route to the TrustyAI service:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  4. Optional: To list all currently active requests for a metric, use GET /metrics/{{metric}}/requests. For example, to list all currently scheduled SPD metrics, type:

    curl -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/spd/requests"

    Alternatively, to list all currently scheduled metric requests, use GET /metrics/all/requests.

    curl -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/all/requests"
  5. To delete a metric, send an HTTP DELETE request to the /metrics/$METRIC/request endpoint to stop the periodic calculation, including the id of the periodic task that you want to cancel in the payload. For example:

    curl -H "Authorization: Bearer $TOKEN" -X DELETE --location "$TRUSTY_ROUTE/metrics/spd/request" \
        -H "Content-Type: application/json" \
        -d "{
              \"requestId\": \"3281c891-e2a5-4eb3-b05d-7f3831acbb56\"
            }"

Verification

Use GET /metrics/{{metric}}/requests to list all currently active requests for the metric and verify the metric that you deleted is not shown. For example:

curl -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/spd/requests"

4.3. Viewing bias metrics for a model

After you create bias monitoring metrics, you can use the OpenShift AI dashboard to view and update the metrics that you configured.

Prerequisites

  • You created bias metrics for your model, as described in Creating a bias metric.

Procedure

  1. In the OpenShift AI dashboard, click Models → Model deployments.
  2. On the Model deployments page, click the name of a model that you want to view bias metrics for.
  3. On the metrics page for the model, click the Model bias tab.
  4. To update the metrics shown on the page, follow these steps:

    1. In the Metrics to display section, use the Select a metric list to select a metric to show on the page.

      Note

      Each time you select a metric to show on the page, an additional Select a metric list appears. This enables you to show multiple metrics on the page.

    2. From the Time range list in the upper-right corner, select a value.
    3. From the Refresh interval list in the upper-right corner, select a value.

      The metrics page shows the metrics that you selected.

  5. Optional: To remove one or more metrics from the page, in the Metrics to display section, perform one of the following actions:

    • To remove an individual metric, click the cancel icon (✖) next to the metric name.
    • To remove all metrics, click the cancel icon (✖) in the Select a metric list.
  6. Optional: To return to configuring bias metrics for the model, on the metrics page, click Configure in the upper-right corner.

Verification

  • The metrics page shows the metrics selections that you made.

4.4. Using bias metrics

You can use the following bias metrics in Red Hat OpenShift AI:

Statistical Parity Difference

Statistical Parity Difference (SPD) is the difference in the probability of a favorable outcome prediction between unprivileged and privileged groups. The formal definition of SPD is the following:

SPD = P(ŷ = 1 | Dᵤ) − P(ŷ = 1 | Dₚ)

where:

  • ŷ = 1 is the favorable outcome.
  • Dᵤ and Dₚ are the unprivileged and privileged group data.

You can interpret SPD values as follows:

  • A value of 0 means that the model is behaving fairly for a selected attribute (for example, race, gender).
  • A value in the range -0.1 to 0.1 means that the model is reasonably fair for the selected attribute; any difference in probability can be attributed to other factors, such as the sample size.
  • A value outside the range -0.1 to 0.1 indicates that the model is unfair for a selected attribute.
  • A negative value indicates that the model has bias against the unprivileged group.
  • A positive value indicates that the model has bias against the privileged group.
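For example, using illustrative figures, if 40% of the unprivileged group and 45% of the privileged group receive the favorable outcome prediction, then SPD = 0.40 − 0.45 = −0.05. This value falls within the -0.1 to 0.1 range, so the model would be considered reasonably fair for that attribute.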
Disparate Impact Ratio

Disparate Impact Ratio (DIR) is the ratio of the probability of a favorable outcome prediction for unprivileged groups to that of privileged groups. The formal definition of DIR is the following:

DIR = P(ŷ = 1 | Dᵤ) / P(ŷ = 1 | Dₚ)

where:

  • ŷ = 1 is the favorable outcome.
  • Dᵤ and Dₚ are the unprivileged and privileged group data.

The threshold to identify bias depends on your own criteria and specific use case.

For example, if your threshold for identifying bias is represented by a DIR value below 0.8 or above 1.2, you can interpret the DIR values as follows:

  • A value of 1 means that the model is fair for a selected attribute.
  • A value between 0.8 and 1.2 means that the model is reasonably fair for a selected attribute.
  • A value below 0.8 or above 1.2 indicates bias.
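For example, using the same illustrative figures as in the SPD example, if 40% of the unprivileged group and 45% of the privileged group receive the favorable outcome prediction, then DIR = 0.40 / 0.45 ≈ 0.89, which falls between 0.8 and 1.2 and would be considered reasonably fair under this threshold.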