Chapter 5. Monitoring data drift


Data drift refers to a change in the distribution of incoming data so that it differs significantly from the distribution of the data on which the model was originally trained. Because machine learning models rely heavily on the patterns in their training data, this distributional shift can make model performance unreliable.

Detecting data drift helps ensure that your models continue to perform as expected and that they remain accurate and reliable. TrustyAI measures the statistical alignment between a model’s training data and its incoming inference data by using specialized metrics.

Metrics for drift detection include:

  • MeanShift
  • FourierMMD
  • KSTest (Kolmogorov-Smirnov)
  • ApproxKSTest

5.1. Creating a drift metric

To monitor a deployed model for data drift, you must first create drift metrics.

For information about the specific data drift metrics, see Using drift metrics.

For the complete list of TrustyAI metrics, see TrustyAI service API.

5.1.1. Creating a drift metric by using the CLI

You can use the OpenShift CLI (oc) to create a data drift metric for a model.

Prerequisites

  • You are familiar with the specific data set schema and understand the relevant inputs and outputs.
  • Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the project that contains the deployed models.
  • You set up TrustyAI for your project, as described in Setting up TrustyAI for your project.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service pod.

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
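    The requests in the following steps pass a bearer token in the Authorization header. If you have not already exported one, you can reuse the token from your current CLI session (a minimal sketch; oc whoami -t prints the token that the oc client is currently using):

    export TOKEN=$(oc whoami -t)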
  4. Optionally, get the full list of TrustyAI service endpoints and payloads.

    curl -H "Authorization: Bearer $TOKEN" --location $TRUSTY_ROUTE/q/openapi
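    To skim only the available endpoint paths, you can request the document as JSON and pipe it through jq (a sketch, assuming jq is installed on your workstation and that your TrustyAI version serves JSON through the format query parameter):

    curl -sk -H "Authorization: Bearer $TOKEN" --location "$TRUSTY_ROUTE/q/openapi?format=json" | jq '.paths | keys'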
  5. Use POST /metrics/drift/meanshift/request to schedule a recurring drift monitoring metric with the following syntax and payload structure:

    Syntax:

    curl -k -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/metrics/drift/meanshift/request \
     --header 'Content-Type: application/json' \
     --data <payload>

    Payload structure:

    modelId
    The name of the model to monitor.
    referenceTag
    The data to use as the reference distribution.

For example:

curl -k -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/metrics/drift/meanshift/request \
     --header 'Content-Type: application/json' \
     --data "{
                 \"modelId\": \"gaussian-credit-model\",
                 \"referenceTag\": \"TRAINING\"
             }"
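To confirm that the metric request was scheduled, you can list the currently active MeanShift requests; the request that you created appears in the response (field names in the response can vary by TrustyAI version):

curl -k -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/drift/meanshift/requests"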

5.2. Deleting a drift metric by using the CLI

You can use the OpenShift CLI (oc) to delete a drift metric for a model.

Prerequisites

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster:

    1. In the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. In the OpenShift CLI (oc), get the route to the TrustyAI service:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  4. Optional: To list all currently active requests for a metric, use GET /metrics/{{metric}}/requests. For example, to list all currently scheduled MeanShift metrics, type:

    curl -k -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/drift/meanshift/requests"

    Alternatively, to list all currently scheduled metric requests, use GET /metrics/all/requests.

    curl -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/all/requests"
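    If you do not already know the id of the request that you want to delete, you can extract it from the listing with jq. This is a sketch only; it assumes that the response wraps the scheduled requests in a requests array whose entries contain an id field, so adjust the filter to match the response returned by your cluster:

    id=$(curl -sk -H "Authorization: Bearer $TOKEN" "$TRUSTY_ROUTE/metrics/drift/meanshift/requests" | jq -r '.requests[0].id')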
  5. To delete a metric, send an HTTP DELETE request to the /metrics/$METRIC/request endpoint to stop the periodic calculation, including the id of the periodic task that you want to cancel in the payload. For example:

    curl -k -H "Authorization: Bearer $TOKEN" -X DELETE --location "$TRUSTY_ROUTE/metrics/drift/meanshift/request" \
        -H "Content-Type: application/json" \
        -d "{
              \"requestId\": \"$id\"
            }"

Verification

Use GET /metrics/{{metric}}/requests to list all currently active requests for the metric and verify the metric that you deleted is not shown. For example:

curl -H "Authorization: Bearer $TOKEN" -X GET --location "$TRUSTY_ROUTE/metrics/drift/meanshift/requests"

5.3. Viewing drift metrics for a model

After you create data drift monitoring metrics, use the OpenShift web console to view and update the drift metrics that you configured.

Prerequisites

Procedure

  1. Log in to the OpenShift web console.
  2. Switch to the Developer perspective.
  3. In the left menu, click Observe.
  4. As described in Monitoring your project metrics, use the web console to run queries for trustyai_* metrics.
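    For example, to chart the MeanShift values used later in this chapter, enter the following expression in the query field (the available labels vary by TrustyAI version, so start with the bare metric name and refine the query from the labels that are returned):

    trustyai_meanshift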

5.4. Using drift metrics

You can use the following data drift metrics in Red Hat OpenShift AI:

MeanShift

The MeanShift metric calculates the per-column probability that the data values in a test data set are from the same distribution as those in a training data set (assuming that the values are normally distributed). This metric measures the difference in the means of specific features between the two datasets.

MeanShift is useful for identifying straightforward changes in data distributions, such as when the entire distribution has shifted to the left or right along the feature axis.

This metric returns the probability that the distribution seen in the "real world" data is derived from the same distribution as the reference data. The closer the value is to 0, the more likely it is that significant drift has occurred.

FourierMMD

The FourierMMD metric provides the probability that the data values in a test data set have drifted from the training data set distribution, assuming that the computed Maximum Mean Discrepancy (MMD) values are normally distributed. This metric compares the empirical distributions of the data sets by using an MMD measure in the Fourier domain.

FourierMMD is useful for detecting subtle shifts in data distributions that might be overlooked by simpler statistical measures.

This metric returns the probability that the distribution seen in the "real world" data has drifted from the reference data. The closer the value is to 1, the more likely it is that significant drift has occurred.

KSTest

The KSTest metric calculates two Kolmogorov-Smirnov tests for each column to determine whether the data sets are derived from the same distributions. This metric measures the maximum distance between the empirical cumulative distribution functions (CDFs) of the data sets, without assuming any specific underlying distribution.

KSTest is useful for detecting changes in distribution shape, location, and scale.

This metric returns the probability that the distribution seen in the "real world" data is derived from the same distribution as the reference data. The closer the value is to 0, the more likely it is that significant drift has occurred.

ApproxKSTest

The ApproxKSTest metric performs an approximate Kolmogorov-Smirnov test, ensuring that the maximum error is 6*epsilon compared to an exact KSTest.

ApproxKSTest is useful for detecting changes in distributions for large data sets where performing an exact KSTest might be computationally expensive.

This metric returns the probability that the distribution seen in the "real world" data is derived from the same distribution as the reference data. The closer the value is to 0, the more likely it is that significant drift has occurred.
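Each of these drift metrics is scheduled through the same request pattern that is shown for MeanShift in Creating a drift metric by using the CLI, with the metric name changed in the endpoint path. The following sketch assumes that the ApproxKSTest endpoint is /metrics/drift/approxkstest/request; confirm the exact paths for your version in the TrustyAI service API or the /q/openapi listing:

curl -k -H "Authorization: Bearer $TOKEN" -X POST --location $TRUSTY_ROUTE/metrics/drift/approxkstest/request \
     --header 'Content-Type: application/json' \
     --data "{
                 \"modelId\": \"gaussian-credit-model\",
                 \"referenceTag\": \"TRAINING\"
             }"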

5.5. Identifying data drift in an example scenario

This example scenario deploys an XGBoost model into your cluster and reviews its output by using a drift metric.

The XGBoost model was created for the purpose of this demonstration and predicts credit card approval based on the following features: age, credit score, years of education, and years in employment.

After the model is deployed and the data that you upload is formatted, use the MeanShift metric to monitor for data drift. This metric is useful for ensuring that a model remains accurate and reliable in a production environment.

MeanShift compares a numeric test data set against a numeric training data set. It produces a p-value that measures the probability that the test data originated from the same numeric distribution as the training data. A p-value less than 0.05 indicates statistically significant drift between the two data sets. A p-value equal to or greater than 0.05 indicates no statistically significant evidence of drift.

Note

MeanShift performs best when each feature in the data is normally distributed. If your data has a different or unknown distribution, choose a different metric.

Prerequisites

Procedure

  1. Obtain a bearer token to authenticate your external endpoints by running the following command:

    $ oc apply -f resources/service_account.yaml
    export TOKEN=$(oc create token user-one)
  2. In your model namespace, deploy the storage container, serving runtime, and the credit model:

    $ oc project model-namespace || true
    $ oc apply -f resources/model_storage_container.yaml
    $ oc apply -f resources/odh-mlserver-1.x.yaml
    $ oc apply -f resources/model_gaussian_credit.yaml
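    Before you continue, you can optionally wait until the model reports a Ready condition (a sketch; the timeout value is arbitrary and the condition name assumes a standard KServe InferenceService):

    $ oc wait --for=condition=Ready inferenceservice/gaussian-credit-model --timeout=300s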
  3. Set the route for your data upload:

    TRUSTY_ROUTE=https://$(oc get route/trustyai-service --template={{.spec.host}})
  4. Download the training data payload (file size 472 KB):

    wget https://raw.githubusercontent.com/trustyai-explainability/odh-trustyai-demos/72f748da9410f92a60bea73ce5e3f47c10ad1cea/3-DataDrift/kserve-demo/data/training_data.json -O training_data.json
  5. Label your model training data. The data upload payload has four main fields. The model_name and data_tag fields must be set because they are referenced directly later in the scenario, including in the Metrics dashboard. In addition to these required fields, the payload also includes the request and response fields that carry the training data itself. The four fields are described in the following list, which is followed by an abbreviated sketch of the payload:

    1. model_name: The name of the model that correlates to this data. The name should match that of the model provided in the model YAML, which is gaussian-credit-model.
    2. data_tag: A string tag to reference this particular set of data. Use the string "TRAINING".
    3. request: This is a KServe inference request, as if you were sending this data directly to the model server’s /infer endpoint.
    4. response: The KServe inference response that is returned from sending the above request to the model.
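    The following abbreviated sketch illustrates the shape of the upload payload. The tensor names, shapes, and datatypes shown here are illustrative placeholders, and the data arrays are truncated; the downloaded training_data.json file contains the full request and response bodies in KServe inference format:

    {
      "model_name": "gaussian-credit-model",
      "data_tag": "TRAINING",
      "request": {
        "inputs": [
          { "name": "credit_inputs", "shape": [1000, 4], "datatype": "FP64", "data": [ ... ] }
        ]
      },
      "response": {
        "model_name": "gaussian-credit-model",
        "outputs": [
          { "name": "predict", "shape": [1000, 1], "datatype": "FP64", "data": [ ... ] }
        ]
      }
    }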
  6. Upload the model training data to the TrustyAI endpoint:

    curl -sk -H "Authorization: Bearer ${TOKEN}" $TRUSTY_ROUTE/data/upload  \
     --header 'Content-Type: application/json' \
     -d @training_data.json

    The following message appears confirming the data upload: 1000 datapoints successfully added to gaussian-credit-model data.

  7. Label your model’s input and output fields, which appear under generic tensor names in your KServe payloads, with the actual column names of your data. Send a JSON payload containing a simple set of original-name : new-name pairs that assign new, meaningful names to the input and output features of your model. If the request is successful, the message "Feature and output name mapping successfully applied" appears:

    curl -sk -H "Authorization: Bearer ${TOKEN}" -X POST --location $TRUSTY_ROUTE/info/names \
      -H "Content-Type: application/json"   \
      -d "{
        \"modelId\": \"gaussian-credit-model\",
        \"inputMapping\":
          {
            \"credit_inputs-0\": \"Age\",
            \"credit_inputs-1\": \"Credit Score\",
            \"credit_inputs-2\": \"Years of Education\",
            \"credit_inputs-3\": \"Years of Employment\"
          },
        \"outputMapping\": {
          \"predict-0\": \"Acceptance Probability\"
        }
      }"
    Tip

    Define name mappings in TrustyAI to assign memorable names to model inputs and outputs. You can then use these names in subsequent requests to the TrustyAI service.

  8. Verify that TrustyAI has received the data by querying the /info endpoint:

    curl -H "Authorization: Bearer ${TOKEN}" $TRUSTY_ROUTE/info | jq '.["gaussian-credit-model"].data.inputSchema'
  9. The following JSON output confirms that TrustyAI has successfully received the data:

    {
      "items": {
        "Years of Education": {
          "type": "DOUBLE",
          "name": "credit_inputs-2",
          "columnIndex": 2
        },
        "Years of Employment": {
          "type": "DOUBLE",
          "name": "credit_inputs-3",
          "columnIndex": 3
        },
        "Age": {
          "type": "DOUBLE",
          "name": "credit_inputs-0",
          "columnIndex": 0
        },
        "Credit Score": {
          "type": "DOUBLE",
          "name": "credit_inputs-1",
          "columnIndex": 1
        }
      },
      "nameMapping": {
        "credit_inputs-0": "Age",
        "credit_inputs-1": "Credit Score",
        "credit_inputs-2": "Years of Education",
        "credit_inputs-3": "Years of Employment"
      }
    }
  10. Create a recurring drift monitoring metric by using /metrics/drift/meanshift/request. This metric measures the drift of all recorded inference data against the reference distribution. The body of the payload requires a modelId that sets which model to monitor and a referenceTag that determines which data to use as the reference distribution. The values of these fields must match the model_name and data_tag values in your data upload payload:

    curl -k -H "Authorization: Bearer ${TOKEN}" -X POST --location $TRUSTY_ROUTE/metrics/drift/meanshift/request -H "Content-Type: application/json" \
      -d "{
            \"modelId\": \"gaussian-credit-model\",
            \"referenceTag\": \"TRAINING\"
          }"
  11. Check the metrics in the Observe → Metrics section of the OpenShift console:

    1. Set the time window to 5 minutes and the refresh interval to 15 seconds.
    2. In the Expression field, enter trustyai_meanshift.

      Note

      It might take a few seconds before the cluster monitoring stack picks up the new metric. If you are already in the Observe → Metrics section of the OpenShift console, you might need to refresh the page before the new metric appears.

  12. In the metric chart, observe that a value is emitted for each of the four input features and the single output, five measurements in total. All metric values should equal 1 (no drift), because only the training data is present, and a data set cannot drift from itself.
  13. Collect some simulated real-world inferences to observe the drift monitoring. To do this, send small batches of data to the model, mimicking a real-world deployment:

    1. Get the route to the model:

      MODEL=gaussian-credit-model
      BASE_ROUTE=$(oc get inferenceservice gaussian-credit-model -o jsonpath='{.status.url}')
      MODEL_ROUTE="${BASE_ROUTE}/v2/models/${MODEL}/infer"
    2. Download the data batch and send data payloads to your model:

      DATA_PATH=sample_trustyai_model_data
      mkdir $DATA_PATH
      for batch in {0..595..5}; do
          wget https://raw.githubusercontent.com/trustyai-explainability/odh-trustyai-demos/main/3-DataDrift/kserve-demo/data/data_batches/$batch.json -O $DATA_PATH/$batch.json
          curl -sk "${MODEL_ROUTE}" \
              -H "Authorization: Bearer ${TOKEN}" \
              -H "Content-Type: application/json" \
              -d @$DATA_PATH/$batch.json
          sleep 1
      done
  14. Observe the updated drift metrics in the Observe → Metrics section of the OpenShift console. The MeanShift metric values for the various features change:

    1. The values for Credit Score, Age, and Acceptance Probability have all dropped to 0, indicating there is a statistically very high likelihood that the values of these fields in the inference data come from a different distribution than that of the training data.
    2. The Years of Employment and Years of Education scores have dropped to 0.34 and 0.82 respectively, indicating that there is a little drift, but not enough to be particularly concerning.