Chapter 1. Overview of model monitoring
To ensure that machine-learning models are transparent, fair, and reliable, data scientists can use TrustyAI in OpenShift AI to monitor their data science models.
Data scientists can monitor their models for the following metrics:
- Bias
- Check data and model predictions for unfair patterns or biases to ensure that the model's decisions are fair.
- Data drift
- Detect changes in input data distributions over time by comparing the latest real-world data to the original training data. This comparison identifies shifts or deviations that could degrade model performance, helping to ensure that the model remains accurate and reliable.
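
To make the data drift idea concrete, the following is a minimal, self-contained sketch (not the TrustyAI API) of one common drift check: the two-sample Kolmogorov-Smirnov statistic, which measures the largest gap between the empirical distributions of a feature in the training data and in recent real-world data. The sample values are hypothetical.

```python
def ks_statistic(train, live):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples (0 = identical
    distributions, values near 1 = strong drift)."""
    points = sorted(set(train) | set(live))

    def ecdf(sample, x):
        # Fraction of sample values less than or equal to x
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(train, x) - ecdf(live, x)) for x in points)

# Hypothetical feature values: original training data vs. shifted live data
training = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6]
live     = [0.6, 0.7, 0.7, 0.8, 0.9, 1.0, 1.1]

print(f"KS statistic: {ks_statistic(training, live):.2f}")  # prints "KS statistic: 0.86"
```

In practice, a drift monitor would compute a statistic like this on a schedule and raise an alert when it exceeds a chosen threshold, prompting investigation or retraining.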