Chapter 1. Overview of model monitoring
To ensure that machine learning models are transparent, fair, and reliable, data scientists can use TrustyAI in OpenShift AI to monitor and assess their data science models.
Data scientists can monitor their data science and machine learning models in OpenShift AI for the following metrics:
- Bias: Check for unfair patterns or biases in data and model predictions to ensure your model’s decisions are unbiased.
- Data drift: Detect changes in input data distributions over time by comparing the latest real-world data to the original training data. This comparison identifies shifts or deviations that could impact model performance, helping to ensure that the model remains accurate and reliable.
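The two checks above can be sketched in miniature. TrustyAI exposes these as service metrics; the functions below are a hypothetical pure-Python illustration of two commonly used metrics, not the TrustyAI API: a statistical parity difference for bias, and a two-sample Kolmogorov-Smirnov statistic for drift. The data and threshold are invented for the example.

```python
import bisect

def statistical_parity_difference(unprivileged, privileged):
    """Bias check: difference in favorable-outcome rates between two groups.
    Values near 0 suggest parity between the groups."""
    rate = lambda outcomes: sum(outcomes) / len(outcomes)
    return rate(unprivileged) - rate(privileged)

def ks_statistic(training, live):
    """Drift check: maximum distance between the empirical CDFs of the
    training data and the latest live data (two-sample KS statistic)."""
    a, b = sorted(training), sorted(live)
    cdf = lambda s, x: bisect.bisect_right(s, x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(set(a) | set(b)))

# Bias: 1 = favorable outcome, 0 = unfavorable; -0.25 favors the privileged group.
print(statistical_parity_difference([1, 0, 0, 1], [1, 1, 1, 0]))  # -0.25

# Drift: the live feature values have shifted well above the training range.
training = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5]
live = [0.8, 0.9, 0.95, 1.0, 1.1, 1.2]
print(ks_statistic(training, live))  # 1.0, a strong drift signal
```

In practice a drift alert fires when the statistic exceeds a configured threshold; identical distributions score 0, fully disjoint ones score 1.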
Data scientists can assess their data science and machine learning models in OpenShift AI using the following services:
- LLM evaluation: Monitor your Large Language Models (LLMs) against a range of metrics to ensure the accuracy and quality of their output.
- Guardrails: Safeguard the text generation inputs and outputs of Large Language Models (LLMs). The Guardrails Orchestrator manages the network requests between the user, the generative model, and the various detector services. The Guardrails detectors identify and flag content that violates predefined rules, such as the presence of sensitive data, harmful language, or prompt injection attacks; they can also perform standalone detections.
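The Guardrails flow described above can be sketched as follows. This is a hypothetical illustration of the pattern, not the Guardrails Orchestrator API: each registered detector screens the text, and the orchestrator-style wrapper checks the prompt before generation and the output after it. The detector names, regexes, and `guarded_generate` helper are all invented for the example.

```python
import re

# Hypothetical detectors: each flags one class of rule violation.
DETECTORS = {
    "email_pii": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect(text):
    """Standalone detection: return the names of all detectors that flag the text."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(text)]

def guarded_generate(prompt, generate):
    """Orchestrator-style flow: screen the input, call the model, screen the output."""
    if hits := detect(prompt):
        return f"input blocked: {hits}"
    output = generate(prompt)
    if hits := detect(output):
        return f"output blocked: {hits}"
    return output

print(detect("Contact me at alice@example.com"))  # ['email_pii']
print(guarded_generate("My SSN is 123-45-6789", lambda p: "ok"))  # input blocked
print(guarded_generate("Hello", lambda p: "all good"))  # all good
```

Real detector services run as separate network endpoints behind the orchestrator; the in-process dictionary here only illustrates the fan-out-and-flag pattern.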