Chapter 5. Evaluate LLMs with EvalHub
Use EvalHub to evaluate your large language models against standardized benchmarks, track results with MLflow, and manage evaluation workflows across multiple tenants.
5.1. Understanding EvalHub
EvalHub is an evaluation orchestration service for large language models (LLMs) on Red Hat OpenShift AI. It provides a versioned REST API for submitting evaluation jobs, managing benchmark providers, and tracking results through MLflow experiment tracking. Each evaluation runs as an isolated Job, enabling parallel execution and horizontal scalability across namespaces and tenants.
EvalHub consists of three components:
- EvalHub Server — A REST API service that handles evaluation workflows, job orchestration, and provider management, with PostgreSQL storage.
- EvalHub SDK and CLI — A Python client library and command-line tool for submitting evaluations and building framework adapters. The CLI provides the evalhub command for interacting with EvalHub from the terminal.
- Providers — Evaluation framework adapters packaged as container images. Each provider translates EvalHub job requests into evaluation framework-specific commands and reports results back to the server.
5.1.1. Core concepts
The following concepts are central to EvalHub.
- Providers — A provider represents an evaluation framework, such as lm_evaluation_harness, garak, guidellm, or lighteval. Each provider includes a set of benchmarks. EvalHub includes built-in providers that are read-only.
- Benchmarks — A benchmark is a specific evaluation task within a provider. For example, the lm_evaluation_harness provider includes benchmarks such as mmlu, hellaswag, arc_challenge, and gsm8k. Each benchmark has a category such as math, reasoning, safety, or code, along with metrics and optional pass criteria.
- Collections — A collection groups benchmarks from one or more providers into a reusable evaluation suite. For example, a safety-and-fairness-v1 collection might combine safety benchmarks from lm_evaluation_harness with vulnerability scans from garak.
- Pass criteria and thresholds — Pass criteria define the minimum score that a benchmark or job must achieve to pass. Thresholds can be set at three levels, from most to least specific:
  - Benchmark level — You set a benchmark-level threshold per benchmark in a job submission or collection definition. This overrides all other thresholds.
  - Collection level — A collection-level threshold applies to all benchmarks in the collection that do not have their own threshold.
  - Provider level — A provider-level threshold is the default threshold defined in the provider’s benchmark configuration.

  Each benchmark declares a primary score metric, such as acc_norm or toxicity_score, and optionally a lower_is_better flag. When lower_is_better is false (the default), the benchmark passes if the score is greater than or equal to the threshold. When lower_is_better is true, it passes if the score is less than or equal to the threshold.

  Each benchmark in a collection or job can be assigned a weight that controls its relative importance in the overall score. At the job level, EvalHub computes a weighted average of all benchmark primary scores and compares it against the job-level threshold to determine an overall pass or fail result, as illustrated in the sketch after this list.
- Evaluation jobs — An evaluation job represents a single evaluation run against a model. A job references either a list of benchmarks or a collection, a model endpoint, and optional MLflow experiment configuration. Jobs progress through the states pending, running, completed, failed, cancelled, or partially_failed.
- Adapters — An adapter wraps an evaluation framework, such as lm_evaluation_harness, and implements the FrameworkAdapter interface so that EvalHub can orchestrate the evaluation. Adapters are packaged as Red Hat Universal Base Image 9 (UBI9) container images.
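The following minimal Python sketch illustrates how a benchmark’s pass criteria are applied to its primary score. The function is illustrative only and is not part of the EvalHub SDK:

def benchmark_passes(score: float, threshold: float, lower_is_better: bool = False) -> bool:
    """Apply a benchmark's pass criteria to its primary score."""
    if lower_is_better:
        # Metrics such as toxicity_score pass at or below the threshold.
        return score <= threshold
    # Metrics such as acc_norm pass at or above the threshold (the default).
    return score >= threshold

print(benchmark_passes(0.68, 0.60))                        # True: 0.68 >= 0.60
print(benchmark_passes(0.30, 0.20, lower_is_better=True))  # False: 0.30 > 0.20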
5.2. EvalHub architecture overview
When you submit an evaluation job, EvalHub follows this workflow:
- The client submits a job through the REST API or SDK.
- The server validates the request, resolves benchmarks, and persists the job with a status of pending.
- The runtime creates a Kubernetes Job for each benchmark. Each Job pod contains two containers:
- The adapter container runs the evaluation framework. Adapters are provider-specific container images that implement a standard interface, translating the job specification into the evaluation framework-specific invocations and returning structured results.
- The sidecar proxy container authenticates to the EvalHub server using a ServiceAccount token and forwards status events and results from the adapter. The sidecar also proxies authenticated requests to MLflow and OCI registries when configured. This design keeps credentials out of the adapter container, which can run custom user-provided code.
- The adapter runs the evaluation and reports status events back to EvalHub through the sidecar.
- The server aggregates and stores the results. If MLflow integration is enabled, the server also logs the results to MLflow.
5.3. Deploy EvalHub with the TrustyAI Operator
Deploy EvalHub through the TrustyAI Operator as part of OpenShift AI.
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
- You have installed the OpenShift CLI (oc) version 4.12 or later.
- You have the TrustyAI component in your OpenShift AI DataScienceCluster set to Managed.
- You have configured KServe to use RawDeployment mode.
Procedure
Create a Secret containing the PostgreSQL connection string. The Secret must contain a db-url key with a valid PostgreSQL connection URI:

apiVersion: v1
kind: Secret
metadata:
  name: evalhub-db-credentials
type: Opaque
stringData:
  db-url: "postgres://evalhub:changeme@postgresql.evalhub.svc.cluster.local:5432/evalhub"

Note: Replace the hostname, credentials including the changeme placeholder, and database name to match your PostgreSQL deployment.

$ oc apply -f evalhub-db-credentials.yaml -n <namespace>

Create an EvalHub custom resource to deploy the service:
Example evalhub_cr.yaml

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: EvalHub
metadata:
  name: evalhub
spec:
  replicas: 1
  database:
    type: postgresql
    secret: evalhub-db-credentials
  providers:
    - lm_evaluation_harness
    - garak
    - guidellm
  collections:
    - safety-and-fairness-v1
  env:
    - name: MLFLOW_TRACKING_URI
      value: "http://mlflow.mlflow.svc.cluster.local:5000"

Table 5.1. EvalHub custom resource parameters

| Parameter | Description |
|---|---|
| replicas | The number of EvalHub pods to create. |
| database.type | Storage backend. Set to postgresql for PostgreSQL. |
| database.secret | Name of a Secret containing the PostgreSQL connection string. |
| providers | List of evaluation provider configurations to load at startup. |
| collections | List of benchmark collections to load at startup. |
| otel | Optional: OpenTelemetry exporter configuration for traces and metrics. |
| env | Environment variables to set in the EvalHub deployment containers. |
Apply the custom resource to the cluster:
$ oc apply -f evalhub_cr.yaml -n <namespace>

Note: Use a dedicated namespace for EvalHub rather than redhat-ods-applications. The redhat-ods-applications namespace has NetworkPolicy resources that restrict cross-namespace traffic, which requires additional labeling on tenant namespaces. For more information, see Section 5.23, “Set up a tenant namespace”.
The TrustyAI Operator automatically reconciles the EvalHub custom resource in your namespace.
Verification
Confirm that the EvalHub pod is running:
$ oc get pods -l app=eval-hub -n <namespace>

Example output

NAME                       READY   STATUS    RESTARTS   AGE
evalhub-7b9f4c6d88-x2k4p   1/1     Running   0          2m

Query the health endpoint:
$ export EVALHUB_URL=https://$(oc get routes evalhub -o jsonpath='{.spec.host}' -n <namespace>)
$ curl $EVALHUB_URL/api/v1/health | jq .

Example response

{
  "status": "healthy",
  "timestamp": "2026-04-13T10:00:00Z",
  "version": "0.3.0",
  "uptime": 3600000000000,
  "active_evaluations": 0
}

Install the EvalHub Python SDK to interact with the server. To install the SDK client library, run the following command:

$ pip install "eval-hub-sdk[client]"

To also include the CLI, run the following command:

$ pip install "eval-hub-sdk[cli]"
5.4. EvalHub multi-tenancy
EvalHub is a multi-tenant service. All API requests, except requests to /api/v1/health, must include the X-Tenant header, which identifies the target namespace. Resources such as jobs, providers, and collections are scoped to the tenant specified in this header. For information about setting up tenant namespaces and granting access, see Section 5.22, “EvalHub multi-tenancy and RBAC”.
When using curl, include the -H "X-Tenant: <namespace>" header in each request.
When using the Python SDK, set the tenant at client initialization:
from evalhub import SyncEvalHubClient
client = SyncEvalHubClient(
base_url="https://evalhub.example.com",
tenant="my-namespace"
)
When using the CLI, configure the tenant in your connection profile. The CLI stores connection settings in named profiles at ~/.config/evalhub/config.yaml. Settings are persistent across commands. Use --profile <name> to override the active profile at runtime.
$ evalhub config set tenant my-namespace
All API requests must also include an Authorization: Bearer $TOKEN header. The curl examples in this guide assume you have stored the EvalHub route URL in the EVALHUB_URL environment variable and a valid bearer token in the TOKEN environment variable. For information about obtaining the route URL, see Section 5.3, “Deploy EvalHub with the TrustyAI Operator”. For information about obtaining a bearer token, see Section 5.24, “Grant access to EvalHub”.
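For example, the following Python sketch sends the same headers by using the requests library; the tenant name is a placeholder:

import os
import requests

base_url = os.environ["EVALHUB_URL"]
headers = {
    "Authorization": f"Bearer {os.environ['TOKEN']}",
    "X-Tenant": "my-namespace",  # the target tenant namespace
}

# The health endpoint is the only endpoint that does not require the X-Tenant header.
print(requests.get(f"{base_url}/api/v1/health").json())

# Tenant-scoped request: list evaluation jobs in my-namespace.
resp = requests.get(f"{base_url}/api/v1/evaluations/jobs", headers=headers)
resp.raise_for_status()
print(resp.json())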
5.5. List EvalHub providers and benchmarks
List the evaluation providers and benchmarks registered in EvalHub to see which evaluation frameworks and tasks are available for your jobs. You can list providers using the REST API, Python SDK, or CLI.
Prerequisites
- You have a running EvalHub instance.
Procedure
List all registered providers:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/providers | jq .

Example output

{
  "items": [
    {
      "resource": { "id": "lm_evaluation_harness", "owner": "system" },
      "name": "lm_evaluation_harness",
      "title": "LM Evaluation Harness",
      "benchmarks": [ ... ]
    },
    {
      "resource": { "id": "garak", "owner": "system" },
      "name": "garak",
      "title": "Garak",
      "benchmarks": [ ... ]
    }
  ]
}

Get a specific provider with its benchmarks:

$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/providers/lm_evaluation_harness | jq .

Example output

{
  "resource": { "id": "lm_evaluation_harness", "owner": "system" },
  "name": "lm_evaluation_harness",
  "title": "LM Evaluation Harness",
  "benchmarks": [
    { "id": "mmlu", "name": "MMLU", "category": "reasoning" },
    { "id": "hellaswag", "name": "HellaSwag", "category": "reasoning" },
    { "id": "arc_challenge", "name": "ARC Challenge", "category": "reasoning" },
    ...
  ]
}
Alternatively, use the Python SDK:
from evalhub.client import SyncEvalHubClient
client = SyncEvalHubClient(
base_url="https://evalhub.example.com",
tenant="my-namespace"
)
for provider in client.providers.list():
print(f"{provider.resource.id}: {provider.name}")
benchmarks = client.benchmarks.list(provider_id="lm_evaluation_harness")
for b in benchmarks:
print(f" {b.id}: {b.name}")
Example output
lm_evaluation_harness: LM Evaluation Harness
garak: Garak
guidellm: GuideLLM
mmlu: Massive Multitask Language Understanding
hellaswag: HellaSwag
gsm8k: Grade School Math 8K
...
Alternatively, use the CLI:
$ evalhub providers list
Example output
ID NAME DESCRIPTION BENCHMARKS
lm_evaluation_harness LM Evaluation Harness EleutherAI language model evaluation 167
garak Garak LLM vulnerability and safety scanner 12
guidellm GuideLLM Performance benchmarking 4
To get details for a specific provider:
$ evalhub providers describe lm_evaluation_harness
Example output
Provider: LM Evaluation Harness
ID: lm_evaluation_harness
Description: EleutherAI language model evaluation framework
Benchmarks (167):
ID NAME CATEGORY METRICS
mmlu Massive Multitask Language Und… knowledge acc, acc_norm
hellaswag HellaSwag reasoning acc, acc_norm
gsm8k Grade School Math 8K math exact_match
arc_easy ARC Easy reasoning acc, acc_norm
...
Verification
- Confirm that the provider list is not empty and includes the built-in providers enabled in your EvalHub deployment.
5.6. Submit an evaluation job
Submit an evaluation job in EvalHub by specifying a model endpoint and one or more benchmarks. EvalHub runs the benchmarks against the model and returns a job ID that you can use to track results.
Prerequisites
- You have a running EvalHub instance.
- You have a model endpoint accessible from within the cluster.
- You know which providers and benchmarks are available. See Section 5.5, “List EvalHub providers and benchmarks”.
Procedure
Submit a job by specifying the model endpoint and one or more benchmarks:
$ curl -X POST $EVALHUB_URL/api/v1/evaluations/jobs \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Tenant: <namespace>" \
  -d '{
    "model": {
      "url": "http://my-model.my-namespace.svc.cluster.local:8080/v1",
      "name": "my-model"
    },
    "benchmarks": [
      { "provider_id": "lm_evaluation_harness", "benchmark_id": "mmlu" },
      { "provider_id": "lm_evaluation_harness", "benchmark_id": "hellaswag" }
    ]
  }'

Note: Most providers expect the model URL to point to an OpenAI-compatible inference endpoint. The required URL format may vary depending on the provider. Check the provider documentation for specific requirements.

The server returns a 202 Accepted response with the job resource, including a job ID for tracking.
Alternatively, use the Python SDK:
from evalhub.client import SyncEvalHubClient
from evalhub.models import JobSubmissionRequest, ModelConfig, BenchmarkConfig
client = SyncEvalHubClient(
base_url="https://evalhub.example.com",
tenant="my-namespace"
)
job = client.jobs.create(JobSubmissionRequest(
model=ModelConfig(
url="http://my-model.my-namespace.svc.cluster.local:8080/v1",
name="my-model"
),
benchmarks=[
BenchmarkConfig(provider_id="lm_evaluation_harness", benchmark_id="mmlu"),
BenchmarkConfig(provider_id="lm_evaluation_harness", benchmark_id="hellaswag"),
]
))
print(f"Job ID: {job.resource.id}")
Alternatively, use the CLI:
$ evalhub eval run \
--name my-eval \
--model-url http://my-model.my-namespace.svc.cluster.local:8080/v1 \
--model-name my-model \
--provider lm_evaluation_harness \
-b mmlu -b hellaswag
You can also submit from a YAML config file:
$ evalhub eval run --config evaljob.yaml
Verification
Confirm the job is registered and check its status:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .status.state

The job status transitions from pending to running to completed.

Alternatively, use the CLI:

$ evalhub eval status <job_id>

Alternatively, use the Python SDK:

job = client.jobs.get(job_id)
print(job.state)
5.7. Track evaluation jobs and results
Track the status of running evaluation jobs and retrieve results after completion. You can check individual jobs, list all jobs, and filter by status.
Prerequisites
- You have submitted an evaluation job to EvalHub.
- You have the job ID returned from the submission.
Procedure
Check the status of a specific job:
$ curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .

Example response for a completed job

{
  "resource": { "id": "<job_id>", "tenant": "<namespace>", "created_at": "2026-04-22T10:00:00Z" },
  "status": {
    "state": "completed",
    "benchmarks": [
      { "id": "mmlu", "provider_id": "lm_evaluation_harness", "status": "completed" },
      { "id": "hellaswag", "provider_id": "lm_evaluation_harness", "status": "completed" }
    ]
  },
  "results": {
    "benchmarks": [
      { "id": "mmlu", "provider_id": "lm_evaluation_harness", "metrics": { "acc": 0.65, "acc_norm": 0.68 } },
      { "id": "hellaswag", "provider_id": "lm_evaluation_harness", "metrics": { "acc": 0.72, "acc_norm": 0.75 } }
    ]
  },
  "name": "my-eval",
  "model": { "url": "http://my-model:8080/v1", "name": "my-model" },
  ...
}

After the job completes, retrieve the benchmark results:

$ curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .results

The results object contains benchmark scores, metrics, and pass/fail outcomes. If pass criteria are configured, the results include a test field with the overall score, threshold, and pass/fail status.

List all jobs, optionally filtered by status:

$ curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: <namespace>" \
  "$EVALHUB_URL/api/v1/evaluations/jobs?status=completed&limit=10" | jq .

Table 5.2. Job query parameters

| Parameter | Default | Description |
|---|---|---|
| limit | 50 | Maximum number of results to return. The maximum allowed value is 100. |
| offset | 0 | Number of results to skip for pagination. |
| status | — | Filter by job state: pending, running, completed, failed, cancelled, partially_failed. |
| name | — | Filter by job name. Uses exact, case-sensitive matching. |
| tags | — | Filter by a single tag. Returns jobs that contain the specified tag in their tags list. |
| owner | — | Filter by the authenticated username of the job owner, for example system:serviceaccount:<namespace>:<name> for a ServiceAccount or the OpenShift username. |
| experiment_id | — | Filter by MLflow experiment ID. |
Alternatively, use the CLI.
To watch a job’s status in real time, use the --watch flag. The CLI polls the job at regular intervals and displays benchmark progress until the job reaches a terminal state:
$ evalhub eval status --watch <job_id>
To retrieve formatted results after a job completes:
$ evalhub eval results <job_id> --format table
Example output
BENCHMARK PROVIDER METRIC VALUE
mmlu lm_evaluation_harness acc 0.65
mmlu lm_evaluation_harness acc_norm 0.68
hellaswag lm_evaluation_harness acc 0.72
hellaswag lm_evaluation_harness acc_norm 0.75
The --format flag supports table, json, yaml, and csv.
Alternatively, use the Python SDK.
To check the status of a specific job:
job = client.jobs.get(job_id)
print(f"State: {job.state}")
To wait for a job to complete:
result = client.jobs.wait_for_completion(job_id, timeout=3600, poll_interval=5.0)
for b in result.results.benchmarks:
print(f"{b.id}: {b.metrics}")
To list jobs filtered by status:
from evalhub.models import JobStatus
completed_jobs = client.jobs.list(status=JobStatus.COMPLETED, limit=10)
for job in completed_jobs:
print(f"{job.id}: {job.state}")
5.8. Cancel and delete jobs
Cancel a running evaluation job or permanently delete a job record from the database.
Prerequisites
- You have submitted an evaluation job to EvalHub.
- You have the job ID of the job to cancel or delete.
-
You have
deletepermissions on theevaluationsvirtual resource in the tenant namespace. For more information, see Section 5.24, “Grant access to EvalHub”.
Procedure
Run one of the following commands depending on whether you want to cancel or permanently delete the job:
To cancel a running job with a soft delete, where the job is marked as cancelled but the record is preserved for auditing, run the following command:

$ curl -X DELETE -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" $EVALHUB_URL/api/v1/evaluations/jobs/<job_id>

To permanently delete a job record from the database, run the following command with the hard_delete query parameter:

Warning: The hard_delete operation permanently removes the job record from the database. This action cannot be undone, and the job results will no longer be available for auditing.

$ curl -X DELETE -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" "$EVALHUB_URL/api/v1/evaluations/jobs/<job_id>?hard_delete=true"
For both soft and hard deletes, EvalHub cleans up associated Job and ConfigMap Kubernetes resources in the tenant namespace before updating or removing the record. The server returns 204 No Content on success.
Alternatively, use the CLI.
To cancel a running job with a soft delete:
$ evalhub eval cancel <job_id>
To permanently delete a job with a hard delete:
$ evalhub eval cancel <job_id> --hard
Alternatively, use the Python SDK.
To cancel a running job with a soft delete:
client.jobs.cancel(job_id)
To permanently delete a job with a hard delete:
client.jobs.cancel(job_id, hard_delete=True)
Verification
For a soft delete, verify the job status is cancelled:

$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .status.state

Alternatively, use the CLI:

$ evalhub eval status <job_id>

Alternatively, use the Python SDK:

job = client.jobs.get(job_id)
print(job.state)

For a hard delete, verify the job returns 404 Not Found:

$ curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id>

The CLI and Python SDK raise an error when retrieving a hard-deleted job, confirming that the record has been removed.
5.9. EvalHub built-in collections
EvalHub includes several built-in collections that group benchmarks from one or more providers into reusable evaluation suites. Each benchmark in a collection can have its own weight, primary score metric, and pass criteria threshold. For more information, see Section 5.1, “Understanding EvalHub”.
| Collection | Category | Description | Benchmarks |
|---|---|---|---|
| leaderboard-v2 | general | Open LLM Leaderboard v2. Comprehensive evaluation suite for general-purpose language models. | — |
| safety-and-fairness-v1 | safety | Evaluates model safety, bias, and fairness across diverse scenarios. | Includes toxigen, ethics_cm, winogender, and crows_pairs_english. |
| — | safety | End-to-end safety assessment covering toxic content generation, tendency to produce false or misleading information, and alignment with ethical principles. | — |
Each built-in collection defines per-benchmark weights and thresholds. For example, the safety-and-fairness-v1 collection assigns higher weights to toxigen and ethics_cm (weight 3) than to winogender and crows_pairs_english (weight 1), which gives these benchmarks greater influence on the overall safety score.
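The following Python sketch shows how this weighting works; the weights come from the collection definition, while the scores are hypothetical:

# Weighted overall score for safety-and-fairness-v1 with hypothetical scores.
weights = {"toxigen": 3, "ethics_cm": 3, "winogender": 1, "crows_pairs_english": 1}
scores = {"toxigen": 0.80, "ethics_cm": 0.70, "winogender": 0.60, "crows_pairs_english": 0.90}

overall = sum(weights[b] * scores[b] for b in weights) / sum(weights.values())
print(overall)  # (3*0.80 + 3*0.70 + 1*0.60 + 1*0.90) / 8 = 0.75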
5.10. Create a custom collection in EvalHub
Create a custom collection that groups benchmarks from one or more providers into a reusable evaluation job.
Prerequisites
- You have a running EvalHub instance.
Procedure
Create a collection:
$ curl -X POST $EVALHUB_URL/api/v1/evaluations/collections \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Tenant: <namespace>" \
  -d '{
    "name": "my-safety-suite",
    "category": "safety",
    "benchmarks": [
      {"provider_id": "lm_evaluation_harness", "benchmark_id": "truthfulqa_mc2"},
      {"provider_id": "garak", "benchmark_id": "owasp_llm_top_10"}
    ]
  }'

Example response

{
  "resource": {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "tenant": "<namespace>",
    "created_at": "2026-04-22T10:00:00Z",
    "owner": "<username>"
  },
  "name": "my-safety-suite",
  "category": "safety",
  "benchmarks": [
    {"provider_id": "lm_evaluation_harness", "id": "truthfulqa_mc2"},
    {"provider_id": "garak", "id": "owasp_llm_top_10"}
  ]
}
Alternatively, use the CLI with a YAML spec file:
my-safety-suite.yaml
name: my-safety-suite
category: safety
benchmarks:
- provider_id: lm_evaluation_harness
benchmark_id: truthfulqa_mc2
- provider_id: garak
benchmark_id: owasp_llm_top_10
$ evalhub collections create --file my-safety-suite.yaml
Alternatively, use the Python SDK:
collection = client.collections.create({
"name": "my-safety-suite",
"category": "safety",
"benchmarks": [
{"provider_id": "lm_evaluation_harness", "benchmark_id": "truthfulqa_mc2"},
{"provider_id": "garak", "benchmark_id": "owasp_llm_top_10"}
]
})
Verification
Confirm the collection was created:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/collections/<collection_id> | jq .

Alternatively, use the CLI:

$ evalhub collections describe <collection_id>

Alternatively, use the Python SDK:
collection = client.collections.get(collection_id)
Using a collection in a job
After creating a collection, you can submit evaluation jobs that reference it. The following example shows a job submission using the created collection:
$ curl -X POST $EVALHUB_URL/api/v1/evaluations/jobs \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "X-Tenant: <namespace>" \
-d '{
"model": {
"url": "http://my-model.my-namespace.svc.cluster.local:8080/v1",
"name": "my-model"
},
"collection": {
"id": "<collection_id>"
}
}'
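Alternatively, use the Python SDK. Whether JobSubmissionRequest accepts a collection reference in exactly this shape is an assumption based on the REST payload above; check the SDK reference for the authoritative model:

from evalhub.client import SyncEvalHubClient
from evalhub.models import JobSubmissionRequest, ModelConfig

client = SyncEvalHubClient(base_url="https://evalhub.example.com", tenant="my-namespace")

# Assumption: the SDK forwards the collection reference as {"id": ...},
# mirroring the REST request body shown above.
job = client.jobs.create(JobSubmissionRequest(
    model=ModelConfig(
        url="http://my-model.my-namespace.svc.cluster.local:8080/v1",
        name="my-model",
    ),
    collection={"id": "<collection_id>"},
))
print(job.resource.id)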
5.11. Configure API key authentication for model endpoints
Configure EvalHub to authenticate to a model endpoint using an API key stored as a Kubernetes Secret.
Prerequisites
- You have the model endpoint URL.
- You have the API key for your model endpoint.
Procedure
Create a Secret containing your API key:
model-auth.yaml

apiVersion: v1
kind: Secret
metadata:
  name: model-auth
type: Opaque
stringData:
  api-key: "<api-key>"

Apply the Secret to the tenant namespace:
$ oc apply -f model-auth.yaml -n <namespace>
Verification
Confirm that the Secret was created and contains the expected api-key key:

$ oc get secret model-auth -n <namespace> -o jsonpath='{.data}' | jq 'keys'

The output should include api-key.
Next steps
When you submit an evaluation job, include an auth field in the model object to reference the Secret:
Example model configuration with API key authentication
"model": {
"url": "http://my-model.my-namespace.svc.cluster.local:8080/v1",
"name": "my-model",
"auth": {
"secret_ref": "model-auth"
}
}
where:
secret_ref — Specifies the name of the Secret that contains the API key.
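When submitting through the Python SDK, the equivalent model configuration might look like the following sketch. That ModelConfig accepts an auth mapping in this exact shape is an assumption based on the REST payload above:

from evalhub.models import ModelConfig

# Assumption: ModelConfig forwards the auth block unchanged to the REST API.
model = ModelConfig(
    url="http://my-model.my-namespace.svc.cluster.local:8080/v1",
    name="my-model",
    auth={"secret_ref": "model-auth"},  # references the Secret created above
)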
5.12. Authenticate models with a ServiceAccount token
For models served with KServe and protected by kube-rbac-proxy, EvalHub can use automatic ServiceAccount token injection.
Procedure
- Create a RoleBinding granting the job ServiceAccount access to the model’s InferenceService.
For more information about creating a ServiceAccount and RoleBinding for model authentication, see Making authenticated inference requests in Deploying models with distributed inference.
5.13. Use custom data from S3 for EvalHub evaluations
You can load external test datasets from S3-compatible storage, such as MinIO or Amazon S3, before an evaluation runs. When configured, EvalHub schedules an init container that downloads the data to /test_data inside the Job pod. The adapter can then read the files from that path.
This feature only applies when EvalHub runs benchmarks as Jobs. It does not apply to local-only evaluation runs.
Prerequisites
- You have an S3-compatible storage endpoint with your test dataset already uploaded to a bucket.
- You have the S3 credentials for your storage endpoint.
Procedure
Create a Secret containing your S3 credentials:

my-s3-credentials.yaml

apiVersion: v1
kind: Secret
metadata:
  name: my-s3-credentials
  namespace: <namespace>
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<your-access-key>"
  AWS_SECRET_ACCESS_KEY: "<your-secret-key>"
  AWS_DEFAULT_REGION: "<your-region>"
  AWS_S3_ENDPOINT: "<your-s3-endpoint>"

where:

AWS_DEFAULT_REGION — Specifies the region for your S3-compatible storage, for example us-east-1.
AWS_S3_ENDPOINT — Specifies the endpoint URL for your S3-compatible storage, for example https://minio.example.com:9000 for MinIO. For Amazon S3, you can omit this field or use the default AWS endpoint.

$ oc apply -f my-s3-credentials.yaml
When you submit an evaluation job, add a test_data_ref block to each benchmark that requires external data:

Example S3 test data configuration in a job submission

"benchmarks": [
  {
    "provider_id": "lm_evaluation_harness",
    "benchmark_id": "mmlu",
    "test_data_ref": {
      "s3": {
        "bucket": "my-eval-data",
        "key": "datasets/mmlu",
        "secret_ref": "my-s3-credentials"
      }
    }
  }
]

where:

s3.bucket — Specifies the S3 bucket name.
s3.key — Specifies the S3 key prefix for the dataset files.
s3.secret_ref — Specifies the name of the Secret containing the S3 credentials.

For the full job submission request, see Section 5.6, “Submit an evaluation job”.

The init container downloads all objects under the specified S3 prefix to /test_data, preserving the relative directory structure. The secret_ref must reference a Secret in the tenant namespace.
The expected file format and directory structure of the test data depend on the adapter and benchmark. See the adapter documentation for the required data layout.
Alternatively, use the CLI:
$ evalhub eval run \
--name s3-data-eval \
--model-url http://my-model.my-namespace.svc.cluster.local:8080/v1 \
--model-name my-model \
--provider lm_evaluation_harness \
--benchmark mmlu \
--test-data-s3-bucket my-eval-data \
--test-data-s3-key datasets/mmlu \
--test-data-s3-secret my-s3-credentials
Alternatively, use the Python SDK:
from evalhub.models import (
JobSubmissionRequest, ModelConfig, BenchmarkConfig,
TestDataRef, S3TestDataRef
)
job = client.jobs.submit(JobSubmissionRequest(
name="s3-data-eval",
model=ModelConfig(
url="http://my-model.my-namespace.svc.cluster.local:8080/v1",
name="my-model"
),
benchmarks=[
BenchmarkConfig(
id="mmlu",
provider_id="lm_evaluation_harness",
test_data_ref=TestDataRef(
s3=S3TestDataRef(
bucket="my-eval-data",
key="datasets/mmlu",
secret_ref="my-s3-credentials",
)
),
)
],
))
Collections also support test_data_ref on individual benchmarks, allowing you to define custom data sources as part of a reusable evaluation suite.
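Inside the Job pod, the adapter reads the downloaded files from /test_data. The following Python sketch shows adapter-side loading; the JSONL file layout is an assumption, because the required format depends on the adapter and benchmark:

from pathlib import Path

TEST_DATA_DIR = Path("/test_data")  # populated by the init container

# The relative directory structure from the S3 prefix is preserved here.
# JSONL is only an example format; check the adapter documentation.
for path in sorted(TEST_DATA_DIR.rglob("*.jsonl")):
    print(path.relative_to(TEST_DATA_DIR))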
Verification
Confirm that the job completes successfully. If the init container fails to download data from S3, the job transitions to the failed state.

$ curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .status.state

If the job fails, check the init container logs for download errors:
$ oc logs <pod_name> -c init -n <namespace>
5.14. Export evaluation results to an OCI registry
EvalHub can export evaluation artifacts, such as logs, metrics, and outputs, by pushing them to an Open Container Initiative (OCI) compatible registry for long-term storage and traceability.
Prerequisites
- You have access to an OCI-compatible container registry such as Quay.io.
- You have registry credentials for the OCI registry.
Procedure
Create a kubernetes.io/dockerconfigjson Secret with your registry credentials:

$ oc create secret docker-registry oci-registry-credentials \
  --docker-server=quay.io \
  --docker-username=<username> \
  --docker-password=<password> \
  -n <namespace>

When you submit an evaluation job, include an exports block in the job submission body:

Example OCI export configuration in a job submission

"benchmarks": [
  { "provider_id": "lm_evaluation_harness", "benchmark_id": "mmlu" }
],
"exports": {
  "oci": {
    "coordinates": {
      "oci_host": "quay.io",
      "oci_repository": "my-org/eval-results"
    },
    "k8s": {
      "connection": "oci-registry-credentials"
    }
  }
}

where:

oci.coordinates.oci_host — Specifies the OCI registry hostname.
oci.coordinates.oci_repository — Specifies the repository path within the registry.
oci.k8s.connection — Specifies the name of the Secret containing the registry credentials.
For the full job submission request, see Section 5.6, “Submit an evaluation job”.
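As a complete request, the following Python sketch submits the job with the exports block through the REST API by using the requests library; the model URL and repository values are placeholders:

import os
import requests

payload = {
    "model": {
        "url": "http://my-model.my-namespace.svc.cluster.local:8080/v1",
        "name": "my-model",
    },
    "benchmarks": [
        {"provider_id": "lm_evaluation_harness", "benchmark_id": "mmlu"}
    ],
    "exports": {
        "oci": {
            "coordinates": {"oci_host": "quay.io", "oci_repository": "my-org/eval-results"},
            "k8s": {"connection": "oci-registry-credentials"},
        }
    },
}

resp = requests.post(
    f"{os.environ['EVALHUB_URL']}/api/v1/evaluations/jobs",
    headers={
        "Authorization": f"Bearer {os.environ['TOKEN']}",
        "X-Tenant": "my-namespace",
    },
    json=payload,
)
resp.raise_for_status()  # the server returns 202 Accepted
print(resp.json()["resource"]["id"])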
Results artifacts from the evaluation frameworks are stored as OCI artifacts with separate layers, allowing selective access to specific outputs.
Verification
After the job completes, retrieve the OCI artifact reference from the job results:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq '.results.benchmarks[0].artifacts'

Verify the artifact exists in the registry by using skopeo:

$ skopeo inspect --creds <username>:<password> docker://quay.io/my-org/eval-results:<tag>

The tag is in the format evalhub-<hash>, where the hash is derived from the job ID, provider, and benchmark. You can find the full OCI reference, including the tag, in the job results.
5.15. Configure MLflow experiment tracking for evaluation jobs
When MLflow is configured for EvalHub, you can associate evaluation jobs with designated MLflow experiments. EvalHub automatically logs benchmark metrics as MLflow runs within the experiment.
Prerequisites
- You have a running MLflow instance accessible from the EvalHub deployment.
- You have configured the MLflow tracking URI in the EvalHub configuration. See Section 5.21, “EvalHub configuration reference” for details.
Procedure
When you submit an evaluation job, include an experiment block in the job submission body:

Example experiment configuration in a job submission

"benchmarks": [
  { "provider_id": "lm_evaluation_harness", "benchmark_id": "mmlu" }
],
"experiment": {
  "name": "my-model-v2-eval"
}

For the full job submission request, see Section 5.6, “Submit an evaluation job”.
When using the CLI, include the experiment field in your YAML config file:
Example experiment fragment in a YAML config file
experiment:
name: my-model-v2-eval
$ evalhub eval run --config eval-with-mlflow.yaml
For the full YAML config file structure, see Section 5.6, “Submit an evaluation job”.
When using the Python SDK, pass an ExperimentConfig to the JobSubmissionRequest:
from evalhub.models import ExperimentConfig
experiment=ExperimentConfig(name="my-model-v2-eval")
For the full JobSubmissionRequest, see Section 5.6, “Submit an evaluation job”.
Verification
When the job completes, the results section includes an mlflow_experiment_url linking to the experiment in the MLflow UI:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
$EVALHUB_URL/api/v1/evaluations/jobs/<job_id> | jq .results.mlflow_experiment_url
Example output
"https://mlflow.example.com/#/experiments/42"
Alternatively, use the CLI. The evalhub eval results command automatically displays the MLflow experiment URL when available:
$ evalhub eval results <job_id>
Alternatively, use the Python SDK:
job = client.jobs.get(job_id)
print(job.results.mlflow_experiment_url)
5.16. Add a custom provider by using the API
Register a custom provider by using the REST API. A provider definition includes a name, a container image for the adapter runtime, and a list of benchmarks. For more information about adapters, see Section 5.1, “Understanding EvalHub”.
Prerequisites
- You have a running EvalHub instance.
- You have a container image for your custom adapter packaged as a UBI9 image.
Procedure
Register the custom provider:
$ curl -X POST $EVALHUB_URL/api/v1/evaluations/providers \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Tenant: <namespace>" \
  -d '{
    "name": "my-custom-provider",
    "title": "My Custom Provider",
    "description": "Custom evaluation framework for domain-specific benchmarks.",
    "benchmarks": [
      {
        "id": "domain_accuracy",
        "name": "Domain Accuracy",
        "category": "general",
        "metrics": ["accuracy", "f1"],
        "primary_score": { "metric": "accuracy", "lower_is_better": false },
        "pass_criteria": { "threshold": 0.8 }
      }
    ],
    "runtime": {
      "k8s": {
        "image": "quay.io/my-org/my-adapter:latest",
        "cpu_request": "500m",
        "memory_request": "512Mi",
        "cpu_limit": "2000m",
        "memory_limit": "4Gi"
      }
    }
  }'

Example response

{
  "resource": {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "tenant": "<namespace>",
    "created_at": "2026-04-22T10:00:00Z",
    "owner": "<username>"
  },
  "name": "my-custom-provider",
  "title": "My Custom Provider",
  "description": "Custom evaluation framework for domain-specific benchmarks.",
  "benchmarks": [
    {
      "id": "domain_accuracy",
      "name": "Domain Accuracy",
      "category": "general",
      "metrics": ["accuracy", "f1"],
      "primary_score": { "metric": "accuracy", "lower_is_better": false },
      "pass_criteria": { "threshold": 0.8 }
    }
  ],
  "runtime": {
    "k8s": {
      "image": "quay.io/my-org/my-adapter:latest",
      "cpu_request": "500m",
      "memory_request": "512Mi",
      "cpu_limit": "2000m",
      "memory_limit": "4Gi"
    }
  }
}
The runtime.k8s section specifies the container image and resource requests for the adapter pod. Each benchmark must declare an id, name, and category. The optional primary_score and pass_criteria fields set default thresholds for the benchmark.
User-created providers can be updated and deleted through the API. Built-in providers with owner: system are read-only.
The Python SDK and CLI do not support creating providers. Use the REST API to register custom providers.
Verification
Confirm the provider was registered by retrieving it with the ID from the response:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/providers/<provider_id> | jq .name

The output should return "my-custom-provider".

Alternatively, use the CLI:

$ evalhub providers describe <provider_id>

Alternatively, use the Python SDK:

provider = client.providers.get(provider_id)
print(provider.name)
5.17. Add a custom provider by using a ConfigMap
Add providers at the operator level by creating a ConfigMap in the operator namespace with the appropriate labels. The TrustyAI Operator discovers these ConfigMaps by label and mounts them into the EvalHub deployment automatically. Providers registered this way are system-owned, read-only, and available to all tenants. To register a tenant-scoped provider that can be updated or deleted, use the REST API instead. See Section 5.16, “Add a custom provider by using the API”.
Prerequisites
- You have a running EvalHub deployment.
- You have a container image for your custom adapter. See Section 5.19, “Write a custom evaluation adapter”.
- You have cluster administrator privileges or permissions to create ConfigMap resources in the operator namespace.
- You have permissions to edit the EvalHub custom resource.
Procedure
Create a ConfigMap in the EvalHub custom resource namespace with the provider definition:

evalhub-provider-my-custom-provider.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: evalhub-provider-my-custom-provider
  namespace: <evalhub-namespace>
  labels:
    trustyai.opendatahub.io/evalhub-provider-type: system
    trustyai.opendatahub.io/evalhub-provider-name: my-custom-provider
data:
  my-custom-provider.yaml: |
    id: my-custom-provider
    name: My Custom Provider
    description: Custom evaluation framework for domain-specific benchmarks.
    runtime:
      k8s:
        image: quay.io/my-org/my-adapter:latest
        cpu_request: "500m"
        memory_request: "512Mi"
        cpu_limit: "2000m"
        memory_limit: "4Gi"
    benchmarks:
      - id: domain_accuracy
        name: Domain Accuracy
        category: general
        metrics:
          - accuracy
          - f1
        primary_score:
          metric: accuracy
          lower_is_better: false
        pass_criteria:
          threshold: 0.8

$ oc apply -f evalhub-provider-my-custom-provider.yaml

Reference the provider name in your EvalHub custom resource by adding it to the spec.providers list:

Example spec.providers fragment

spec:
  providers:
    - lm_evaluation_harness
    - garak
    - my-custom-provider

For the full EvalHub custom resource structure, see Section 5.3, “Deploy EvalHub with the TrustyAI Operator”.
The operator copies the ConfigMap to the instance namespace and mounts it as a projected volume at /etc/evalhub/config/providers. The EvalHub server loads all provider YAML files from this directory at startup.
Verification
Confirm that the ConfigMap was created:

$ oc get configmap evalhub-provider-my-custom-provider -n <evalhub-namespace>

Check that the EvalHub deployment has restarted and is ready:

$ oc get pods -l app=eval-hub -n <evalhub-namespace>

Confirm the custom provider is loaded:

$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/providers/my-custom-provider | jq .name

The output should return "My Custom Provider".
5.18. Add a collection by using a ConfigMap
Add collections at the operator level by creating a ConfigMap in the operator namespace with the appropriate labels. The TrustyAI Operator discovers these ConfigMaps by label and mounts them into the EvalHub deployment automatically. Collections registered this way are system-owned, read-only, and available to all tenants. To create a tenant-scoped collection that can be updated or deleted, use the REST API instead. See Section 5.10, “Create a custom collection in EvalHub”.
Prerequisites
- You have a running EvalHub deployment.
- You have cluster administrator privileges or permissions to create ConfigMap resources in the operator namespace.
- You have permissions to edit the EvalHub custom resource.
- You know which provider-benchmark pairs you want to include in the collection. See Section 5.5, “List EvalHub providers and benchmarks”.
Procedure
Create a ConfigMap in the EvalHub custom resource namespace with the collection definition:

evalhub-collection-my-eval-suite.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: evalhub-collection-my-eval-suite
  namespace: <evalhub-namespace>
  labels:
    trustyai.opendatahub.io/evalhub-collection-type: system
    trustyai.opendatahub.io/evalhub-collection-name: my-eval-suite
data:
  my-eval-suite.yaml: |
    id: my-eval-suite
    name: My Evaluation Suite
    category: general
    description: Custom evaluation suite for internal model validation.
    pass_criteria:
      threshold: 0.7
    benchmarks:
      - id: mmlu
        provider_id: lm_evaluation_harness
        weight: 2
        primary_score:
          metric: acc_norm
          lower_is_better: false
        pass_criteria:
          threshold: 0.6
      - id: hellaswag
        provider_id: lm_evaluation_harness
        weight: 1
        primary_score:
          metric: acc_norm
          lower_is_better: false
        pass_criteria:
          threshold: 0.7

$ oc apply -f evalhub-collection-my-eval-suite.yaml

Reference the collection in your EvalHub custom resource by adding the collection name to the spec.collections list:

Example spec.collections fragment

spec:
  collections:
    - leaderboard-v2
    - safety-and-fairness-v1
    - my-eval-suite

For the full EvalHub custom resource structure, see Section 5.3, “Deploy EvalHub with the TrustyAI Operator”.
The operator mounts collection ConfigMaps at /etc/evalhub/config/collections.
Verification
Confirm that the ConfigMap was created:

$ oc get configmap evalhub-collection-my-eval-suite -n <evalhub-namespace>

Check that the EvalHub deployment has restarted and is ready:

$ oc get pods -l app=eval-hub -n <evalhub-namespace>

List collections and confirm the custom collection appears:

$ curl -s -H "Authorization: Bearer $TOKEN" -H "X-Tenant: <namespace>" \
  $EVALHUB_URL/api/v1/evaluations/collections/my-eval-suite | jq .name

The output should return "My Evaluation Suite".
5.19. Write a custom evaluation adapter
An adapter translates EvalHub job requests into evaluation framework-specific commands. To write a custom adapter, install the EvalHub SDK with adapter dependencies and implement a single method.
Prerequisites
- You have Python 3.11 or later installed.
- You have an evaluation framework that you want to integrate with EvalHub.
- You have podman or another container build tool installed to package the adapter as a container image.
Procedure
Install the EvalHub SDK with the adapter extra:
$ pip install "eval-hub-sdk[adapter]"Create a class that extends
FrameworkAdapterand implementsrun_benchmark_job:from evalhub.adapter import FrameworkAdapter from evalhub.models import JobSpec, JobCallbacks, JobResults, JobStatusUpdate, JobPhase class MyAdapter(FrameworkAdapter): def run_benchmark_job(self, config: JobSpec, callbacks: JobCallbacks) -> JobResults: callbacks.report_status(JobStatusUpdate( phase=JobPhase.RUNNING_EVALUATION, message="Running evaluation" )) # Replace with your framework's evaluation function scores = run_my_framework( model_url=config.model.url, benchmark=config.benchmark_id, parameters=config.parameters ) return JobResults( id=config.id, benchmark_id=config.benchmark_id, benchmark_index=config.benchmark_index, model_name=config.model.name, results=scores, num_examples_evaluated=len(scores), duration_seconds=self._get_duration() # Implement to return elapsed seconds )The framework handles loading the job specification from the mounted
ConfigMap, authenticating with the sidecar proxy container that communicates with the EvalHub server, and reporting results. Your adapter only needs to run the evaluation and return the results. For more information about the adapter and sidecar architecture, see Section 5.2, “EvalHub architecture overview”.Package your adapter as a Red Hat Universal Base Image 9 (UBI9) container image. Create a
Containerfilein your adapter directory:ContainerfileFROM registry.access.redhat.com/ubi9/python-312 WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt COPY main.py /app/main.py ENTRYPOINT ["python", "main.py"]Build the image:
$ podman build -t quay.io/my-org/my-adapter:latest .

Push the image to a container registry:

$ podman push quay.io/my-org/my-adapter:latest
- Reference the image in the provider’s runtime.k8s.image field when registering the provider. See Section 5.16, “Add a custom provider by using the API”.
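The Containerfile in this procedure copies a main.py entry point into the image. The following is a minimal sketch of what that file might contain; the run_adapter helper is hypothetical, so check the SDK adapter documentation for the actual entry-point API:

# main.py - hypothetical adapter entry point
from evalhub.adapter import run_adapter  # assumed helper; not confirmed by this guide

from my_adapter import MyAdapter

if __name__ == "__main__":
    # Expected to load the mounted job specification and invoke
    # MyAdapter.run_benchmark_job, reporting results through the sidecar.
    run_adapter(MyAdapter())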
The following tables describe the JobSpec and JobCallbacks interfaces available to your adapter.
| Field | Description |
|---|---|
| id | Unique job identifier. |
| provider_id | Identifier of the provider that the benchmark belongs to. |
| benchmark_id | Identifier of the benchmark to evaluate. |
| benchmark_index | Index of this benchmark within the job. |
| model | Model configuration, including the model url and name. |
| parameters | Benchmark-specific parameters. |
| — | The number of examples to evaluate. |
| exports | Optional OCI artifact export specification. |
| Method | Purpose |
|---|---|
| report_status | Sends progress updates including the phase, message, and completed/total steps. |
| — | Pushes evaluation artifacts to an OCI registry. |
| — | Reports the final results to the EvalHub server. This method is called automatically if you return a JobResults object from run_benchmark_job. |
5.20. EvalHub API endpoints
All endpoints use the path prefix /api/v1. The OpenAPI 3.1.0 specification is available at /openapi.yaml and interactive documentation is available at /docs.
5.20.1. Evaluation job endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/evaluations/jobs | POST | Create and submit an evaluation job. Returns 202 Accepted with the job resource. |
| /api/v1/evaluations/jobs | GET | List evaluation jobs with pagination and filtering. |
| /api/v1/evaluations/jobs/{job_id} | GET | Get a specific evaluation job with current status and results. |
| /api/v1/evaluations/jobs/{job_id} | DELETE | Cancel or hard-delete a job. Use the hard_delete=true query parameter for a hard delete. |
| — | POST | Submit job status events from the adapter runtime. |
| State | Description |
|---|---|
| pending | The job is created and awaiting execution. |
| running | The evaluation is actively running. |
| completed | All benchmarks completed successfully. |
| failed | The evaluation encountered a fatal error. |
| cancelled | The user canceled the job. |
| partially_failed | Some benchmarks succeeded and others failed. |
5.20.2. Provider endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/evaluations/providers | POST | Create a custom provider. |
| /api/v1/evaluations/providers | GET | List providers. |
| /api/v1/evaluations/providers/{provider_id} | GET | Get a provider with all its benchmarks. |
| /api/v1/evaluations/providers/{provider_id} | PUT | Replace a provider. |
| /api/v1/evaluations/providers/{provider_id} | PATCH | Patch a provider with JSON Patch operations. |
| /api/v1/evaluations/providers/{provider_id} | DELETE | Delete a provider. |
| Provider | Benchmarks | Description |
|---|---|---|
| lm_evaluation_harness | 167 | General-purpose LLM evaluation: MMLU, HellaSwag, ARC, TruthfulQA, GSM8K, and more across 12 categories. |
| garak | 8 | Security vulnerability scanning: OWASP LLM Top 10, AVID taxonomy, CWE. |
| guidellm | 7 | Guidance language model evaluation. |
| lighteval | 24 | Lightweight evaluation framework. |
5.20.3. Collection endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/evaluations/collections | POST | Create a benchmark collection. |
| /api/v1/evaluations/collections | GET | List collections with filtering. |
| /api/v1/evaluations/collections/{collection_id} | GET | Get a collection with all benchmark references. |
| /api/v1/evaluations/collections/{collection_id} | PUT | Replace a collection. |
| /api/v1/evaluations/collections/{collection_id} | PATCH | Patch a collection with JSON Patch operations. |
| /api/v1/evaluations/collections/{collection_id} | DELETE | Delete a collection. |
5.20.4. Health and observability endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/health | GET | Health check with status, timestamp, and build information. |
| /metrics | GET | Prometheus metrics endpoint when enabled. |
| /openapi.yaml | GET | OpenAPI 3.1.0 specification in YAML or JSON based on the Accept header. |
| /docs | GET | Interactive Swagger UI documentation. |
5.21. EvalHub configuration reference
Configuration applies to the EvalHub server component. EvalHub is configured by using config/config.yaml and environment variables. Environment variables take precedence over the configuration file.
When deploying EvalHub with the TrustyAI Operator, the operator generates the config.yaml automatically from the EvalHub custom resource and environment variables defined in the spec.env field. You do not need to create or edit config.yaml directly. For information about configuring the EvalHub custom resource, see Section 5.3, “Deploy EvalHub with the TrustyAI Operator”.
5.21.1. Service configuration
| Parameter | Environment variable | Default | Description |
|---|---|---|---|
| — | — | — | The port that the API server listens on. |
| — | — | — | The address that the API server binds to. |
| — | — | — | Path to the TLS certificate file. |
| — | — | — | Path to the TLS private key file. |
| — | — | — | Disables authentication and authorization. |
5.21.2. Database configuration
When deploying EvalHub with the TrustyAI Operator, you must set spec.database.type in the EvalHub custom resource to either postgresql or sqlite. The operator generates the corresponding configuration automatically. The postgresql option sets the driver to pgx and injects the connection URL from a Kubernetes Secret. The sqlite option sets the driver to sqlite with an in-memory database. Data is not persisted across restarts with sqlite. Use postgresql for production deployments.
The following table describes the parameters available in the EvalHub config/config.yaml configuration file.
| Parameter | Environment variable | Default | Description |
|---|---|---|---|
| — | — | — | The storage driver. Supported values: pgx and sqlite. |
| — | — | — | The database connection string. The default value is a SQLite in-memory URI, which stores all data in memory and does not persist across restarts. For PostgreSQL, use the format postgres://user:password@host:port/dbname. |
5.21.3. MLflow configuration
| Parameter | Environment variable | Default | Description |
|---|---|---|---|
| — | MLFLOW_TRACKING_URI | — | The URL of the MLflow tracking server. Setting this parameter enables MLflow integration. When set, evaluation results are logged to MLflow. Without this parameter, MLflow tracking is disabled. |
| — | — | — | The path to a TLS CA certificate file for verifying the MLflow server’s certificate. |
| — | — | — | — |
| — | — | — | The path to a file containing an authentication token for the MLflow server. The token is sent as a Bearer token in the Authorization header. |
| — | — | — | The MLflow workspace or experiment namespace. |
5.21.4. OpenTelemetry configuration
When deploying with the TrustyAI Operator, include the otel field in the EvalHub custom resource; its presence enables OpenTelemetry automatically.
| CR field | Default | Description |
|---|---|---|
| — | — | The exporter type. |
| — | — | The endpoint for the OTLP exporter. |
| — | — | — |
| — | — | Trace sampling ratio as a value between 0 and 1. |
5.22. EvalHub multi-tenancy and RBAC
EvalHub supports namespace-based multi-tenancy, where each Kubernetes namespace represents a tenant. EvalHub enforces isolation at multiple layers, including authentication, authorization, data access, and job execution.
EvalHub enforces isolation at the following layers:
- Authentication — EvalHub uses the Kubernetes TokenReview API to validate bearer tokens in incoming requests.
- Authorization — SubjectAccessReview (SAR) checks verify that the caller has permission to perform the requested operation on EvalHub virtual resources in the target namespace. Virtual resources are logical resource names that EvalHub defines for RBAC purposes under the trustyai.opendatahub.io API group. They do not correspond to Kubernetes custom resource definitions. The virtual resources are evaluations, collections, providers, and status-events. For the full list of verbs, see Section 5.25, “EvalHub roles reference”.
- Data isolation — EvalHub scopes all database queries by tenant_id to prevent cross-tenant data access.
The X-Tenant request header determines the target tenant namespace. The X-User header identifies the authenticated user.
5.23. Set up a tenant namespace
Register a namespace as an EvalHub tenant so that users, programmatic clients, and agents can submit evaluation jobs in that namespace.
Prerequisites
- You have cluster administrator privileges.
- You have a running EvalHub instance.
- You have a namespace to use as a tenant.
Procedure
Add the tenant label to the namespace:
$ oc label namespace <namespace> evalhub.trustyai.opendatahub.io/tenant=

The label value is intentionally empty. The TrustyAI Operator checks for the presence of the label, not its value.

Note: Use a dedicated namespace for EvalHub rather than redhat-ods-applications, as described in Section 5.3, “Deploy EvalHub with the TrustyAI Operator”. The redhat-ods-applications namespace has NetworkPolicy resources that restrict cross-namespace traffic, which requires additional labeling on tenant namespaces. If EvalHub is deployed in redhat-ods-applications, label each tenant namespace to allow the evaluation Job sidecar to communicate with the EvalHub server:

$ oc label namespace <namespace> opendatahub.io/generated-namespace=true

Review the NetworkPolicy resources with oc get networkpolicy -n <evalhub-server-namespace> to determine any additional requirements.
The TrustyAI Operator watches for this label and automatically provisions the following resources in the labeled namespace:
- A job ServiceAccount used by evaluation Job pods as their identity.
- A Role and RoleBinding granting the job ServiceAccount permission to create status-events for reporting job progress.
- A RoleBinding granting the EvalHub API ServiceAccount permission to create and delete Job resources in the tenant namespace.
- A RoleBinding granting the EvalHub API ServiceAccount permission to manage ConfigMap resources used to mount job specifications into Job pods.
- A RoleBinding granting the job ServiceAccount access to MLflow resources when MLflow is configured.
- A service CA ConfigMap with the cluster CA bundle injected by OpenShift, so that Job pods can make HTTPS requests to the EvalHub API.
When the tenant label is removed from a namespace, the controller cleans up all provisioned resources automatically.
Verification
Confirm that the tenant label is set on the namespace:
$ oc get namespace <namespace> --show-labels | grep evalhub

Confirm that the operator provisioned the expected resources in the tenant namespace:

$ oc get serviceaccount,rolebinding,configmap -n <namespace> | grep evalhub

The output should include a ServiceAccount, RoleBinding resources, and a service CA ConfigMap created by the operator.
5.24. Grant access to EvalHub
Grant tenant users access to EvalHub by creating a Role and RoleBinding in the tenant namespace. EvalHub supports three types of principals.
Prerequisites
- You have permissions to create Role and RoleBinding resources in the tenant namespace.
- You have impersonation privileges to verify access with oc auth can-i --as.
- You have set up the target namespace as an EvalHub tenant.
- You have identified which virtual resources and verbs to grant. See Section 5.25, “EvalHub roles reference” for available resources.
Procedure
Select the type of principal that matches your use case.
| Principal type | Token source | Use case |
|---|---|---|
| ServiceAccount | Mounted pod token or long-lived token | Automation, CI/CD pipelines, agents using Model Context Protocol (MCP) |
| OpenShift User | User token, for example from oc whoami -t | Interactive use |
| OpenShift Group | User token with group membership | Team-based access |
Create a Role in the tenant namespace that grants access to the required EvalHub virtual resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: evalhub-evaluator
  namespace: <namespace>
rules:
  - apiGroups: ["trustyai.opendatahub.io"]
    resources: ["evaluations", "collections", "providers"]
    verbs: ["get", "list", "create", "update", "delete"]
  - apiGroups: ["mlflow.kubeflow.org"]
    resources: ["experiments"]
    verbs: ["create", "get"]

$ oc apply -f evalhub-evaluator-role.yaml

Create a RoleBinding to bind the principal to the Role.

To grant access to a ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-sa-evalhub-access
  namespace: <namespace>
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: <namespace>
roleRef:
  kind: Role
  name: evalhub-evaluator
  apiGroup: rbac.authorization.k8s.io

$ oc apply -f my-sa-evalhub-access.yaml

To obtain a bearer token for a ServiceAccount, run the following command:

$ export TOKEN=$(oc create token my-sa -n <namespace> --duration=1h)

To grant access to a User:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-evalhub-access
  namespace: <namespace>
subjects:
  - kind: User
    name: <username>
roleRef:
  kind: Role
  name: evalhub-evaluator
  apiGroup: rbac.authorization.k8s.io

$ oc apply -f user-evalhub-access.yaml

To obtain a bearer token for an OpenShift User, log in as the user and run the following command:

$ export TOKEN=$(oc whoami -t)

To grant access to a Group:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-evalhub-access
  namespace: <namespace>
subjects:
  - kind: Group
    name: evalhub-users
roleRef:
  kind: Role
  name: evalhub-evaluator
  apiGroup: rbac.authorization.k8s.io

$ oc apply -f team-evalhub-access.yaml

To obtain a bearer token for a Group member, log in as a user who belongs to the group and run the following command:
$ export TOKEN=$(oc whoami -t)
Verification
Verify that the principal has the expected permissions on the EvalHub virtual resources by using oc auth can-i.
For a ServiceAccount:

$ oc auth can-i create evaluations.trustyai.opendatahub.io \
  -n <namespace> \
  --as=system:serviceaccount:<namespace>:my-sa

For an OpenShift User:

$ oc auth can-i create evaluations.trustyai.opendatahub.io \
  -n <namespace> \
  --as=<username>

For an OpenShift Group:

$ oc auth can-i create evaluations.trustyai.opendatahub.io \
  -n <namespace> \
  --as=<username> --as-group=evalhub-users
Each command should return yes.
5.25. EvalHub roles reference
EvalHub uses virtual Kubernetes resources for tenant authorization. These resources do not correspond to actual Kubernetes API resources. EvalHub performs SubjectAccessReview (SAR) checks against these resources in the tenant namespace specified by the X-Tenant header.
To authorize tenant users, create a Role in the tenant namespace granting the required verbs on these virtual resources. For instructions, see Section 5.24, “Grant access to EvalHub”.
| API group | Resource | Verbs | Description |
|---|---|---|---|
| trustyai.opendatahub.io | evaluations | get, list, create, update, delete | Submit, view, update, and delete evaluation jobs. |
| trustyai.opendatahub.io | collections | get, list, create, update, delete | Create, view, update, and delete benchmark collections. |
| trustyai.opendatahub.io | providers | get, list, create, update, delete | Create, view, update, and delete evaluation providers. |
| trustyai.opendatahub.io | status-events | create | Report job progress. Used by operator-provisioned job ServiceAccounts, not by tenant users. |
| mlflow.kubeflow.org | experiments | create, get | Create and access MLflow experiments for result tracking. |
5.26. Additional resources
The following resources provide additional information about EvalHub.
- EvalHub documentation site
- Server API reference — REST API endpoints and configuration
- Python SDK reference — Client library documentation
- CLI reference — Command-line interface guide
- Architecture guide — Adapter pattern and adapter development
- Multi-tenancy guide — Detailed RBAC and tenant configuration