Enabling AI safety with Guardrails
Ensure safety in your OpenShift AI models
Abstract
Chapter 1. Enabling AI safety with Guardrails Copy linkLink copied to clipboard!
The TrustyAI Guardrails Orchestrator service is a tool to invoke detections on text generation inputs and outputs, as well as standalone detections.
It is underpinned by the open-source project FMS-Guardrails Orchestrator from IBM. You can deploy the Guardrails Orchestrator service through a Custom Resource Definition (CRD) that is managed by the TrustyAI Operator.
The following sections describe the Guardrails components, how to deploy them and provide example use cases of how to protect your AI applications using these tools:
- Understanding detectors
Explore the available detector types in the Guardrails framework. Currently supported detectors are:
- The built-in detector: Out-of-the-box guardrailing algorithms for quick setup and easy experimentation.
- Hugging Face detectors: Text classification models for guardrailing, such as ibm-granite/granite-guardian-hap-38m or any other text classifier from Hugging Face.
- Configuring the Orchestrator
- Configure the Orchestrator to communicate with available detectors and your generation model.
- Configuring the Guardrails Gateway
- Define preset guardrail pipelines with corresponding unique endpoints.
- Deploying the Orchestrator
- Create a Guardrails Orchestrator to begin securing your Large Language Model (LLM) deployments.
- Automatically configuring Guardrails using
AutoConfig - Automatically configure Guardrails based on available resources in your namespace.
- Monitoring user-inputs to your LLM
- Enable a safer LLM by filtering hateful, profane, or toxic inputs.
- Enabling the OpenTelemetry exporter for metrics and tracing
- Provide observability for the security and governance mechanisms of AI applications.
1.1. Understanding detectors Copy linkLink copied to clipboard!
The Guardrails framework uses "detector" servers to contain guardrailing logic. Any server that provides the IBM /detectors API is compatible with the Guardrails framework. The main endpoint for a detector server is the /api/v1/text/contents, and the payload looks like the following:
1.1.1. Built-in Detector Copy linkLink copied to clipboard!
The Guardrails framework provides a set of “built-in” detectors out-of-the-box, which provides a number of detection algorithms. The built-in detector currently provides the following algorithms:
regex-
us-social-security-number- detect US social security numbers -
credit-card- detect credit card numbers -
email- detect email addresses -
ipv4- detect IPv4 addresses -
ipv6- detect IP6 addresses -
us-phone-number- detect US phone numbers -
uk-post-code- detect UK post codes -
$CUSTOM_REGEX- use a custom regex to define your own detector
-
file_type-
json- detect valid JSON -
xml- detect valid XML -
yaml- detect valid YAML -
json-with-schema:$SCHEMA- detect whether the text content satisfies a provided JSON schema. To specify a schema, replace $SCHEMA with a JSON schema -
xml-with-schema:$SCHEMA- detect whether the text content satisfies a provided XML schema. To specify a schema, replace $SCHEMA with an XML Schema Definition (XSD) -
yaml-with-schema:$SCHEMA- detect whether the text content satisfies a provided XML schema. To specify a schema, replace $SCHEMA with a JSON schema (not a YAML schema)
-
customDeveloper preview
Custom detectors defined via a
custom_detectors.pyfile.The detector algorithm can be chosen with
detector_params, by first choosing the top-level taxonomy (e.g.,regexorfile_type) and then providing a list of the desired algorithms from within that category. In the following example, both thecredit-cardandemailalgorithms are run against the provided message:
1.1.2. The Hugging Face Detector serving runtime Copy linkLink copied to clipboard!
To use Hugging Face AutoModelsForSequenceClassification as detectors within the Guardrails Orchestrator, you need to first configure a Hugging Face serving runtime.
The guardrails-detector-huggingface-runtime is a KServe serving runtime for Hugging Face predictive text models. This allows models such as the ibm-granite/granite-guardian-hap-38m to be used within the TrustyAI Guardrails ecosystem.
Example custom serving runtime
This YAML file contains an example of a custom serving Huggingface runtime:
The above serving runtime example matches the default template used with Red Hat OpenShift AI, and should suffice for the majority of use-cases. The main relevant configuration parameter is the SAFE_LABELS environment variable. This specifies which prediction label or labels from the AutoModelForSequenceClassification constitute a "safe" response and therefore should not trigger guardrailing. For example, if [0, 1] is specified as SAFE_LABELS for a four-class model, a predicted label of 0 or 1 is considered "safe", while a predicted label of 2 or 3 triggers guardrailing. The default value is [0].
1.1.2.1. Guardrails Detector Hugging Face serving runtime configuration values Copy linkLink copied to clipboard!
| Property | Value |
|---|---|
| Template Name |
|
| Runtime Name |
|
| Display Name |
|
| Model Format |
|
| Component | Configuration | Value |
|---|---|---|
| Server | uvicorn |
|
| Port | Container |
|
| Metrics Port | Prometheus |
|
| Metrics Path | Prometheus |
|
| Log Config | Path |
|
| Parameter | Default | Description |
|---|---|---|
|
| - | Container image (required) |
|
|
| Model mount path |
|
|
| HuggingFace cache |
|
|
| A JSON-formatted list |
|
|
| Number of Uvicorn workers |
|
|
| Server bind address |
|
|
| Server port |
| Endpoint | Method | Description | Content-Type | Headers |
|---|---|---|---|---|
|
| GET | Health check endpoint |
|
|
|
| POST | Content detection endpoint |
|
3 types: * |
1.2. Orchestrator Configuration Parameters Copy linkLink copied to clipboard!
The first step in deploying the Guardrails framework is to first define your Orchestrator configuration with a Config Map. This serves as a registry of the components in the system, namely by specifying the model-to-be-guardrailed and the available detector servers.
Here is an example version of an Orchestrator configuration file:
Example orchestrator_configmap.yaml
| Parameter | Description |
|---|---|
|
|
Describes the generation model to be guardrailed. Requires a |
|
| A service configuration. Throughout the Orchestrator config, all external services are described using the service configuration, which contains the following fields:
|
|
|
The
|
|
| Each key in the detector section defines the name of the detector server. This can be any string, but you’ll need to reference these names later, so pick memorable and descriptive names. |
|
|
The
|
|
| Defines which headers from your requests to the Guardrails Orchestrator get sent onwards to the various services specified in this configuration. If you want to ensure that the Orchestrator can talk to authenticated services, include "authorization" and "content-type" in your passthrough header list. |
1.3. Guardrails Gateway Config Parameters Copy linkLink copied to clipboard!
The Guardrails gateway provides a mechanism for defining preset detector pipelines and creating a unique, endpoint-per-pipeline preset. To use the Guardrails gateway, create a Guardrails Gateway configuration with a Config Map.
+ .Example gateway_configmap.yaml
| Parameter | Description |
|---|---|
|
| The list of detector servers and parameters to use inside your Guardrails Gateway presets. The following fields are available:
|
|
| Define Guardrail pipeline presets according to combinations of available detectors. Each preset route requires the following fields:
|
In the routes presets configuration, each input and output detector in the detectors list must use a unique server. For example, if we have the following detectors, the routes preset configuration is invalid because it uses two input: true detectors from the serverA server:
routes:
- name: route1
detectors:
- detector1
- detector2
routes:
- name: route1
detectors:
- detector1
- detector2
However, the following routes preset configuration is valid, because while both detectors use serverA, detector1 is only an input detector, while detector3 is only an output detector, and therefore does not conflict:
routes:
- name: route1
detectors:
- detector1
- detector3
routes:
- name: route1
detectors:
- detector1
- detector3
The following routes preset is also valid, because, while two input detectors from serverA are used, they are not used in the same route preset:
1.4. Deploying the Guardrails Orchestrator Copy linkLink copied to clipboard!
You can deploy a Guardrails Orchestrator instance in your namespace to monitor elements, such as user inputs to your Large Language Model (LLM).
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
You are familiar with how to create a
configMapfor monitoring a user-defined workflow. You perform similar steps in this procedure. See Understanding config maps. -
You have configured KServe to use
RawDeploymentmode. For more information, see Deploying models on the single-model serving platform. -
You have the TrustyAI component in your OpenShift AI
DataScienceClusterset toManaged. - You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace.
Deploy your Orchestrator config map:
oc apply -f <ORCHESTRATOR CONFIGMAP>.yaml -n <TEST_NAMESPACE>
$ oc apply -f <ORCHESTRATOR CONFIGMAP>.yaml -n <TEST_NAMESPACE>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: Deploy your Guardrails gateway config map:
oc apply -f <GUARDRAILS GATEWAY CONFIGMAP>.yaml -n <TEST_NAMESPACE>
$ oc apply -f <GUARDRAILS GATEWAY CONFIGMAP>.yaml -n <TEST_NAMESPACE>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a Guardrails Orchestrator custom resource. Make sure that the
orchestratorConfigandguardrailsGatewayConfigmatch the names of the resources you created in steps 1 and 2.Example
orchestrator_cr.yamlCRCopy to Clipboard Copied! Toggle word wrap Toggle overflow If desired, the TrustyAI controller can automatically generate an
orchestratorConfigandguardrailsGatewayConfigbased on the available resources in your namespace. To access this, include theautoConfigparameter inside your Custom Resource, and see Auto Configuring Guardrails for documentation on its usage.Expand Table 1.6. Parameters from example orchestrator_cr.yaml CR Parameter Description orchestratorConfig(optional)The name of the
ConfigMapobject that contains generator, detector, and chunker arguments. If usingautoConfig, this field can be omitted.guardrailsGatewayConfig(optional)The name of the ConfigMap object that specifies gateway configurations. This field can be omitted if you are not using the Guardrails Gateway or are using
autoConfig.customDetectorsConfig(optional)This feature is in development preview.
autoConfig(optional)A list of paired name and value arguments to define how the Guardrails AutoConfig. Any manually-specified configuration files in
orchestratorConfigorguardrailsGatewayConfigtakes precedence over the automatically-generated configuration files.-
inferenceServiceToGuardrail- The name of the inference service you want to guardrail. This should exactly match the model name provided when deploying the model. For a list of valid names, you can runoc get isvc -n $NAMESPACE detectorServiceLabelToMatch- A string label to use when searching for available detector servers. All inference services in your namespace with the label$detectorServiceLabelToMatch: trueis automatically configured as a detector.See Auto Configuring Guardrails for more information.
enableBuiltInDetectors(optional)A boolean value to inject the built-in detector sidecar container into the Orchestrator pod. The built-in detector is a lightweight HTTP server containing a number of available guardrailing algorithms.
enableGuardrailsGateway(optional)A boolean value to enable controlled interaction with the Orchestrator service by enforcing stricter access to its exposed endpoints. It provides a mechanism of configuring detector pipelines, and then provides a unique
/v1/chat/completionsendpoint per configured detector pipeline.otelExporter(optional)A list of paired name and value arguments for configuring OpenTelemetry traces or metrics, or both:
-
otlpProtocol- Sets the protocol for all the OpenTelemetry protocol (OTLP) endpoints. Valid values aregrpc(default) orhttp -
otlpTracesEndpoint- Sets the OTLP endpoint. Default values arelocalhost:4317forgrpcandlocalhost:4318forhttp -
otlpMetricsEndpoint- Overrides the default OTLP metrics endpoint -
enableTraces- Whether to enable tracing data export, default false -
enableMetrics- Whether to enable metrics data export, default false
logLevel(optional)The log level to be used in the Guardrails Orchestrator- available values are
Error,Warn,Info(default),Debug, andTrace.tlsSecrets(optional)A list of names of
Secretobjects to mount to the Guardrails Orchestrator container. All secrets provided here are mounted into the directory/etc/tls/$SECRET_NAMEfor use in your Orchestrator config TLS configuration. Each secret should contain atls.crtand atls.keyfield.replicasThe number of Orchestrator pods to create.
-
Deploy the Orchestrator CR, which creates a service account, deployment, service, and route object in your namespace:
oc apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE>
oc apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Confirm that the Orchestrator and LLM pods are running:
oc get pods -n <TEST_NAMESPACE>
$ oc get pods -n <TEST_NAMESPACE>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
NAME READY STATUS RESTARTS AGE guardrails-orchestrator-sample 3/3 Running 0 3h53m
NAME READY STATUS RESTARTS AGE guardrails-orchestrator-sample 3/3 Running 0 3h53mCopy to Clipboard Copied! Toggle word wrap Toggle overflow Query the
/healthendpoint of the Orchestrator route to check the current status of the detector and generator services. If a200 OKresponse is returned, the services are functioning normally:GORCH_ROUTE_HEALTH=$(oc get routes guardrails-orchestrator-sample-health -o jsonpath='{.spec.host}' -n <TEST_NAMESPACE)$ GORCH_ROUTE_HEALTH=$(oc get routes guardrails-orchestrator-sample-health -o jsonpath='{.spec.host}' -n <TEST_NAMESPACE)Copy to Clipboard Copied! Toggle word wrap Toggle overflow curl -v https://$GORCH_ROUTE_HEALTH/health
$ curl -v https://$GORCH_ROUTE_HEALTH/healthCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
1.5. Auto-configuring Guardrails Copy linkLink copied to clipboard!
Auto-configuration simplifies the Guardrails setup process by automatically identifying available detector servers in your namespace, handling TLS configuration, and generating configuration files for a Guardrails Orchestrator deployment. For example, if any of the detectors or generation services use HTTPS, their credentials are automatically discovered, mounted, and used. Additionally, the Orchestrator is automatically configured to forward all necessary authentication token headers.
Prerequisites
-
Each detector service you intend to use has an OpenShift label applied in the resource metadata. For example,
metadata.labels.<label_name>: 'true'. Choose a descriptive name for the label as it is required for auto-configuration. - You have set up the inference service to which you intend to apply Guardrails.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
Procedure
Create a
GuardrailsOrchestratorCR with theautoConfigconfiguration. For example, create a YAML file namedguardrails_orchestrator_auto_cr.yamlwith the following contents:Example
guardrails_orchestrator_auto_cr.yamlCRCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
inferenceServiceToGuardrail: Specifies the name of the vLLM inference service to protect with Guardrails. detectorServiceLabelToMatch: Specifies the label that you applied to each of your detector servers in themetadata.labelsspecification for the detector. The Guardrails OrchestratorConfigMapautomatically updates to reflect detectors in your namespace that match the label set in thedetectorServiceLabelToMatchfield.If
enableGuardrailsGatewayis true, a template Guardrails gateway config called<ORCHESTRATOR_NAME>-gateway-auto-configis generated. You can modify this file to tailor your Guardrails Gateway setup as desired. The Guardrails Orchestrator automatically deploys when changes are detected. Once modified, the labeltrustyai/has-diverged-from-auto-configis applied. To revert the file back to the auto-generated starting point, simply delete it and the original auto-generated file is recreated.If
enableBuiltInDetectorsis true, the built-in detector server is automatically added to your Orchestrator configuration under the samebuilt-in-detector, and a sample configuration is included in the auto-generated Guardrails gateway config if desired.
-
Deploy the Orchestrator custom resource. This step creates a service account, deployment, service, and route object in your namespace.
oc apply -f guardrails_orchestrator_auto_cr.yaml -n <your_namespace>
oc apply -f guardrails_orchestrator_auto_cr.yaml -n <your_namespace>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
You can verify that the GuardrailsOrchestrator CR and corresponding automatically-generated configuration objects were successfully created in your namespace by running the following commands:
Confirm that the
GuardrailsOrchestratorCR was created:oc get guardrailsorchestrator -n <your_namespace>
$ oc get guardrailsorchestrator -n <your_namespace>Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the automatically generated Guardrails Orchestrator
ConfigMaps:oc get configmap -n <your_namespace> | grep auto-config
$ oc get configmap -n <your_namespace> | grep auto-configCopy to Clipboard Copied! Toggle word wrap Toggle overflow You can then view the automatically generated configmap:
oc get configmap/<auto-generated config map name> -n <your_namespace> -o yaml
$ oc get configmap/<auto-generated config map name> -n <your_namespace> -o yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
1.6. Configuring the OpenTelemetry exporter Copy linkLink copied to clipboard!
You can configure the OpenTelemetry exporter to collect traces and metrics from the GuardrailsOrchestrator service. This enables you to monitor and observe the service behavior in your environment.
Prerequisites
- You have installed the Tempo Operator from the OperatorHub.
- You have installed the Red Hat build of OpenTelemetry from the OperatorHub.
Procedure
Enable user workload monitoring to observe telemetry data in OpenShift:
oc -n openshift-monitoring patch configmap cluster-monitoring-config --type merge -p '{"data":{"config.yaml":"enableUserWorkload: true\n"}}'$ oc -n openshift-monitoring patch configmap cluster-monitoring-config --type merge -p '{"data":{"config.yaml":"enableUserWorkload: true\n"}}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy a MinIO instance to serve as the storage backend for Tempo:
Create a YAML file named
minio.yamlwith the following content:Example
minio.yamlconfigurationCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the MinIO configuration:
oc apply -f minio.yaml
$ oc apply -f minio.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the MinIO pod is running:
oc get pods -l app=minio
$ oc get pods -l app=minioCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE minio-5f8c9d7b6d-abc12 1/1 Running 0 30s
NAME READY STATUS RESTARTS AGE minio-5f8c9d7b6d-abc12 1/1 Running 0 30sCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a TempoStack instance:
Create a secret for MinIO credentials:
oc create secret generic tempo-s3-secret \ --from-literal=endpoint=http://minio:9000 \ --from-literal=bucket=tempo \ --from-literal=access_key_id=minio \ --from-literal=access_key_secret=minio123
$ oc create secret generic tempo-s3-secret \ --from-literal=endpoint=http://minio:9000 \ --from-literal=bucket=tempo \ --from-literal=access_key_id=minio \ --from-literal=access_key_secret=minio123Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a bucket in MinIO for Tempo storage:
oc run -i --tty --rm minio-client --image=quay.io/minio/mc:latest --restart=Never -- \ sh -c "mc alias set minio http://minio:9000 minio minio123 && mc mb minio/tempo"
$ oc run -i --tty --rm minio-client --image=quay.io/minio/mc:latest --restart=Never -- \ sh -c "mc alias set minio http://minio:9000 minio minio123 && mc mb minio/tempo"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a YAML file named
tempo.yamlwith the following content:Example
tempo.yamlconfigurationCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the Tempo configuration:
oc apply -f tempo.yaml
$ oc apply -f tempo.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the TempoStack pods are running:
oc get pods -l app.kubernetes.io/instance=<tempo_stack_name>
$ oc get pods -l app.kubernetes.io/instance=<tempo_stack_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Configure the OpenTelemetry instance to send telemetry data to the Tempo distributor:
Create a YAML file named
opentelemetry.yamlwith the following content:Example
opentelemetry.yamlconfigurationCopy to Clipboard Copied! Toggle word wrap Toggle overflow The OpenTelemetry collector configuration defines the Tempo distributor and Prometheus services as exporters, which means that the OpenTelemetry collector sends telemetry data to these backends.
Apply the OpenTelemetry configuration:
oc apply -f opentelemetry.yaml
$ oc apply -f opentelemetry.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the OpenTelemetry collector pod is running:
oc get pods -l app.kubernetes.io/name=<otelcol_name>-collector
$ oc get pods -l app.kubernetes.io/name=<otelcol_name>-collectorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE <otelcol_name>-collector-7d9c8f5b6d-abc12 1/1 Running 0 45s
NAME READY STATUS RESTARTS AGE <otelcol_name>-collector-7d9c8f5b6d-abc12 1/1 Running 0 45sCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Define a
GuardrailsOrchestratorcustom resource object to specify theotelExporterconfigurations in a YAML file namedorchestrator_otel_cr.yaml:Example
orchestrator_otel_cr.yamlobject with OpenTelemetry configuredCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
orchestratorConfig: This references the config map that you created when deploying the Guardrails Orchestrator service. -
otlpProtocol: The protocol for sending traces and metrics data. Valid values aregrpcorhttp. -
otlpTracesEndpoint: The hostname and port for exporting trace data to the OpenTelemetry collector. -
otlpMetricsEndpoint: The hostname and port for exporting metrics data to the OpenTelemetry collector. -
enableMetrics: Set totrueto enable exporting metrics data. -
enableTracing: Set totrueto enable exporting trace data.
-
Deploy the orchestrator custom resource:
oc apply -f orchestrator_otel_cr.yaml
$ oc apply -f orchestrator_otel_cr.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Send a request to the guardrails service and verify your OpenTelemetry configuration.
Observe traces using the Jaeger UI:
Access the Jaeger UI by port-forwarding the Tempo traces service:
oc port-forward svc/tempo-<tempo_stack_name>-query-frontend 16686:16686
$ oc port-forward svc/tempo-<tempo_stack_name>-query-frontend 16686:16686Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
In a separate browser window, navigate to
http://localhost:16686. - Under Service, select fms_guardrails_orchestr8 and click Find Traces.
Observe metrics using the OpenShift Metrics UI:
In the Administrator perspective within the OpenShift web console, select Observe > Metrics and query one of the following metrics:
-
incoming_request_count -
success_request_count -
server_error_response_count -
client_response_count -
client_request_duration
-
Chapter 2. Using Guardrails for AI safety Copy linkLink copied to clipboard!
Use the Guardrails tools to ensure the safety and security of your generative AI applications in production.
2.1. Detecting PII and sensitive data Copy linkLink copied to clipboard!
Protect user privacy by identifying and filtering personally identifiable information (PII) in LLM inputs and outputs using built-in regex detectors or custom detection models.
2.2. Detecting personally identifiable information (PII) by using Guardrails with Llama Stack Copy linkLink copied to clipboard!
The trustyai_fms Orchestrator server is an external provider for Llama Stack that allows you to configure and use the Guardrails Orchestrator and compatible detection models through the Llama Stack API. This implementation of Llama Stack combines Guardrails Orchestrator with a suite of community-developed detectors to provide robust content filtering and safety monitoring.
This example demonstrates how to use the built-in Guardrails Regex Detector to detect personally identifiable information (PII) with Guardrails Orchestrator as Llama Stack safety guardrails, using the LlamaStack Operator to deploy a distribution in your Red Hat OpenShift AI namespace.
Guardrails Orchestrator with Llama Stack is not supported on s390x, as it requires the LlamaStack Operator, which is currently unavailable for this architecture.
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace.
A cluster administrator has installed the following Operators in OpenShift:
- Red Hat Authorino Operator, version 1.2.1 or later
- Red Hat OpenShift Service Mesh, version 2.6.7-0 or later
Procedure
Configure your OpenShift AI environment with the following configurations in the
DataScienceCluster. Note that you must manually update thespec.llamastack.managementStatefield toManaged:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project in your OpenShift AI namespace:
PROJECT_NAME="lls-minimal-example" oc new-project $PROJECT_NAME
PROJECT_NAME="lls-minimal-example" oc new-project $PROJECT_NAMECopy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy the Guardrails Orchestrator with regex detectors by applying the Orchestrator configuration for regex-based PII detection:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the same namespace, create a Llama Stack distribution:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
— After deploying the LlamaStackDistribution CR, a new pod is created in the same namespace. This pod runs the LlamaStack server for your distribution. —
-
Once the Llama Stack server is running, use the
/v1/shieldsendpoint to dynamically register a shield. For example, register a shield that uses regex patterns to detect personally identifiable information (PII). Open a port-forward to access it locally:
oc -n $PROJECT_NAME port-forward svc/llama-stack 8321:8321
oc -n $PROJECT_NAME port-forward svc/llama-stack 8321:8321Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
/v1/shieldsendpoint to dynamically register a shield. For example, register a shield that uses regex patterns to detect personally identifiable information (PII):Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the shield was registered:
curl -s http://localhost:8321/v1/shields | jq '.'
curl -s http://localhost:8321/v1/shields | jq '.'Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following output indicates that the shield has been registered successfully:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Once the shield has been registered, verify that it is working by sending a message containing PII to the
/v1/safety/run-shieldendpoint:Email detection example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This should return a response indicating that the email was detected:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Social security number (SSN) detection example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This should return a response indicating that the SSN was detected:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Credit card detection example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This should return a response indicating that the credit card number was detected:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.3. Filtering flagged content by sending requests to the regex detector Copy linkLink copied to clipboard!
You can use the Guardrails Orchestrator API to send requests to the regex detector. The regex detector filters conversations by flagging content that matches specified regular expression patterns.
Prerequisites
You have deployed a Guardrails Orchestrator with the built-in-detector server, such as in the following example:
Example guardrails_orchestrator_auto_cr.yaml CR
Procedure
Send a request to the built-in detector that you configured. The following example sends a request to a regex detector named
regexto flag personally identifying information.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4. Securing prompts Copy linkLink copied to clipboard!
Prevent malicious prompt injection attacks by using specialized detectors to identify and block potentially harmful prompts before they reach your model.
2.5. Mitigating Prompt Injection by using a Hugging Face Prompt Injection detector Copy linkLink copied to clipboard!
These instructions build on the previous HAP scenario example and consider two detectors, HAP and Prompt Injection, deployed as part of the guardrailing system.
The instructions focus on the Hugging Face (HF) Prompt Injection detector, outlining two scenarios:
- Using the Prompt Injection detector with a generative large language model (LLM), deployed as part of the Guardrails Orchestrator service and managed by the TrustyAI Operator, to perform analysis of text input or output of an LLM, using the Orchestrator API.
- Perform standalone detections on text samples using an open-source Detector API.
These examples provided contain sample text that some people may find offensive, as the purpose of the detectors is to demonstrate how to filter out offensive, hateful, or malicious content.
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- You are familiar with how to configure and deploy the Guardrails Orchestrator service. See Deploying the Guardrails Orchestrator
-
You have the TrustyAI component in your OpenShift AI
DataScienceClusterset toManaged. - You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace, to follow the Orchestrator API example.
Scenario 1: Using a Prompt Injection detector with a generative large language model
Create a new project in Openshift using the CLI:
oc new-project detector-demo
oc new-project detector-demoCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create
service_account.yaml:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply
service_account.yamlto create the service account:oc apply -f service_account.yaml
oc apply -f service_account.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
prompt_injection_detector.yaml. In the following code example, replace <your_rhoai_version> with your OpenShift AI version (for example, v2.25). This feature requires OpenShift AI version 2.25 or later.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply
prompt_injection_detector.yamlto configure a serving runtime, inference service, and route for the Prompt Injection detector you want to incorporate in your Guardrails orchestration service:oc apply -f prompt_injection_detector.yaml
oc apply -f prompt_injection_detector.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create
hap_detector.yaml:Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
image: Replace<your_rhoai_version>with your OpenShift AI version (for example,v2.25). This feature requires OpenShift AI version 2.25 or later.
-
Apply
hap_detector.yamlto configure a serving runtime, inference service, and route for the HAP detector:oc apply -f hap_detector.yaml
$ oc apply -f hap_detector.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteFor more information about configuring the HAP detector and deploying a text generation LLM, see the TrustyAI LLM demos.
Add the detector to the
ConfigMapin the Guardrails Orchestrator:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe built-in detectors have been switched off by setting the
enableBuiltInDetectorsoption tofalse.Use HAP and Prompt Injection detectors to perform detections on lists of messages comprising a conversation and/or completions from a model:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Within the Orchestrator API, you can use these detectors (HAP and Prompt Injection) to:
- Carry out content filtering for a text generation LLM at the input level, output level, or both.
- Perform standalone detections with the Orchestrator API.
The following images are not supported on arm64, s390x, and ppc64le:
-
quay.io/rgeada/llm_downloader:latest -
quay.io/trustyai/modelmesh-minio-examples:latest -
quay.io/trustyai/guardrails-detector-huggingface-runtime:v0.2.0
As a workaround:
- HAP and Prompt Injection models can be downloaded from Hugging Face, stored in S3-compatible storage, and deployed via the OpenShift AI Dashboard.
-
A compatible image for Hugging Face
ServingRuntimeis available in the OpenShift AI Dashboard under Serving Runtime Templates.
Scenario 2: Using a Prompt Injection detector to perform standalone detections
You can use Prompt Injection detectors to perform standalone detection using a Detector API or the Orchestrator API.
Get the route of your detector:
PROMPT_INJECTION_ROUTE=$(oc get routes prompt-injection-detector-route -o jsonpath='{.spec.host}')PROMPT_INJECTION_ROUTE=$(oc get routes prompt-injection-detector-route -o jsonpath='{.spec.host}')Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the health status of your detector:
curl -s http://$PROMPT_INJECTION_ROUTE/health | jq
curl -s http://$PROMPT_INJECTION_ROUTE/health | jqCopy to Clipboard Copied! Toggle word wrap Toggle overflow This command returns `"ok"` if the detector is functioning correctly.
This command returns `"ok"` if the detector is functioning correctly.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Perform detections using your detector:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following output is displayed:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.6. Moderating and safeguarding content Copy linkLink copied to clipboard!
Filter toxic, hateful, or profane content from user inputs and model outputs to maintain safe and appropriate AI interactions.
2.7. Detecting hateful and profane language Copy linkLink copied to clipboard!
The following example demonstrates how to use Guardrails Orchestrator to monitor user inputs to your LLM, specifically to detect and protect against hateful and profane language (HAP). A comparison query without the detector enabled shows the differences in responses when guardrails is disabled versus enabled.
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- You have deployed the Guardrails Orchestrator and related detectors. For more information, see Deploying the Guardrails Orchestrator
Procedure
Define a
ConfigMapobject in a YAML file to specify the LLM service you wish to guardrail against and the HAP detector service you want to run the guardrails with. For example, create a file namedorchestrator_cm.yamlwith the following content:Example
orchestrator_cm.yamlyamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration to deploy the detector:
oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE>
$ oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve the external HTTP route for the orchestrator:
GORCH_ROUTE=$(oc get routes gorch-test -o jsonpath='{.spec.host}' -n <TEST_NAMESPACE>)GORCH_ROUTE=$(oc get routes gorch-test -o jsonpath='{.spec.host}' -n <TEST_NAMESPACE>)Copy to Clipboard Copied! Toggle word wrap Toggle overflow Query the orchestrator’s
api/v2/chat/completions-detectionsendpoint without the HAP detector enabled to generate a response without guardrails:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
{"id":"cmpl-f6da55d06ade4792a33d4ae67a07cc38","object":"chat.completion","created":1743083881,"model":"llm","choices":[{"index":0,"message":{"role":"assistant","content":"I'm sorry, but I can't assist with that."},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":56,"total_tokens":69,"completion_tokens":13}}{"id":"cmpl-f6da55d06ade4792a33d4ae67a07cc38","object":"chat.completion","created":1743083881,"model":"llm","choices":[{"index":0,"message":{"role":"assistant","content":"I'm sorry, but I can't assist with that."},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":56,"total_tokens":69,"completion_tokens":13}}Copy to Clipboard Copied! Toggle word wrap Toggle overflow When HAP detections are not enabled on model inputs and outputs through the Guardrails Orchestrator, the model generates output without flagging unsuitable inputs.
Query the
api/v2/chat/completions-detectionsendpoint of the orchestrator and enable the HAP detector to generate a response with guardrails:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
{"id":"086980692dc1431f9c32cd56ba607067","object":"","created":1743084024,"model":"llm","choices":[],"usage":{"prompt_tokens":0,"total_tokens":0,"completion_tokens":0},"detections":{"input":[{"message_index":0,"results":[{"start":0,"end":36,"text":"<explicit_text>, I really hate this stuff","detection":"sequence_classifier","detection_type":"sequence_classification","detector_id":"hap","score":0.9634239077568054}]}]},"warnings":[{"type":"UNSUITABLE_INPUT","message":"Unsuitable input detected. Please check the detected entities on your input and try again with the unsuitable input removed."}]}{"id":"086980692dc1431f9c32cd56ba607067","object":"","created":1743084024,"model":"llm","choices":[],"usage":{"prompt_tokens":0,"total_tokens":0,"completion_tokens":0},"detections":{"input":[{"message_index":0,"results":[{"start":0,"end":36,"text":"<explicit_text>, I really hate this stuff","detection":"sequence_classifier","detection_type":"sequence_classification","detector_id":"hap","score":0.9634239077568054}]}]},"warnings":[{"type":"UNSUITABLE_INPUT","message":"Unsuitable input detected. Please check the detected entities on your input and try again with the unsuitable input removed."}]}Copy to Clipboard Copied! Toggle word wrap Toggle overflow When you enable HAP detections on model inputs and outputs via the Guardrails Orchestrator, unsuitable inputs are clearly flagged and model outputs are not generated.
Optional: You can also enable standalone detections on text by querying the
api/v2/text/detection/contentendpoint:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
{"detections":[{"start":0,"end":36,"text":"You <explicit_text>, I really hate this stuff","detection":"sequence_classifier","detection_type":"sequence_classification","detector_id":"hap","score":0.9634239077568054}]}{"detections":[{"start":0,"end":36,"text":"You <explicit_text>, I really hate this stuff","detection":"sequence_classifier","detection_type":"sequence_classification","detector_id":"hap","score":0.9634239077568054}]}Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.8. Enforcing configured safety pipelines for LLM inference by using Guardrails Gateway Copy linkLink copied to clipboard!
The Guardrails Gateway is a sidecar image that you can use with the GuardrailsOrchestrator service. When running your AI application in production, you can use the Guardrails Gateway to enforce a consistent, custom set of safety policies using a preset guardrail pipeline. For example, you can create a preset guardrail pipeline for PII detection and language moderation. You can then send chat completions requests to the preset pipeline endpoints without needing to alter existing inference API calls. It provides the OpenAI v1/chat/completions API and allows you to specify which detectors and endpoints you want to use to access the service.
Prerequisites
- You have configured the Guardrails gateway image.
Procedure
Set up the endpoint for the detectors:
GUARDRAILS_GATEWAY=https://$(oc get routes guardrails-gateway -o jsonpath='{.spec.host}')GUARDRAILS_GATEWAY=https://$(oc get routes guardrails-gateway -o jsonpath='{.spec.host}')Copy to Clipboard Copied! Toggle word wrap Toggle overflow Based on the example configurations provided in Configuring the built-in detector and Guardrails gateway, the available endpoint for the model with Guardrails is
$GUARDRAILS_GATEWAY/pii.Query the model with Guardrails
piiendpoint:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example response
Warning: Unsuitable input detected. Please check the detected entities on your input and try again with the unsuitable input removed. Input Detections: 0) The regex detector flagged the following text: "123-45-6789"
Warning: Unsuitable input detected. Please check the detected entities on your input and try again with the unsuitable input removed. Input Detections: 0) The regex detector flagged the following text: "123-45-6789"Copy to Clipboard Copied! Toggle word wrap Toggle overflow