Chapter 3. Deploying a RAG stack in a project


Important

This feature is currently available in Red Hat OpenShift AI 3.0 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

As an OpenShift cluster administrator, you can deploy a Retrieval-Augmented Generation (RAG) stack in OpenShift AI. This stack provides the infrastructure, including LLM inference, vector storage, and retrieval services that data scientists and AI engineers use to build conversational workflows in their projects.

To deploy the RAG stack in a project, complete the following tasks:

  • Activate the Llama Stack Operator in OpenShift AI.
  • Enable GPU support on the OpenShift cluster. This task includes installing the required NVIDIA Operators.
  • Deploy an inference model, for example, the llama-3.2-3b-instruct model. This task includes creating a storage connection and configuring GPU allocation.
  • Create a LlamaStackDistribution instance to enable RAG functionality. This action deploys LlamaStack alongside a Milvus vector store and connects both components to the inference model.
  • Ingest domain data into Milvus by running Docling in an AI pipeline or Jupyter notebook. This process keeps the embeddings synchronized with the source data.
  • Expose and secure the model endpoints.

3.1. Overview of RAG

Retrieval-augmented generation (RAG) in OpenShift AI enhances large language models (LLMs) by integrating domain-specific data sources directly into the model’s context. Domain-specific data sources can be structured data, such as relational database tables, or unstructured data, such as PDF documents.

RAG indexes content and builds an embedding store that data scientists and AI engineers can query. When data scientists or AI engineers pose a question to a RAG chatbot, the RAG pipeline retrieves the most relevant pieces of data, passes them to the LLM as context, and generates a response that reflects both the prompt and the retrieved content.

By implementing RAG, data scientists and AI engineers can obtain tailored, accurate, and verifiable answers to complex queries based on their own datasets within a project.
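The retrieve-then-generate flow described above can be sketched in a few lines of Python. This is a conceptual illustration only: the word-overlap "embedding" below is a toy stand-in for a real embedding model, and in OpenShift AI the retrieval and generation are handled by the RAG stack, not by code like this.

```python
# Conceptual sketch of a RAG pipeline: retrieve the most relevant text,
# then pass it to the LLM as context alongside the user's question.
# The "embedding" here is a toy word-frequency vector; real deployments
# use an embedding model and a vector database.

def embed(text):
    """Toy embedding: lowercase word frequencies."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def similarity(a, b):
    """Overlap score between two toy embeddings."""
    return sum(a[w] * b.get(w, 0) for w in a)

def retrieve(query, documents, top_k=1):
    """Return the top_k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: similarity(q, embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query, context_docs):
    """Supply the retrieved content to the LLM as context."""
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

documents = [
    "Milvus requires an etcd service for metadata storage.",
    "The dashboard shows deployed models in the Deployments tab.",
]
question = "Which service does Milvus require for metadata?"
docs = retrieve(question, documents)
prompt = build_prompt(question, docs)
```

The prompt built this way reflects both the question and the retrieved content, which is what lets the model produce grounded, verifiable answers.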

3.1.1. Audience for RAG

The target audience for RAG is practitioners who build data-grounded conversational AI applications using OpenShift AI infrastructure.

For Data Scientists
Data scientists can use RAG to prototype and validate models that answer natural-language queries against data sources without managing low-level embedding pipelines or vector stores. They can focus on creating prompts and evaluating model outputs instead of building retrieval infrastructure.
For MLOps Engineers
MLOps engineers typically deploy and operate RAG pipelines in production. Within OpenShift AI, they manage LLM endpoints, monitor performance, and ensure that both retrieval and generation scale reliably. RAG decouples vector store maintenance from the serving layer, enabling MLOps engineers to apply CI/CD workflows to data ingestion and model deployment alike.
For Data Engineers
Data engineers build workflows to load data into storage that OpenShift AI indexes. They keep embeddings in sync with source systems, such as S3 buckets or relational tables, to ensure that chatbot responses are accurate.
For AI Engineers
AI engineers architect RAG chatbots by defining prompt templates, retrieval methods, and fallback logic. They configure agents and add domain-specific tools, such as OpenShift job triggers, enabling rapid iteration.

3.2. Overview of vector databases

Vector databases are a crucial component of retrieval-augmented generation (RAG) in OpenShift AI. They store and index vector embeddings that represent the semantic meaning of text or other data. When you integrate vector databases with Llama Stack in OpenShift AI, you can build RAG applications that combine large language models (LLMs) with relevant, domain-specific knowledge.

Vector databases provide the following capabilities:

  • Store vector embeddings generated by embedding models.
  • Support efficient similarity search to retrieve semantically related content.
  • Enable RAG workflows by supplying the LLM with contextually relevant data from a specific domain.
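The core operation behind these capabilities, similarity search over stored embeddings, can be illustrated with a small self-contained sketch. The brute-force cosine search below is for illustration only; production vector databases such as Milvus use approximate nearest neighbor (ANN) indexes to scale, and the vectors here are made-up sample values.

```python
import math

# A minimal in-memory "embedding store": ids mapped to sample vectors.
# A real vector database indexes these vectors for fast ANN search.
store = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.9, 0.2],
    "doc-3": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, top_k=2):
    """Return the ids of the top_k most similar stored vectors (brute force)."""
    ranked = sorted(store, key=lambda doc_id: cosine(query_vec, store[doc_id]), reverse=True)
    return ranked[:top_k]

results = search([1.0, 0.0, 0.0])
```

The query vector retrieves the stored entries that point in the most similar direction, which is how semantically related content is found regardless of exact keyword overlap.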

When you deploy RAG workloads in OpenShift AI, you can deploy vector databases through the Llama Stack Operator. Currently, OpenShift AI supports the following vector databases:

  • Inline Milvus Lite: An inline Milvus vector database runs embedded within the Llama Stack Distribution (LSD) pod and is suitable for lightweight experimentation and small-scale development. Inline Milvus stores data in a local SQLite database and is limited in scale and persistence.
  • Inline FAISS: Inline FAISS provides an alternative lightweight vector store for RAG workloads. FAISS (Facebook AI Similarity Search) is an open-source library for efficient similarity search and clustering of dense vectors. When configured inline with a SQLite backend, FAISS runs entirely within the Llama Stack container and stores embeddings locally without requiring a separate database service. Inline FAISS is ideal for testing and experimental RAG deployments.
  • Remote Milvus: A remote Milvus vector database runs as a standalone service in your project namespace or as an external managed deployment. Remote Milvus is best for production-grade RAG use cases because it provides persistence, scalability, and isolation from the Llama Stack Distribution (LSD) pod. In OpenShift environments, you must deploy Milvus with an etcd service directly in your project. For more information about using etcd services, see Providing redundancy with etcd.

Consider the following points when you decide which vector database to use for your RAG workloads:

  • Use inline Milvus Lite if you want to experiment quickly with RAG in a self-contained setup and do not require persistence across pod restarts.
  • Use inline FAISS if you need a lightweight, in-process vector store with local persistence through SQLite and no network dependency.
  • Use remote Milvus if you need reliable storage, high availability, and the ability to scale RAG workloads in your OpenShift AI environment.

3.2.1. Overview of Milvus vector databases

Milvus is an open source vector database designed for high-performance similarity search across embedding data. In OpenShift AI, Milvus is supported as a remote vector database provider for the Llama Stack Operator. Milvus enables retrieval-augmented generation (RAG) workloads that require persistence, scalability, and efficient search across large document collections.

Milvus vector databases provide you with the following capabilities in OpenShift AI:

  • Similarity search using Approximate Nearest Neighbor (ANN) algorithms.
  • Persistent storage support for vectors.
  • Indexing and query optimizations for embedding-based search.
  • Integration with external metadata and APIs.

In OpenShift AI, you can use Milvus vector databases in the following operational modes:

  • Inline Milvus Lite, which runs embedded in the Llama Stack Distribution pod for testing or small-scale experiments.
  • Remote Milvus, which runs as a standalone service in your OpenShift project or as an external managed Milvus service. Remote Milvus is recommended for production workloads.

When you deploy a remote Milvus vector database, you must run the following components in your OpenShift project:

  • Secret (milvus-secret): Stores sensitive data such as the Milvus root password.
  • PersistentVolumeClaim (milvus-pvc): Provides persistent storage for Milvus data.
  • Deployment (etcd-deployment): Runs an etcd instance that Milvus uses for metadata storage and service coordination.
  • Service (etcd-service): Exposes the etcd port for Milvus to connect to.
  • Deployment (milvus-standalone): Runs Milvus in standalone mode and connects it to the etcd service and PVC.
  • Service (milvus-service): Exposes Milvus gRPC (19530) and HTTP (9091 health check) ports for client access.

Milvus requires an etcd service to manage metadata such as collections, indexes, and partitions, and to provide service discovery and coordination among Milvus components. Even when running in standalone mode, Milvus depends on etcd to operate correctly and maintain metadata consistency. For more information on using etcd services, see Providing redundancy with etcd.

Important

Do not use the OpenShift control plane etcd for Milvus. You must deploy a separate etcd instance inside your project or connect to an external etcd service.

Use remote Milvus when you require a persistent, scalable, and production-ready vector database that integrates seamlessly with OpenShift AI. Consider choosing a remote Milvus vector database if your deployment must meet the following requirements:

  • Persistent vector storage across restarts or upgrades.
  • Scalable indexing and high-performance vector search.
  • A production-grade RAG architecture integrated with OpenShift AI.

3.2.2. Overview of FAISS vector databases

The FAISS (Facebook AI Similarity Search) library is an open-source framework for high-performance vector search and clustering. It is optimized for dense numerical embeddings and supports both CPU and GPU execution. You can enable inline FAISS in OpenShift AI with an embedded SQLite backend in your Llama Stack Distribution. This configures Llama Stack to use FAISS as an in-process vector store, storing embeddings locally within the container without requiring a separate vector database service.

Inline FAISS enables efficient similarity search and retrieval within retrieval-augmented generation (RAG) workflows. It operates entirely within the LlamaStackDistribution instance, making it a lightweight option for experimental and testing environments.

Inline FAISS offers the following benefits:

  • Simplified setup with no external database or network dependencies.
  • Persistent local storage of FAISS vector data.
  • Reduced latency for embedding ingestion and query operations.
  • Compatibility with OpenAI-compatible Vector Store API endpoints.

Inline FAISS stores vectors either in memory or in a local SQLite database file, allowing the deployment to retain vector data across sessions with minimal overhead.
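The local-persistence model can be illustrated with a short sketch: vectors serialized into a SQLite file survive process restarts. This is a conceptual example only and does not reflect the actual schema that the Llama Stack FAISS provider manages internally; the file path shown in the comment is hypothetical.

```python
import json
import sqlite3

# Conceptual sketch: persist embeddings in a local SQLite database so they
# survive restarts. The real inline FAISS provider manages its own schema.
DB_PATH = ":memory:"  # use a file path (for example, a path on a mounted volume) for persistence

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS vectors (id TEXT PRIMARY KEY, embedding TEXT)")

def upsert(vec_id, embedding):
    """Store or replace an embedding, serialized as JSON."""
    conn.execute(
        "INSERT OR REPLACE INTO vectors (id, embedding) VALUES (?, ?)",
        (vec_id, json.dumps(embedding)),
    )
    conn.commit()

def load(vec_id):
    """Read an embedding back from the store, or None if absent."""
    row = conn.execute("SELECT embedding FROM vectors WHERE id = ?", (vec_id,)).fetchone()
    return json.loads(row[0]) if row else None

upsert("chunk-1", [0.12, 0.34, 0.56])
restored = load("chunk-1")
```

Because the store is a single local file, there is no network dependency, which is the trade-off that makes inline FAISS simple but unsuitable for distributed or highly available deployments.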

Note

Inline FAISS is best for experimental or testing environments. It does not provide distributed storage or high availability. For production-grade workloads that require scalability or redundancy, consider using an external vector database, such as Milvus.

3.3. Deploying a Llama model with KServe

To use Llama Stack and retrieval-augmented generation (RAG) workloads in OpenShift AI, you must deploy a Llama model with a vLLM model server and configure KServe in KServe RawDeployment mode.

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have logged in to Red Hat OpenShift AI.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have activated the Llama Stack Operator.
  • You have installed KServe.
  • You have enabled the model serving platform. For more information about enabling the model serving platform, see Enabling the model serving platform.
  • You can access the model serving platform in the dashboard configuration. For more information about setting dashboard configuration options, see Customizing the dashboard.
  • You have enabled GPU support in OpenShift AI, including installing the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • You have created a project.
  • The vLLM serving runtime is installed and available in your environment.
  • You have created a storage connection for your model that contains a URI - v1 connection type. This storage connection must define the location of your Llama 3.2 model artifacts. For example, oci://quay.io/redhat-ai-services/modelcar-catalog:llama-3.2-3b-instruct. For more information about creating storage connections, see Adding a connection to your project.
Procedure

These steps are only supported in OpenShift AI versions 2.19 and later.

  1. In the OpenShift AI dashboard, navigate to the project details page and click the Deployments tab.
  2. In the Model serving platform tile, click Select model.
  3. Click the Deploy model button.

    The Deploy model dialog opens.

  4. Configure the deployment properties for your model:

    1. In the Model deployment name field, enter a unique name for your deployment.
    2. In the Serving runtime field, select vLLM NVIDIA GPU serving runtime for KServe from the drop-down list.
    3. In the Deployment mode field, select KServe RawDeployment from the drop-down list.
    4. Set Number of model server replicas to deploy to 1.
    5. In the Model server size field, select Custom from the drop-down list.

      • Set CPUs requested to 1 core.
      • Set Memory requested to 10 GiB.
      • Set CPU limit to 2 cores.
      • Set Memory limit to 14 GiB.
      • Set Accelerator to NVIDIA GPUs.
      • Set Accelerator count to 1.
    6. From the Connection type drop-down list, select a relevant data connection.
  5. In the Additional serving runtime arguments field, specify the following recommended arguments:

    --dtype=half
    --max-model-len=20000
    --gpu-memory-utilization=0.95
    --enable-chunked-prefill
    --enable-auto-tool-choice
    --tool-call-parser=llama3_json
    --chat-template=/app/data/template/tool_chat_template_llama3.2_json.jinja
    1. Click Deploy.

      Note

      Model deployment can take several minutes, especially for the first model that is deployed on the cluster. Initial deployment may take more than 10 minutes while the relevant images download.

Verification

  1. Verify that the kserve-controller-manager and odh-model-controller pods are running:

    1. Open a new terminal window.
    2. Log in to your OpenShift cluster from the CLI:
    3. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    4. After you have logged in, click Display token.
    5. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
    6. Enter the following command to verify that the kserve-controller-manager and odh-model-controller pods are running:

      $ oc get pods -n redhat-ods-applications | grep -E 'kserve-controller-manager|odh-model-controller'
    7. Confirm that you see output similar to the following example:

      kserve-controller-manager-7c865c9c9f-xyz12   1/1     Running   0          4m21s
      odh-model-controller-7b7d5fd9cc-wxy34        1/1     Running   0          3m55s
    8. If you do not see the kserve-controller-manager or odh-model-controller pod, there might be a problem with your deployment. If the pods appear in the list but their Status is not set to Running, check the pod logs for errors:

      $ oc logs <pod-name> -n redhat-ods-applications
    9. Check the status of the inference service:

      $ oc get inferenceservice -n <project name>
      $ oc get pods -n <project name> | grep llama
      • The deployment automatically creates the following resources:

        • A ServingRuntime resource.
        • An InferenceService resource, a Deployment, a pod, and a service pointing to the pod.
      • Verify that the server is running. For example:

        $ oc logs llama-32-3b-instruct-predictor-77f6574f76-8nl4r -n <project name>

        Check for output similar to the following example log:

        INFO     2025-05-15 11:23:52,750 __main__:498 server: Listening on ['::', '0.0.0.0']:8321
        INFO:     Started server process [1]
        INFO:     Waiting for application startup.
        INFO     2025-05-15 11:23:52,765 __main__:151 server: Starting up
        INFO:     Application startup complete.
        INFO:     Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
      • The deployed model displays in the Deployments tab on the project details page for the project it was deployed under.
  2. If you see a ConvertTritonGPUToLLVM error in the pod logs when querying the /v1/chat/completions API, and the vLLM server restarts or returns a 500 Internal Server error, apply the following workaround:

    Before deploying the model, remove the --enable-chunked-prefill argument from the Additional serving runtime arguments field in the deployment dialog.

    An error similar to the following is displayed:

    /opt/vllm/lib64/python3.12/site-packages/vllm/attention/ops/prefix_prefill.py:36:0: error: Failures have been detected while processing an MLIR pass pipeline
    /opt/vllm/lib64/python3.12/site-packages/vllm/attention/ops/prefix_prefill.py:36:0: note: Pipeline failed while executing [`ConvertTritonGPUToLLVM` on 'builtin.module' operation]: reproducer generated at `std::errs, please share the reproducer above with Triton project.`
    INFO:     10.129.2.8:0 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error

3.4. Testing your vLLM model endpoints

To verify that your deployed Llama 3.2 model is accessible externally, ensure that your vLLM model server is exposed as a network endpoint. You can then test access to the model from outside both the OpenShift cluster and the OpenShift AI interface.

Important

If you selected Make deployed models available through an external route during deployment, your vLLM model endpoint is already accessible outside the cluster. You do not need to manually expose the model server. Manually exposing vLLM model endpoints, for example, by using oc expose, creates an unsecured route unless you configure authentication. Avoid exposing endpoints without security controls to prevent unauthorized access.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You have logged in to Red Hat OpenShift AI.
  • You have activated the Llama Stack Operator in OpenShift AI.
  • You have deployed an inference model, for example, the llama-3.2-3b-instruct model.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

Procedure

  1. Open a new terminal window.

    1. Log in to your OpenShift cluster from the CLI:
    2. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    3. After you have logged in, click Display token.
    4. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  2. If you enabled Require token authentication during model deployment, retrieve your token:

    $ export MODEL_TOKEN=$(oc get secret default-name-llama-32-3b-instruct-sa -n <project name> --template={{ .data.token }} | base64 -d)
  3. Obtain your model endpoint URL:

    • If you enabled Make deployed models available through an external route during model deployment, click Endpoint details on the Deployments page in the OpenShift AI dashboard to obtain your model endpoint URL.
    • Alternatively, if you did not enable Require token authentication during model deployment, you can enter the following command to retrieve the endpoint URL:

      $ export MODEL_ENDPOINT="https://$(oc get route llama-32-3b-instruct -n <project name> --template={{ .spec.host }})"
  4. Test the endpoint with a sample chat completion request:

    • If you did not enable Require token authentication during model deployment, enter a chat completion request. For example:

      $ curl -X POST $MODEL_ENDPOINT/v1/chat/completions \
       -H "Content-Type: application/json" \
       -d '{
       "model": "llama-32-3b-instruct",
       "messages": [
         {
           "role": "user",
           "content": "Hello"
         }
       ]
      }'
    • If you enabled Require token authentication during model deployment, include a token in your request. For example:

      curl -s -k $MODEL_ENDPOINT/v1/chat/completions \
      --header "Authorization: Bearer $MODEL_TOKEN" \
      --header 'Content-Type: application/json' \
      -d '{
        "model": "llama-32-3b-instruct",
        "messages": [
          {
            "role": "user",
            "content": "can you tell me a funny joke?"
          }
        ]
      }' | jq .
      Note

      The -k flag disables TLS certificate verification. Use it only in test environments or with self-signed certificates.
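You can send the same request from Python, for example from a notebook in your project. The helper below only assembles the request; the commented line shows where you would post it. The endpoint URL and token values are placeholders, and the requests package is an assumed dependency for the commented call.

```python
import json

def build_chat_request(endpoint, model, prompt, token=None):
    """Assemble the URL, headers, and JSON body for a chat completion call."""
    url = f"{endpoint}/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    if token:  # only needed when Require token authentication is enabled
        headers["Authorization"] = f"Bearer {token}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://example-model-route.apps.example.com",  # placeholder endpoint URL
    "llama-32-3b-instruct",
    "Hello",
    token="<token>",
)
# To send the request (requires the requests package and network access):
# response = requests.post(url, headers=headers, data=body)
```

Keeping certificate verification enabled in the posting call is the Python equivalent of omitting the -k flag from curl.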

Verification

Confirm that you received a JSON response containing a chat completion. For example:

{
  "id": "chatcmpl-05d24b91b08a4b78b0e084d4cc91dd7e",
  "object": "chat.completion",
  "created": 1747279170,
  "model": "llama-32-3b-instruct",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "reasoning_content": null,
      "content": "Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?",
      "tool_calls": []
    },
    "logprobs": null,
    "finish_reason": "stop",
    "stop_reason": null
  }],
  "usage": {
    "prompt_tokens": 37,
    "total_tokens": 62,
    "completion_tokens": 25,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}

If you do not receive a response similar to the example, verify that the endpoint URL and token are correct, and ensure your model deployment is running.
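If you script this verification, you can extract the assistant message and token usage from the JSON response. The sketch below parses an abridged copy of the example response shown above.

```python
import json

# Abridged copy of the example chat completion response shown above.
raw = '''
{
  "model": "llama-32-3b-instruct",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello! It's nice to meet you."},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 37, "total_tokens": 62, "completion_tokens": 25}
}
'''

response = json.loads(raw)
# The generated text lives in choices[0].message.content.
answer = response["choices"][0]["message"]["content"]
# finish_reason "stop" indicates the model completed normally.
finish = response["choices"][0]["finish_reason"]
# usage reports token consumption for the request.
used = response["usage"]["total_tokens"]
```

A script can assert that finish_reason is "stop" and that content is non-empty as a lightweight health check of the endpoint.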

3.5. Deploying a remote Milvus vector database

To use Milvus as a remote vector database provider for Llama Stack in OpenShift AI, you must deploy Milvus and its required etcd service in your OpenShift project. This procedure shows how to deploy Milvus in standalone mode without the Milvus Operator.

Note

The following example configuration is intended for testing or evaluation environments. For production-grade deployments, see https://milvus.io/docs in the Milvus documentation.

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You are logged in to Red Hat OpenShift AI.
  • You have a StorageClass available that can provision persistent volumes.
  • You have created a root password to secure your Milvus service.
  • You have deployed an inference model with vLLM, for example, the llama-3.2-3b-instruct model, and you have selected Make deployed models available through an external route and Require token authentication during model deployment.
  • You have the correct inference model identifier, for example, llama-3-2-3b.
  • You have the model endpoint URL, ending with /v1, such as https://llama-32-3b-instruct-predictor:8443/v1.
  • You have the API token required to access the model endpoint.
  • You have installed the OpenShift command line interface (oc) as described in Installing the OpenShift CLI.

Procedure

  1. In the OpenShift console, click the Quick Create icon and then click the Import YAML option.
  2. Verify that your project is the selected project.
  3. In the Import YAML editor, paste the following manifest and click Create:

    apiVersion: v1
    kind: Secret
    metadata:
      name: milvus-secret
    type: Opaque
    stringData:
      root-password: "MyStr0ngP@ssw0rd"
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: milvus-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      volumeMode: Filesystem
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: etcd-deployment
      labels:
        app: etcd
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: etcd
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: etcd
        spec:
          containers:
            - name: etcd
              image: quay.io/coreos/etcd:v3.5.5
              command:
                - etcd
                - --advertise-client-urls=http://127.0.0.1:2379
                - --listen-client-urls=http://0.0.0.0:2379
                - --data-dir=/etcd
              ports:
                - containerPort: 2379
              volumeMounts:
                - name: etcd-data
                  mountPath: /etcd
              env:
                - name: ETCD_AUTO_COMPACTION_MODE
                  value: revision
                - name: ETCD_AUTO_COMPACTION_RETENTION
                  value: "1000"
                - name: ETCD_QUOTA_BACKEND_BYTES
                  value: "4294967296"
                - name: ETCD_SNAPSHOT_COUNT
                  value: "50000"
          volumes:
            - name: etcd-data
              emptyDir: {}
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: etcd-service
    spec:
      ports:
        - port: 2379
          targetPort: 2379
      selector:
        app: etcd
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: milvus-standalone
      name: milvus-standalone
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: milvus-standalone
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: milvus-standalone
        spec:
          containers:
            - name: milvus-standalone
              image: milvusdb/milvus:v2.6.0
              args: ["milvus", "run", "standalone"]
              env:
                - name: DEPLOY_MODE
                  value: standalone
                - name: ETCD_ENDPOINTS
                  value: etcd-service:2379
                - name: COMMON_STORAGETYPE
                  value: local
                - name: MILVUS_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: milvus-secret
                      key: root-password
              livenessProbe:
                exec:
                  command: ["curl", "-f", "http://localhost:9091/healthz"]
                initialDelaySeconds: 90
                periodSeconds: 30
                timeoutSeconds: 20
                failureThreshold: 5
              ports:
                - containerPort: 19530
                  protocol: TCP
                - containerPort: 9091
                  protocol: TCP
              volumeMounts:
                - name: milvus-data
                  mountPath: /var/lib/milvus
          restartPolicy: Always
          volumes:
            - name: milvus-data
              persistentVolumeClaim:
                claimName: milvus-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: milvus-service
    spec:
      selector:
        app: milvus-standalone
      ports:
        - name: grpc
          port: 19530
          targetPort: 19530
        - name: http
          port: 9091
          targetPort: 9091
    Note
    • Use the gRPC port (19530) for the MILVUS_ENDPOINT setting in Llama Stack.
    • The HTTP port (9091) is reserved for health checks.
    • If you deploy Milvus in a different namespace, use the fully qualified service name in your Llama Stack configuration. For example: http://milvus-service.<namespace>.svc.cluster.local:19530
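The endpoint rules in the note above can be captured in a small helper that builds the MILVUS_ENDPOINT value. This is an illustrative sketch; the namespace value shown is a placeholder, and the service name and gRPC port match the manifest in this procedure.

```python
def milvus_endpoint(namespace=None, service="milvus-service", grpc_port=19530):
    """Build the MILVUS_ENDPOINT value for Llama Stack.

    Use the fully qualified service name only when Milvus runs in a
    different namespace than the Llama Stack Distribution.
    """
    host = f"{service}.{namespace}.svc.cluster.local" if namespace else service
    return f"http://{host}:{grpc_port}"

same_ns = milvus_endpoint()
cross_ns = milvus_endpoint(namespace="rag-demo")  # "rag-demo" is a placeholder namespace
```

Note that the helper always uses the gRPC port (19530); the HTTP port (9091) is reserved for health checks and is never part of MILVUS_ENDPOINT.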

Verification

  1. In the OpenShift web console, click Workloads → Deployments.
  2. Verify that both etcd-deployment and milvus-standalone show a status of 1 of 1 pods available.
  3. Click Pods in the navigation panel and confirm that pods for both deployments are Running.
  4. Click the milvus-standalone pod name, then select the Logs tab.
  5. Verify that Milvus reports a healthy startup with output similar to:

    Milvus Standalone is ready to serve ...
    Listening on 0.0.0.0:19530 (gRPC)
  6. Click Networking → Services and confirm that the milvus-service and etcd-service resources exist and are exposed on ports 19530 and 2379, respectively.
  7. (Optional) Click Pods → milvus-standalone → Terminal and run the following health check:

    curl http://localhost:9091/healthz

    A response of {"status": "healthy"} confirms that Milvus is running correctly.

3.6. Deploying a LlamaStackDistribution instance

You can deploy Llama Stack with retrieval-augmented generation (RAG) by pairing it with a vLLM-served Llama 3.2 model. This module provides the following deployment examples of the LlamaStackDistribution custom resource (CR):

  • Example A: Inline Milvus (embedded, single-node)
  • Example B: Remote Milvus (external service)
  • Example C: Inline FAISS (SQLite backend)

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You are logged in to Red Hat OpenShift AI.
  • You have activated the Llama Stack Operator in OpenShift AI.
  • You have deployed an inference model with vLLM (for example, llama-3.2-3b-instruct) and selected Make deployed models available through an external route and Require token authentication during model deployment. In addition, in Add custom runtime arguments, you have added --enable-auto-tool-choice.
  • You have the correct inference model identifier, for example, llama-3-2-3b.
  • You have the model endpoint URL ending with /v1, for example, https://llama-32-3b-instruct-predictor:8443/v1.
  • You have the API token required to access the model endpoint.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

Procedure

  1. Open a new terminal window and log in to your OpenShift cluster from the CLI:

    In the upper-right corner of the OpenShift web console, click your user name and select Copy login command. After you have logged in, click Display token. Copy the Log in with this token command and paste it in the OpenShift CLI (oc).

    $ oc login --token=<token> --server=<openshift_cluster_url>
  2. Create a secret that contains the inference model environment variables:

    export INFERENCE_MODEL="llama-3-2-3b"
    export VLLM_URL="https://llama-32-3b-instruct-predictor:8443/v1"
    export VLLM_TLS_VERIFY="false"   # Use "true" in production
    export VLLM_API_TOKEN="<token identifier>"
    
    oc create secret generic llama-stack-inference-model-secret \
      --from-literal=INFERENCE_MODEL="$INFERENCE_MODEL" \
      --from-literal=VLLM_URL="$VLLM_URL" \
      --from-literal=VLLM_TLS_VERIFY="$VLLM_TLS_VERIFY" \
      --from-literal=VLLM_API_TOKEN="$VLLM_API_TOKEN"
  3. Choose one of the following deployment examples:
Important

To enable Llama Stack in a disconnected environment, you must add the following environment variables to your LlamaStackDistribution custom resource.

- name: SENTENCE_TRANSFORMERS_HOME
  value: /opt/app-root/src/.cache/huggingface/hub
- name: HF_HUB_OFFLINE
  value: "1"
- name: TRANSFORMERS_OFFLINE
  value: "1"
- name: HF_DATASETS_OFFLINE
  value: "1"

The built-in Llama Stack websearch tool is not available in the Red Hat Llama Stack Distribution in disconnected environments. In addition, the built-in Llama Stack wolfram_alpha tool is not available in the Red Hat Llama Stack Distribution on any cluster.

Use this example for development or small datasets where an embedded, single-node Milvus is sufficient. No MILVUS_* connection variables are required.

  1. In the Administrator perspective of the OpenShift web console, click the Quick Create icon, select Import YAML, and create a CR similar to the following:

    apiVersion: llamastack.io/v1alpha1
    kind: LlamaStackDistribution
    metadata:
      name: lsd-llama-milvus-inline
    spec:
      replicas: 1
      server:
        containerSpec:
          resources:
            requests:
              cpu: "250m"
              memory: "500Mi"
            limits:
              cpu: 4
              memory: "12Gi"
          env:
            - name: INFERENCE_MODEL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: INFERENCE_MODEL
            - name: VLLM_MAX_TOKENS
              value: "4096"
            - name: VLLM_URL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_URL
            - name: VLLM_TLS_VERIFY
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_TLS_VERIFY
            - name: VLLM_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_API_TOKEN
          name: llama-stack
          port: 8321
        distribution:
          name: rh-dev
    Note

    The rh-dev value is an internal image reference. When you create the LlamaStackDistribution custom resource, the OpenShift AI Operator automatically resolves rh-dev to the container image in the appropriate registry. This internal image reference allows the underlying image to update without requiring changes to your custom resource.

Use this example for production-grade or large datasets with an external Milvus service. This configuration reads both MILVUS_ENDPOINT and MILVUS_TOKEN from a dedicated secret.

  1. Create the Milvus connection secret:

    # Required: gRPC endpoint on port 19530
    export MILVUS_ENDPOINT="tcp://milvus-service:19530"
    export MILVUS_TOKEN="<milvus-root-or-user-token>"
    export MILVUS_CONSISTENCY_LEVEL="Bounded"   # Optional; choose per your deployment
    
    oc create secret generic milvus-secret \
      --from-literal=MILVUS_ENDPOINT="$MILVUS_ENDPOINT" \
      --from-literal=MILVUS_TOKEN="$MILVUS_TOKEN" \
      --from-literal=MILVUS_CONSISTENCY_LEVEL="$MILVUS_CONSISTENCY_LEVEL"
    Important

    Use the gRPC port 19530 for MILVUS_ENDPOINT. Ports such as 9091 are typically used for health checks and are not valid for client traffic.
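The port requirement is easy to validate programmatically. The following is an illustrative sketch, not part of the product:

```python
from urllib.parse import urlparse

# Illustrative check: the Milvus client endpoint must use the gRPC port 19530.
def is_valid_milvus_endpoint(endpoint: str) -> bool:
    parsed = urlparse(endpoint)
    # Ports such as 9091 serve health checks and metrics, not client traffic.
    return parsed.port == 19530

print(is_valid_milvus_endpoint("tcp://milvus-service:19530"))  # True
print(is_valid_milvus_endpoint("tcp://milvus-service:9091"))   # False
```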

  2. In the Administrator perspective of the OpenShift web console, click the Quick Create icon, select Import YAML, and create a CR similar to the following:

    apiVersion: llamastack.io/v1alpha1
    kind: LlamaStackDistribution
    metadata:
      name: lsd-llama-milvus-remote
    spec:
      replicas: 1
      server:
        containerSpec:
          resources:
            requests:
              cpu: "250m"
              memory: "500Mi"
            limits:
              cpu: 4
              memory: "12Gi"
          env:
            - name: INFERENCE_MODEL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: INFERENCE_MODEL
            - name: VLLM_MAX_TOKENS
              value: "4096"
            - name: VLLM_URL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_URL
            - name: VLLM_TLS_VERIFY
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_TLS_VERIFY
            - name: VLLM_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_API_TOKEN
            # --- Remote Milvus configuration from secret ---
            - name: MILVUS_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: milvus-secret
                  key: MILVUS_ENDPOINT
            - name: MILVUS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: milvus-secret
                  key: MILVUS_TOKEN
            - name: MILVUS_CONSISTENCY_LEVEL
              valueFrom:
                secretKeyRef:
                  name: milvus-secret
                  key: MILVUS_CONSISTENCY_LEVEL
          name: llama-stack
          port: 8321
        distribution:
          name: rh-dev

Use this example to enable the inline FAISS vector store. This configuration stores vector data locally within the Llama Stack container using an embedded SQLite database.

  1. In the Administrator perspective of the OpenShift web console, click the Quick Create icon, select Import YAML, and create a CR similar to the following:

    apiVersion: llamastack.io/v1alpha1
    kind: LlamaStackDistribution
    metadata:
      name: lsd-llama-faiss-inline
    spec:
      replicas: 1
      server:
        containerSpec:
          resources:
            requests:
              cpu: "250m"
              memory: "500Mi"
            limits:
              cpu: "8"
              memory: "12Gi"
          env:
            # vLLM inference model configuration
            - name: INFERENCE_MODEL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: INFERENCE_MODEL
            - name: VLLM_URL
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_URL
            - name: VLLM_TLS_VERIFY
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_TLS_VERIFY
            - name: VLLM_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: llama-stack-inference-model-secret
                  key: VLLM_API_TOKEN
    
            # Enable inline FAISS with SQLite backend
            - name: ENABLE_FAISS
              value: faiss
            - name: FAISS_KVSTORE_DB_PATH
              value: /opt/app-root/src/.llama/distributions/rh/sqlite_vec.db
    
            # Recommended workarounds for SQLite accessibility and version check
            - name: LLAMA_STACK_CONFIG_DIR
              value: /opt/app-root/src/.llama/distributions/rh
          name: llama-stack
          port: 8321
        distribution:
          name: rh-dev
Note

The FAISS_KVSTORE_DB_PATH environment variable defines the local path where the FAISS SQLite backend stores its index data. Ensure that this directory exists and is writable inside the container. Inline FAISS is only suitable for experimental or testing use cases.

  1. Click Create.

Verification

  • In the left-hand navigation, click Workloads → Pods and verify that the Llama Stack pod is running in the correct namespace.
  • To verify that the Llama Stack server is running, click the pod name and select the Logs tab. Look for output similar to the following:

    INFO     2025-05-15 11:23:52,750 __main__:498 server: Listening on ['::', '0.0.0.0']:8321
    INFO:     Started server process [1]
    INFO:     Waiting for application startup.
    INFO     2025-05-15 11:23:52,765 __main__:151 server: Starting up
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
  • Click Networking → Services and confirm that a Service resource for the Llama Stack backend is present in your namespace and points to the running pod.
Tip

If you switch from Inline Milvus to Remote Milvus, delete the existing pod to ensure the new environment variables and backing store are picked up cleanly.

3.7. Ingesting content into a Llama model

You can quickly customize and prototype your retrievable content by ingesting raw text into your model from inside a Jupyter notebook. This approach avoids building a separate ingestion pipeline. By using the Llama Stack SDK, you can embed and store text in your vector store in real time, enabling immediate RAG workflows.

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have deployed a Llama 3.2 model with a vLLM model server.
  • You have created a LlamaStackDistribution instance (Llama Stack).
  • You have created a workbench within a project.
  • You have opened a Jupyter notebook and it is running in your workbench environment.
  • You have installed the llama_stack_client version 0.3.1 or later in your workbench environment.
  • If you use a remote vector store, your environment has network access to that service through OpenShift.

Procedure

  1. In a new notebook cell, install the client:

    %pip install llama_stack_client
  2. Import LlamaStackClient and create a client instance:

    from llama_stack_client import LlamaStackClient
    client = LlamaStackClient(base_url="<your deployment endpoint>")
  3. List the available models:

    # Fetch all registered models
    models = client.models.list()
  4. Verify that the list includes your Llama model and an embedding model. For example:

    [Model(identifier='llama-32-3b-instruct', metadata={}, api_model_type='llm', provider_id='vllm-inference', provider_resource_id='llama-32-3b-instruct', type='model', model_type='llm'),
     Model(identifier='ibm-granite/granite-embedding-125m-english', metadata={'embedding_dimension': 768.0}, api_model_type='embedding', provider_id='sentence-transformers', provider_resource_id='ibm-granite/granite-embedding-125m-english', type='model', model_type='embedding')]
  5. Select one LLM and one embedding model:

    model_id = next(m.identifier for m in models if m.model_type == "llm")
    embedding_model = next(m for m in models if m.model_type == "embedding")
    embedding_model_id = embedding_model.identifier
    embedding_dimension = int(embedding_model.metadata["embedding_dimension"])
  6. (Optional) Create a vector store (choose one). Skip this step if you already have one.

    Example 3.1. Option 1: Inline Milvus Lite (embedded)

    vector_store_name = "my_inline_db"
    vector_store = client.vector_stores.create(
        name=vector_store_name,
        extra_body={
            "embedding_model": embedding_model_id,
            "embedding_dimension": embedding_dimension,
            "provider_id": "milvus",   # inline Milvus Lite
        },
    )
    vector_store_id = vector_store.id
    print(f"Registered inline Milvus Lite DB: {vector_store_id}")
    Note

    Use inline Milvus Lite for development and small datasets. Persistence and scale are limited compared to remote Milvus.

Example 3.2. Option 2: Remote Milvus (recommended for production)

vector_store_name = "my_remote_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "milvus-remote",  # remote Milvus provider
    },
)
vector_store_id = vector_store.id
print(f"Registered remote Milvus DB: {vector_store_id}")
Note

Ensure your LlamaStackDistribution sets MILVUS_ENDPOINT (gRPC :19530) and MILVUS_TOKEN.

Example 3.3. Option 3: Inline FAISS (SQLite backend)

vector_store_name = "my_faiss_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "faiss",   # inline FAISS provider
    },
)
vector_store_id = vector_store.id
print(f"Registered inline FAISS DB: {vector_store_id}")
Note

Inline FAISS (available in OpenShift AI 3.0 and later) is a lightweight, in-process vector store with SQLite-based persistence. It is best for local experimentation, disconnected environments, or single-node RAG deployments.

  1. If you already have a vector store, set its identifier:

    # For an existing vector store:
    # vector_store_id = "<your existing vector store ID>"
  2. Define raw text to ingest:

    raw_text = """Llama Stack can embed raw text into a vector store for retrieval.
    This example ingests a small passage for demonstration."""
  3. Ingest raw text by using the Vector Store Files API:

    items = [
        {
            "id": "raw_text_001",
            "text": raw_text,
            "mime_type": "text/plain",
            "metadata": {"source": "example_passage"},
        }
    ]
    result = client.vector_stores.files.create(
        vector_store_id=vector_store_id,
        items=items,
        chunk_size_in_tokens=100,
    )
    print("Text ingestion result:", result)
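The chunk_size_in_tokens parameter controls how the ingested text is split into chunks before embedding. As a rough mental model only, the sketch below splits on whitespace tokens; the server uses the embedding model's real tokenizer:

```python
# Approximate illustration of fixed-size chunking. Real ingestion counts
# tokenizer tokens, not whitespace-separated words.
def chunk_text(text: str, chunk_size_in_tokens: int) -> list[str]:
    tokens = text.split()
    return [
        " ".join(tokens[i : i + chunk_size_in_tokens])
        for i in range(0, len(tokens), chunk_size_in_tokens)
    ]

sample = "Llama Stack can embed raw text into a vector store for retrieval."
print(chunk_text(sample, 5))
```

Smaller chunks give more precise retrieval hits; larger chunks preserve more surrounding context per hit.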
  4. Ingest an HTML source:

    html_item = [
        {
            "id": "doc_html_001",
            "text": "https://www.paulgraham.com/greatwork.html",
            "mime_type": "text/html",
            "metadata": {"note": "Example URL"},
        }
    ]
    result = client.vector_stores.files.create(
        vector_store_id=vector_store_id,
        items=html_item,
        chunk_size_in_tokens=50,
    )
    print("HTML ingestion result:", result)

Verification

  • Review the output to confirm successful ingestion. A typical response includes file or chunk counts and any warnings or errors.
  • The model list returned by client.models.list() includes your Llama 3.2 model and an embedding model.

3.8. Querying ingested content in a Llama model

You can use the Llama Stack SDK in your Jupyter notebook to query ingested content by running retrieval-augmented generation (RAG) queries on text or HTML stored in your vector store. You can perform one-off lookups or start multi-turn conversational flows without setting up a separate retrieval service.

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs.
  • If you are using GPU acceleration, you have at least one NVIDIA GPU available.
  • You have activated the Llama Stack Operator in OpenShift AI.
  • You have deployed an inference model, for example, the llama-3.2-3b-instruct model.
  • You have created a LlamaStackDistribution instance to enable RAG functionality.
  • You have created a workbench within a project and opened a running Jupyter notebook.
  • You have installed llama_stack_client version 0.3.1 or later in your workbench environment.
  • You have already ingested content into a vector store.
Note

This procedure requires that you have already ingested some text, HTML, or document data into a vector store, and that this content is available for retrieval. If no content is ingested, queries return empty results.

Procedure

  1. In a new notebook cell, install the client:

    %pip install -q llama_stack_client
  2. In a new notebook cell, import Agent, AgentEventLogger, and LlamaStackClient:

    from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient
  3. Create a client instance by setting your deployment endpoint:

    client = LlamaStackClient(base_url="<your deployment endpoint>")
  4. List available models:

    models = client.models.list()
  5. Select an LLM (and, if needed below, capture an embedding model for store registration):

    model_id = next(m.identifier for m in models if m.model_type == "llm")
    
    embedding = next((m for m in models if m.model_type == "embedding"), None)
    if embedding:
        embedding_model_id = embedding.identifier
        embedding_dimension = int(embedding.metadata.get("embedding_dimension", 768))
  6. If you do not already have a vector store ID, register a vector store (choose one):

    Example 3.4. Option 1: Inline Milvus Lite (embedded)

    vector_store_name = "my_inline_db"
    vector_store = client.vector_stores.create(
        name=vector_store_name,
        extra_body={
            "embedding_model": embedding_model_id,
            "embedding_dimension": embedding_dimension,
            "provider_id": "milvus",   # inline Milvus Lite
        },
    )
    vector_store_id = vector_store.id
    print(f"Registered inline Milvus Lite DB: {vector_store_id}")
    Note

    Use inline Milvus Lite for development and small datasets. Persistence and scale are limited compared to remote Milvus.

Example 3.5. Option 2: Remote Milvus (recommended for production)

vector_store_name = "my_remote_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "milvus-remote",  # remote Milvus provider
    },
)
vector_store_id = vector_store.id
print(f"Registered remote Milvus DB: {vector_store_id}")
Note

Ensure your LlamaStackDistribution sets MILVUS_ENDPOINT (gRPC :19530) and MILVUS_TOKEN.

Example 3.6. Option 3: Inline FAISS (SQLite backend)

vector_store_name = "my_faiss_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "faiss",   # inline FAISS provider
    },
)
vector_store_id = vector_store.id
print(f"Registered inline FAISS DB: {vector_store_id}")
Note

Inline FAISS (available in OpenShift AI 3.0 and later) is a lightweight, in-process vector store with SQLite-based persistence. It is best for local experimentation, disconnected environments, or single-node RAG deployments.

  1. If you already have a vector store, set its identifier:

    # For an existing store:
    # vector_store_id = "<your existing vector store ID>"
  2. Query the ingested content by using the OpenAI-compatible Responses API with file search:

    query = "What benefits do the ingested passages provide for retrieval?"
    
    response = client.responses.create(
        model=model_id,
        input=query,
        tools=[
            {
                "type": "file_search",
                "vector_store_ids": [vector_store_id],
            }
        ],
    )
    print("Responses API result:", getattr(response, "output_text", response))
  3. Query the ingested content by using the high-level Agent API:

    agent = Agent(
        client,
        model=model_id,
        instructions="You are a helpful assistant.",
        tools=[
            {
                "name": "builtin::rag/knowledge_search",
                "args": {"vector_store_ids": [vector_store_id]},
            }
        ],
    )
    
    prompt = "How do you do great work?"
    print("Prompt>", prompt)
    
    session_id = agent.create_session("rag_session")
    stream = agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=session_id,
        stream=True,
    )
    
    for log in AgentEventLogger().log(stream):
        log.print()

Verification

  • The notebook prints query results for both the Responses API and the Agent API.
  • No errors appear in the output, confirming the model can retrieve and respond to ingested content from your vector store.

3.9. Ingesting content by using a Docling-enabled pipeline

You can transform your source documents with a Docling-enabled pipeline and ingest the output into a Llama Stack vector store by using the Llama Stack SDK. This modular approach separates document preparation from ingestion, yet still delivers an end-to-end, retrieval-augmented generation (RAG) workflow.

The pipeline registers a vector store and downloads the source PDFs, then splits them for parallel processing and converts each batch to Markdown with Docling. It generates sentence-transformer embeddings from the Markdown and stores them in the vector store, making the documents searchable through Llama Stack.

Prerequisites

  • You have installed OpenShift 4.19 or newer.
  • You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs.
  • You have logged in to the OpenShift web console.
  • You have a project and access to pipelines in the OpenShift AI dashboard.
  • You have created and configured a pipeline server within the project that contains your workbench.
  • You have activated the Llama Stack Operator in OpenShift AI.
  • You have deployed an inference model, for example, the llama-3.2-3b-instruct model.
  • You have configured a Llama Stack deployment by creating a LlamaStackDistribution instance to enable RAG functionality.
  • You have created a workbench within a project.
  • You have opened a Jupyter notebook and it is running in your workbench environment.
  • You have installed the llama_stack_client version 0.3.1 or later in your workbench environment.
  • You have installed local object storage buckets and created connections, as described in Adding a connection to your project.
  • You have compiled to YAML a pipeline that includes a Docling transform, either one of the RAG demo samples or your own custom pipeline.
  • Your project quota allows between 500 millicores (0.5 CPU) and 4 CPU cores for the pipeline run.
  • Your project quota allows from 2 GiB up to 6 GiB of RAM for the pipeline run.
  • If you are using GPU acceleration, you have at least one NVIDIA GPU available.

Procedure

  1. In a new notebook cell, install the client:

    %pip install -q llama_stack_client
  2. In a new notebook cell, import Agent, AgentEventLogger, and LlamaStackClient:

    from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient
  3. In a new notebook cell, assign your deployment endpoint to the base_url parameter to create a LlamaStackClient instance:

    client = LlamaStackClient(base_url="<your deployment endpoint>")
  4. List the available models:

    models = client.models.list()
  5. Select the first LLM and the first embedding model:

    model_id = next(m.identifier for m in models if m.model_type == "llm")
    embedding_model = next(m for m in models if m.model_type == "embedding")
    embedding_model_id = embedding_model.identifier
    embedding_dimension = int(embedding_model.metadata.get("embedding_dimension", 768))
  6. Register a vector store (choose one option). Skip this step if your pipeline registers the store automatically.

    Example 3.7. Option 1: Inline Milvus Lite (embedded)

    vector_store_name = "my_inline_db"
    vector_store = client.vector_stores.create(
        name=vector_store_name,
        extra_body={
            "embedding_model": embedding_model_id,
            "embedding_dimension": embedding_dimension,
            "provider_id": "milvus",   # inline Milvus Lite
        },
    )
    vector_store_id = vector_store.id
    print(f"Registered inline Milvus Lite DB: {vector_store_id}")
    Note

    Inline Milvus Lite is best for development. Data durability and scale are limited compared to remote Milvus.

Example 3.8. Option 2: Remote Milvus (recommended for production)

vector_store_name = "my_remote_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "milvus-remote",  # remote Milvus provider
    },
)
vector_store_id = vector_store.id
print(f"Registered remote Milvus DB: {vector_store_id}")
Note

Ensure your LlamaStackDistribution includes MILVUS_ENDPOINT and MILVUS_TOKEN (gRPC :19530).

Example 3.9. Option 3: Inline FAISS (SQLite backend)

vector_store_name = "my_faiss_db"
vector_store = client.vector_stores.create(
    name=vector_store_name,
    extra_body={
        "embedding_model": embedding_model_id,
        "embedding_dimension": embedding_dimension,
        "provider_id": "faiss",   # inline FAISS provider
    },
)
vector_store_id = vector_store.id
print(f"Registered inline FAISS DB: {vector_store_id}")
Note

Inline FAISS (available in OpenShift AI 3.0 and later) is a lightweight, in-process vector store with SQLite-based persistence. It is best for local experimentation, disconnected environments, or single-node RAG deployments.

Important

If you are using the sample Docling pipeline from the RAG demo repository, the pipeline registers the vector store automatically and you can skip the previous step. If you are using your own pipeline, you must register the vector store yourself.

  1. In the OpenShift web console, import the YAML file containing your Docling pipeline into your project, as described in Importing a pipeline.
  2. Create a pipeline run to execute your Docling pipeline, as described in Executing a pipeline run. The pipeline run inserts your PDF documents into the vector store. If you run the Docling pipeline from the RAG demo samples repository, you can optionally customize the following parameters before starting the pipeline run:

    • base_url: The base URL to fetch PDF files from.
    • pdf_filenames: A comma-separated list of PDF filenames to download and convert.
    • num_workers: The number of parallel workers.
    • vector_store_id: The vector store identifier.
    • service_url: The Milvus service URL (only for remote Milvus).
    • embed_model_id: The embedding model to use.
    • max_tokens: The maximum tokens for each chunk.
    • use_gpu: Enable or disable GPU acceleration.
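To see how pdf_filenames and num_workers interact, the sketch below approximates the fan-out step of such a pipeline. It is a hypothetical illustration, not the sample pipeline's actual code:

```python
# Hypothetical sketch of the pipeline fan-out: parse the comma-separated
# pdf_filenames parameter and distribute the files across parallel workers.
def split_batches(pdf_filenames: str, num_workers: int) -> list[list[str]]:
    names = [n.strip() for n in pdf_filenames.split(",") if n.strip()]
    batches = [[] for _ in range(num_workers)]
    for i, name in enumerate(names):
        batches[i % num_workers].append(name)  # round-robin assignment
    return [b for b in batches if b]           # drop empty batches

print(split_batches("a.pdf, b.pdf, c.pdf, d.pdf, e.pdf", 2))
```

Each batch is then converted to Markdown by a separate Docling worker, so increasing num_workers shortens wall-clock conversion time for large document sets.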

Verification

  1. In your Jupyter notebook, query the LLM with a question that relates to the ingested content. For example:

    from llama_stack_client import Agent, AgentEventLogger
    import uuid
    
    rag_agent = Agent(
        client,
        model=model_id,
        instructions="You are a helpful assistant",
        tools=[
            {
                "name": "builtin::rag/knowledge_search",
                "args": {"vector_store_ids": [vector_store_id]},
            }
        ],
    )
    
    prompt = "What can you tell me about the birth of word processing?"
    print("prompt>", prompt)
    
    session_id = rag_agent.create_session(session_name=f"s{uuid.uuid4().hex}")
    
    response = rag_agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=session_id,
        stream=True,
    )
    
    for log in AgentEventLogger().log(response):
        log.print()
  2. Query chunks from the vector store:

    query_result = client.vector_io.query(
        vector_store_id=vector_store_id,
        query="what do you know about?",
    )
    print(query_result)

Verification

  • The pipeline run completes successfully in your project.
  • Document embeddings are stored in the vector store and are available for retrieval.
  • No errors or warnings appear in the pipeline logs or your notebook output.

3.10. About Llama Stack search types

Llama Stack supports keyword, vector, and hybrid search modes for retrieving context in retrieval-augmented generation (RAG) workloads. Each mode offers different tradeoffs in precision, recall, semantic depth, and computational cost.
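One common way a hybrid mode merges keyword and vector results is reciprocal rank fusion (RRF). The sketch below illustrates the general idea and is not Llama Stack's implementation:

```python
# Illustrative reciprocal rank fusion (RRF): merge a keyword ranking and a
# vector ranking into one hybrid ranking. Not Llama Stack's implementation.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents near the top of any ranking get the largest boost.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc2", "doc1", "doc3"]  # ranked by term match
vector_hits = ["doc1", "doc3", "doc4"]   # ranked by embedding similarity
print(rrf([keyword_hits, vector_hits]))
```

Documents that rank well in both lists (doc1 and doc3 here) rise to the top, which is why hybrid search can improve recall over either mode alone.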

3.10.1. Supported search modes

Llama Stack supports the following search modes:

  • Keyword search: matches documents that contain the query terms. It offers high precision for exact terminology but cannot match paraphrased content.
  • Vector search: retrieves documents whose embeddings are semantically similar to the query, even when the wording differs, at a higher computational cost.
  • Hybrid search: combines keyword and vector results into a single ranking, improving recall at the cost of running both searches.

3.10.2. Retrieval database support

Milvus is the supported retrieval database for Llama Stack. It currently provides vector search only; keyword and hybrid search are not supported.
