Transform product discovery with AI recommendations
Integrate AI-driven product recommendations, automated review summarization, and enhanced search capabilities into an e-commerce storefront.
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Detailed description
This quickstart shows how an e-commerce storefront can seamlessly integrate AI-driven product recommendations, automated review summarization, and enhanced search capabilities to improve customer engagement and conversion rates.
- Product recommendations deliver personalized suggestions based on browsing history and product similarity, helping customers discover what they love.
- Review summaries distill countless reviews into actionable information, accelerating buying decisions.
- Intelligent search uses a hybrid approach, combining semantic and symbolic search to understand customer intent and make it easier to find the perfect item.
See how customers can get a better experience while business owners unlock higher click-through rates, better conversions, and stronger customer loyalty.
This quickstart is a complete, cloud-native product recommender system showcasing search, recommendations, reviews, and a Kubeflow training pipeline on OpenShift AI. Technical components include:
- Backend (FastAPI) with PostgreSQL + pgvector + Feast
- Frontend (React) with semantic text/image search
- Training pipeline (Kubeflow Pipelines) to build and push embeddings
- Helm charts for one-command install/uninstall on OpenShift
Architecture diagrams
- Feature Store: Feast (offline Parquet, online Postgres + pgvector)
- Embeddings: Two-tower training + BGE text encoding for search
- Search: Approximate Nearest Neighbor search over semantic vector embeddings
- Images: Synthetic catalog images; text-to-image generated assets
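As a rough illustration of the text-embedding step, the sketch below encodes item text with a BGE model via the sentence-transformers library. The specific model variant, item fields, and catalog entries are assumptions for illustration; the actual training pipeline may use different choices.

```python
# Minimal sketch: encode item text with a BGE model (illustrative only).
# The model variant and item fields are assumptions, not the pipeline's exact setup.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # hypothetical BGE variant

items = [
    {"item_id": "sku-001", "title": "Trail running shoes", "description": "Lightweight, grippy sole"},
    {"item_id": "sku-002", "title": "Espresso machine", "description": "15-bar pump, compact design"},
]

# Concatenate text fields and encode them into dense vectors for semantic search.
texts = [f"{it['title']}. {it['description']}" for it in items]
embeddings = model.encode(texts, normalize_embeddings=True)

for it, vec in zip(items, embeddings):
    print(it["item_id"], vec.shape)  # e.g. (384,) for a small BGE model
```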
Requirements
Prerequisites
- Access to an OpenShift cluster (with OpenShift AI installed)
- CLI tools: oc and helm
- Container registry access to push images (e.g., quay.io)
- Recommended OpenShift AI components enabled: DataSciencePipelines, Feast Operator, Model Registry, KServe/ModelMesh (Managed in your DataScienceCluster)
Minimum hardware requirements
- CPU: 6-8 cores
- Memory: 16-20Gi
- Storage: 150-200Gi
Minimum software requirements
- OpenShift 4.17.0+ cluster with OpenShift AI
- oc CLI 4.17.0+ and Helm 3.x
- Access to quay.io to pull container images
Required user permissions
- Namespace admin permissions in the target OpenShift project
- Container registry access to pull images from quay.io and registry.redhat.io
- OpenShift AI access to create DataSciencePipelines and Feast components
- Storage provisioning rights to create persistent volume claims (PVCs)
Deploy
- Clone and enter the repo
git clone https://github.com/<your-username>/product-recommender-system.git
cd product-recommender-system/helm
- Install
make install NAMESPACE=<namespace> minio.userId=<minio user Id> minio.password=<minio password> OLLAMA_MODEL=<ollama model name> MODEL_ENDPOINT=<http://model-url.com/v1>
This deploys: Postgres+pgvector, Feast registry/secret, backend, frontend, and the training pipeline server.
- Access the routes once the pods are Ready
Delete
make uninstall NAMESPACE=<ns>
Additional details
Configuration you’ll change most often
- Images
  - Backend+Frontend: frontendBackendImage in helm/product-recommender-system/values.yaml
  - Training: pipelineJobImage (training container image)
  - Core library (as a base in backend image): applicationImage (if used)
- LLM for review generation (optional)
  - Set llm.secret.data.LLM_API_KEY (or bring your own secret)
  - Backend env: USE_LLM_FOR_REVIEWS, LLM_API_BASE, LLM_MODEL, LLM_TIMEOUT (see the sketch after this list)
- Database/Feast integration
  - DB connection comes from the pgvector secret (created by the chart)
  - Feast TLS secret name: feast-feast-recommendation-registry-tls (mounted in backend & training)
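For orientation, here is a minimal sketch of how a backend process might read the LLM settings named above from its environment. The variable names come from the list; the defaults and the validation logic are assumptions for illustration, not the backend's actual implementation.

```python
# Sketch only: reading the LLM-related settings listed above from the environment.
# Default values and the enablement check are assumptions for illustration.
import os

USE_LLM_FOR_REVIEWS = os.getenv("USE_LLM_FOR_REVIEWS", "false").lower() == "true"
LLM_API_BASE = os.getenv("LLM_API_BASE", "")         # e.g. the MODEL_ENDPOINT passed at install time
LLM_MODEL = os.getenv("LLM_MODEL", "")               # model name served at that endpoint
LLM_TIMEOUT = float(os.getenv("LLM_TIMEOUT", "30"))  # seconds; default is an assumption
LLM_API_KEY = os.getenv("LLM_API_KEY", "")           # from the llm secret

if USE_LLM_FOR_REVIEWS and not (LLM_API_BASE and LLM_MODEL):
    raise RuntimeError("Review summarization is enabled but LLM_API_BASE/LLM_MODEL are not set")
```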
How search works
- Semantic Approximate Nearest Neighbor search over item text embeddings (BGE)
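A rough sketch of such an ANN lookup against pgvector is shown below, using psycopg. The table and column names (item_embeddings, embedding, item_id) are hypothetical, and the backend's actual queries may differ.

```python
# Sketch only: ANN lookup over item text embeddings stored in Postgres + pgvector.
# Table/column names (item_embeddings, embedding, item_id) are hypothetical.
import psycopg

def top_k_items(conn: psycopg.Connection, query_embedding: list[float], k: int = 10):
    # "<=>" is pgvector's cosine-distance operator; smaller distance = more similar.
    sql = """
        SELECT item_id, embedding <=> %s::vector AS distance
        FROM item_embeddings
        ORDER BY embedding <=> %s::vector
        LIMIT %s
    """
    vec = str(query_embedding)  # pgvector accepts a '[...]' text literal
    with conn.cursor() as cur:
        cur.execute(sql, (vec, vec, k))
        return cur.fetchall()
```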
If you add more modalities (e.g., category vectors), stack only equal-dimension tensors or compute per-field similarities and fuse (max/weighted) without stacking.
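For example, a minimal sketch of the "compute per-field similarities and fuse" approach might look like the following; the field names and weights are illustrative assumptions.

```python
# Sketch: compute per-field cosine similarities and fuse them, rather than
# stacking vectors of different dimensions. Field names and weights are illustrative.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fused_score(query_fields: dict[str, np.ndarray],
                item_fields: dict[str, np.ndarray],
                weights: dict[str, float]) -> float:
    # Only compare fields present on both sides; each field keeps its own dimension.
    sims = {f: cosine(query_fields[f], item_fields[f])
            for f in query_fields.keys() & item_fields.keys()}
    # Weighted fusion; taking max(sims.values()) instead would be the "max" variant.
    return sum(weights.get(f, 1.0) * s for f, s in sims.items())
```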
AI Review Summarization
- What it does: Uses an LLM to condense recent product reviews into a short, helpful summary covering sentiment, pros, cons, and an overall recommendation.
- Endpoint: GET /products/{product_id}/reviews/summarize returns AI-generated summary text.
- Notes:
- Requires at least 4 reviews to produce a summary; otherwise returns a friendly message.
- The review summary is generated in real time when you click the 'AI Summarize' button on the product page.
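To illustrate the endpoint above, a minimal client call might look like the sketch below. The BASE_URL host and product id are placeholders, and the exact response shape is an assumption.

```python
# Sketch: call the review summarization endpoint described above.
# BASE_URL is a placeholder for your backend route; adjust for your cluster.
import requests

BASE_URL = "https://<backend-route>"   # placeholder, e.g. the route created by the Helm chart
product_id = "sku-001"                 # hypothetical product id

resp = requests.get(f"{BASE_URL}/products/{product_id}/reviews/summarize", timeout=60)
resp.raise_for_status()
print(resp.text)  # AI-generated summary text (exact response shape may differ)
```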
Detailed docs live in component READMEs:
- recommendation-core/README.md
- recommendation-training/README.md
- backend/README.md
- frontend/README.md
- helm/README.md
Contributions
- Contributions are welcome via PRs; please update the component READMEs when changing behavior.