Red Hat AI Inference Server 3.4
Early Access
Release Notes: 3.4 Early Access (EA1)
Overview of the new features included in the 3.4 Early Access (EA1) release
Getting started
Getting started with Red Hat AI Inference Server
Product life cycle
Understand the product life cycle to plan deployments and support for applications that use the product
Plan
Deploying Red Hat AI Inference Server in a disconnected environment
Deploy Red Hat AI Inference Server in a disconnected environment by using OpenShift Container Platform and a mirror image registry
Supported product and hardware configurations
Supported product and hardware configurations for deploying Red Hat AI software
Validated models
Red Hat AI validated models
Inference operations
Inference serving language models in OCI-compliant model containers
Inference serving OCI-compliant models with Red Hat AI Inference Server
Inference serving Mistral 3 models
Inference serving Mistral 3 models with Red Hat AI Inference Server
Inference serving geospatial foundation models
Inference serving geospatial foundation models with Red Hat AI Inference Server
Extending Red Hat AI Inference Server with tool calling capabilities
Configuring tool calling and chat templates for Red Hat AI Inference Server
vLLM server arguments
Server arguments for running Red Hat AI Inference Server
Related products
Red Hat Enterprise Linux AI
Switch to the Red Hat Enterprise Linux AI documentation
Red Hat OpenShift AI
Switch to the Red Hat OpenShift AI documentation
Red Hat AI Enterprise
Switch to the Red Hat AI Enterprise documentation
Additional resources
Red Hat AI Foundations
Explore no-cost courses to boost your AI knowledge and get hands-on experience with Red Hat AI products while earning a certificate
Red Hat AI learning hub
Explore curated learning resources to help you build skills and accomplish key tasks with Red Hat AI products and services