Chapter 1. About OpenShift Container Platform deployments


You can deploy Red Hat AI Inference Server in OpenShift Container Platform clusters that have supported AI accelerators and full access to the internet.

Note

Install the NVIDIA GPU Operator or AMD GPU Operator as appropriate for the underlying host AI accelerators that are available in the cluster.

Deploying Red Hat AI Inference Server in OpenShift Container Platform requires installing the Node Feature Discovery (NFD) Operator to detect hardware capabilities, and then installing the appropriate GPU operator for your accelerator hardware. After the operators are configured, you can deploy inference workloads by using Red Hat AI Inference Server container images.
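As a rough illustration of the operator-then-workload flow described above, the following is a minimal sketch of a Subscription for the NFD Operator followed by a pod that requests a GPU and runs an inference server image. The channel, namespace, image reference, and model name are assumptions for illustration, not verified values; check the official operator and product documentation for the exact resources your cluster needs.

```yaml
# Illustrative sketch only: channel, namespaces, image tag, and model name
# are assumptions and may differ in your environment.
# Apply with: oc apply -f <file>
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd        # assumed target namespace
spec:
  channel: stable                 # assumed update channel
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# Minimal inference pod sketch that requests one NVIDIA GPU after the
# NVIDIA GPU Operator has been installed and configured.
apiVersion: v1
kind: Pod
metadata:
  name: rhaiis-example
spec:
  containers:
  - name: inference-server
    image: registry.redhat.io/rhaiis/vllm-cuda-rhel9:latest  # placeholder image reference
    args: ["--model", "<model-name>"]                        # placeholder model
    resources:
      limits:
        nvidia.com/gpu: "1"       # extended resource exposed by the GPU operator
```

On clusters with AMD accelerators, the resource name in the pod's limits would instead be the one exposed by the AMD GPU Operator rather than `nvidia.com/gpu`.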
