Chapter 1. About OpenShift Container Platform deployments
You can deploy Red Hat AI Inference Server in OpenShift Container Platform clusters that have supported AI accelerators and full internet access.
Deploying Red Hat AI Inference Server in OpenShift Container Platform requires installing the Node Feature Discovery (NFD) Operator to detect hardware capabilities, then installing the NVIDIA GPU Operator or AMD GPU Operator, as appropriate for the underlying host AI accelerators that are available in the cluster. After the Operators are configured, you can deploy inference workloads by using Red Hat AI Inference Server container images.
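As a sketch of the first step, the NFD Operator is typically installed through Operator Lifecycle Manager by applying a Namespace, OperatorGroup, and Subscription. The channel, package name, and catalog source below are common defaults and are assumptions here; they can differ by cluster version, so verify them in OperatorHub before applying:

```yaml
# Hedged sketch: install the Node Feature Discovery (NFD) Operator via OLM.
# The channel, package name, and catalog source are assumed defaults and
# may differ on your cluster version.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  targetNamespaces:
  - openshift-nfd
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: stable
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

You would apply the manifest with `oc apply -f <file>.yaml` and confirm the Operator pod is running with `oc get pods -n openshift-nfd`. The NVIDIA GPU Operator or AMD GPU Operator follows the same Namespace, OperatorGroup, and Subscription pattern in its own namespace.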