Chapter 3. Validated models for use with IBM Z and IBM Spyre AI accelerators


The following large language models are supported for IBM Z systems with IBM Spyre AI accelerators.

Note

IBM Spyre AI accelerator cards support FP16 format model weights only. For compatible models, the Red Hat AI Inference Server inference engine automatically converts weights to FP16 at startup. No additional configuration is needed.

Table 3.1. Decoder models for use with IBM Spyre AI accelerators
Model                      Hugging Face model card

granite-3.3-8b-instruct    ibm-granite/granite-3.3-8b-instruct
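As a minimal sketch of serving the validated model, the following assumes the Red Hat AI Inference Server Spyre container image exposes the standard vLLM command-line entrypoint (the exact entrypoint and any Spyre-specific flags are assumptions here, not confirmed by this chapter):

```shell
# Hypothetical invocation inside the Spyre container image.
# The model identifier matches the Hugging Face model card above.
# No dtype flag is passed: per the note in this chapter, the inference
# engine converts compatible model weights to FP16 automatically at startup.
vllm serve ibm-granite/granite-3.3-8b-instruct
```

Because the pre-built Granite models are tied to fixed configurations for card count, batch size, and context sizes, any additional tuning flags should come from the product documentation rather than generic vLLM defaults.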

Important

Pre-built IBM Granite models run with the specific Python packages that are included in the Red Hat AI Inference Server Spyre container image. The models are tied to fixed configurations for Spyre card count, batch size, and input/output context sizes.

Updating or replacing Python packages in the Red Hat AI Inference Server Spyre container image is not supported.
