Red Hat AI Model Optimization Toolkit


Red Hat AI Inference 3.4

Compressing large language models with the LLM Compressor library

Red Hat AI Documentation Team

Abstract

Describes the LLM Compressor library and how to use it to optimize and compress large language models before inference.


