Introduction to Red Hat AI


Red Hat AI 2025

Red Hat AI is a portfolio of products and services that accelerates the development and deployment of AI solutions across hybrid cloud environments.

Abstract

With Red Hat AI, organizations have the flexibility and consistency to deploy and manage both predictive and generative AI models wherever it makes the most sense for their AI workload strategy.

Chapter 1. Introduction to Red Hat AI

Red Hat AI is a portfolio of products and services that accelerates time to market and reduces the operational cost of delivering artificial intelligence (AI) solutions across hybrid cloud environments. It enables organizations to efficiently tune small, fit-for-purpose models by using enterprise-relevant data and to flexibly deploy models where the data resides.

Red Hat AI allows organizations to manage and monitor the lifecycle of both predictive and generative AI (gen AI) models at scale, from single-server deployments to highly distributed platforms. The portfolio is powered by open source technologies and a partner ecosystem that focuses on performance, stability, and GPU support across various infrastructures.

With Red Hat AI, organizations have the flexibility and consistency to deploy and manage both predictive and gen AI models wherever it makes the most sense for their AI workload strategy. The portfolio provides the capabilities and services to support each stage of the AI adoption journey, from initial single-server deployments to highly scaled-out distributed platform architectures. It also supports various hardware accelerators, original equipment manufacturers (OEMs), and cloud providers to deliver a stable, optimized, and high-performance platform across various infrastructures.

Access to the latest innovations is complemented by Red Hat’s AI partner ecosystem, which offers an array of partner products and services that are tested, supported, and certified to perform with our technologies and help customers solve their business and technical challenges.

Red Hat AI includes:

  • Red Hat Enterprise Linux AI: A platform that allows you to develop enterprise applications on open source Large Language Models (LLMs).

    Red Hat Enterprise Linux AI can help customers at the beginning of their AI journey who have not yet defined their business use cases. The AI platform is built to develop, test, and run generative AI (gen AI) foundation models.

  • Red Hat OpenShift AI: An integrated MLOps platform that enables organizations to manage the artificial intelligence and machine learning (AI/ML) lifecycle across hybrid cloud and edge environments, helping teams bring models from experimentation to production faster.

    Red Hat OpenShift AI is built for customers who are ready to scale their AI applications. This AI platform can help manage the lifecycle of both predictive and gen AI models across hybrid cloud environments.

1.1. Red Hat Enterprise Linux AI

Red Hat Enterprise Linux AI (RHEL AI) empowers organizations to customize and contribute directly to Large Language Models (LLMs). RHEL AI is built from the InstructLab project, which uses a novel approach to fine-tuning called LAB (Large-Scale Alignment for Chatbots). The LAB method uses synthetic data generation (SDG) with a multi-phase training framework to produce high-quality fine-tuned LLMs.

You can install RHEL AI as a bootable Red Hat Enterprise Linux (RHEL) container image. Each image is configured for specific hardware accelerators, including NVIDIA, AMD, and Intel, and contains various inference-serving and fine-tuning tools.

You can use your own data to create seed files, generate synthetic data, and train a Granite starter model that you can deploy and interact with.
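
The seed files mentioned above follow the InstructLab taxonomy format, where each contribution is a `qna.yaml` file containing human-written examples. The following is an illustrative sketch only; the file path, task description, and example content are hypothetical, and you should verify the field names against the taxonomy schema version shipped with your RHEL AI release:

```yaml
# Hypothetical skill seed file placed in the InstructLab taxonomy, for example:
#   compositional_skills/writing/freeform/release_notes/qna.yaml
# Field names follow the InstructLab taxonomy skill schema (version 2);
# confirm against the schema in your release before contributing.
version: 2
task_description: Summarize engineering changelog entries as customer-facing release notes.
created_by: example-contributor
seed_examples:
  - question: |
      Summarize this changelog entry for customers:
      "Fixed race condition in the scheduler causing duplicate job runs."
    answer: |
      Resolved an issue where scheduled jobs could occasionally run twice.
      Job scheduling is now reliable.
```

During synthetic data generation, teacher models use seed examples like these as the pattern from which to produce a much larger training dataset.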

1.1.1. Key benefits of RHEL AI

1.1.1.1. Installation and deployment

  • Efficient installation using the RHEL bootable containerized operating system. The RHEL AI image contains various open source fine-tuning tools that enable you to efficiently customize the Granite starter models provided by Red Hat.
  • RHEL AI provides images for deploying on bare metal, AWS, Azure, IBM Cloud, and GCP.
  • You can purchase RHEL AI from the AWS and Azure marketplaces and deploy it on any of their GPU-enabled instances.
  • You can locally download, deploy, and chat with various models provided by Red Hat and IBM.

1.1.1.2. Model customization

  • You can use the Synthetic Data Generation (SDG) process, in which teacher LLMs use human-generated seed data to generate a large quantity of artificial data, which can then be used to train other LLMs.
  • You can use multi-phase training, a fine-tuning framework in which a model is trained and evaluated in separate phases, each of which saves intermediate model states called checkpoints. The fully fine-tuned model is the best-performing checkpoint from the final phase.
  • You can use various model evaluation benchmarks, including MMLU, MT_BENCH, and DK_BENCH.

1.2. Red Hat OpenShift AI

Red Hat OpenShift AI is a comprehensive MLOps platform designed to streamline AI/ML development and operations across hybrid cloud environments and at the edge. It fosters collaboration between data scientists and developers while ensuring IT oversight, enabling organizations to efficiently build, train, fine-tune, and deploy predictive and generative AI models.

Offered as a self-managed solution or a cloud service, OpenShift AI builds on the robust foundation of Red Hat OpenShift, providing a trusted platform for securely deploying AI-enabled applications and ML models at scale across public clouds, on-premises environments, and the edge.

By leveraging a broad technology ecosystem, Red Hat OpenShift AI accelerates AI/ML innovation, ensures operational consistency, enhances hybrid cloud flexibility, and upholds transparency, choice, and responsible AI practices.

1.2.1. Key benefits of OpenShift AI

  • Simplified AI adoption: Reduces the complexities of building and delivering AI models and applications that are accurate, reliable, and secure.
  • Enterprise-ready open source tools: Provides a fully supported, secure enterprise version of open source AI tools, ensuring seamless integration and interoperability.
  • Accelerated innovation: Gives organizations access to the latest AI technologies, helping them stay competitive in a rapidly evolving market.
  • Extensive partner ecosystem: Enables organizations to select best-of-breed technologies from a certified AI ecosystem, increasing flexibility and choice.

1.2.2. Features for data scientists, developers, and MLOps engineers

  • Integrated development environments (IDEs): Provides access to IDEs such as JupyterLab, with preconfigured libraries such as TensorFlow, PyTorch, and scikit-learn.
  • Data science pipelines: Supports end-to-end ML workflows by using containerized pipeline orchestration.
  • Accelerated computing: Integrated support for GPUs and Intel Gaudi AI accelerators to speed up model training and inference.
  • Model deployment and serving: Deploy models in a variety of environments and integrate them into applications by using APIs.
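
As an illustration of the API integration mentioned above, models served on OpenShift AI through a vLLM-based runtime typically expose an OpenAI-compatible REST endpoint. The sketch below builds a request body for such an endpoint; the route URL and model name are placeholders for this example, and you should verify the exposed route and schema for your own deployment:

```python
import json

# Hypothetical route created for a deployed model; the actual URL comes
# from your OpenShift AI model serving configuration.
ENDPOINT = "https://my-model-route.example.com/v1/chat/completions"

def build_chat_request(model: str, user_prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("granite-7b-starter", "Summarize our Q3 incident report.")
# Send with any HTTP client, for example:
#   requests.post(ENDPOINT, data=body,
#                 headers={"Authorization": f"Bearer {token}",
#                          "Content-Type": "application/json"})
```

Because the endpoint follows the OpenAI chat completion schema, existing client libraries and tooling that speak that API can usually be pointed at the model route with only a base-URL change.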

1.2.3. Features for IT operations administrators

  • Seamless OpenShift integration: Leverages OpenShift identity providers and resource allocation tools for secure and efficient user management.
  • Accelerator management: Enables efficient resource scheduling for GPU and AI accelerator usage.
  • Flexible deployment: Available as a self-managed solution or as a managed service in Red Hat OpenShift Dedicated and Red Hat OpenShift Service on AWS (ROSA).
  • Scalability and security: Provides enterprise-grade security features and governance controls for AI workloads.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.