How to run and deploy LLMs using Red Hat OpenShift AI on a Red Hat OpenShift Service on AWS cluster

Learn how to install the Red Hat® OpenShift® AI (RHOAI) operator and Jupyter notebook, create an Amazon S3 bucket, and run the LLM model on a Red Hat OpenShift Service on AWS (ROSA) cluster.

Disclaimer: this content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

This learning path is for operations teams or system administrators.

Developers might want to check out how to create a natural language processing (NLP) application using Red Hat OpenShift AI on developers.redhat.com.



Overview

Large language models (LLMs) are a type of generative artificial intelligence (AI) focused on human language: they can understand, generate, and manipulate text in response to a wide range of tasks and prompts.


This learning path is an example of how to run and deploy LLMs on a Red Hat OpenShift Service on AWS (ROSA) cluster, our managed OpenShift platform on AWS, using Red Hat OpenShift AI (RHOAI), our OpenShift platform for managing the entire lifecycle of AI/ML projects. An Amazon S3 bucket stores the model output. In short, you will first install the RHOAI operator and a Jupyter notebook, then create the S3 bucket, and finally run the model.
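
For instance, the bucket that will hold the model output can be created ahead of time with a few lines of Python. The snippet below is only a minimal sketch, assuming boto3 is installed and AWS credentials for the account hosting the ROSA cluster are already configured; the region and bucket name are placeholder values, and the learning path itself walks through the full, supported procedure.

    import boto3

    # Placeholder values -- adjust for your environment
    # (these are assumptions, not part of the learning path).
    REGION = "us-east-2"
    BUCKET_NAME = "my-rhoai-model-output"

    s3 = boto3.client("s3", region_name=REGION)

    # us-east-1 is special-cased by S3: it rejects an explicit LocationConstraint.
    if REGION == "us-east-1":
        s3.create_bucket(Bucket=BUCKET_NAME)
    else:
        s3.create_bucket(
            Bucket=BUCKET_NAME,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )

    print(f"Created bucket {BUCKET_NAME} in {REGION}")

Typically, the same access key, secret key, region, and bucket name are what you supply later so the notebook environment can write the model output to S3.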

What do you need before starting?

What is included in this learning path?

  • Prerequisites
  • Installing RHOAI and the Jupyter notebook
  • Creating and granting access to an S3 bucket
  • Training the LLM model
  • Future research
  • Performing hyperparameter tuning

What will you get?

  • Experience running and deploying LLMs on a ROSA cluster
  • An understanding of how to use OpenShift AI to manage the lifecycle of AI/ML projects
  • Familiarity with using an Amazon S3 bucket to store model output