How to run and deploy LLMs using Red Hat OpenShift AI on a Red Hat OpenShift Service on AWS cluster
Learn how to install the Red Hat® OpenShift® AI (RHOAI) operator and Jupyter notebook, create an Amazon S3 bucket, and run the LLM model on a Red Hat OpenShift Service on AWS (ROSA) cluster.
Disclaimer: This content is authored by Red Hat experts but has not yet been tested on every supported configuration.
This learning path is for operations teams or system administrators.
Developers might want to check out how to create a natural language processing (NLP) application using Red Hat OpenShift AI on developers.redhat.com.
Prerequisites for deploying LLMs using Red Hat OpenShift AI on a Red Hat OpenShift Service on AWS cluster
Before beginning, you’ll need a Red Hat® OpenShift® Service on AWS (ROSA) cluster. If you don’t have one already, visit our learning path Getting Started with Red Hat OpenShift Service on AWS (ROSA), accessible with the button below, for instructions on deploying a cluster using the console interface.
What will you learn?
- How to deploy a ROSA cluster
- How to access the cluster console
What do you need before starting?
Steps for meeting the prerequisites
- Deploy your ROSA cluster.
- In this learning path, we’ll use a single-AZ ROSA 4.15.10 cluster with m5.4xlarge nodes and auto-scaling enabled up to 10 nodes. The cluster has 64 vCPUs and ~278Gi of memory.
- Note that ROSA and RHOAI also support GPUs; however, for simplicity, we’ll use only CPUs for compute in this guide.
- Be sure that you have admin access to the cluster.
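If you prefer the CLI over the console, a cluster like the one described above can be created with the `rosa` command-line tool. This is a sketch only; the cluster name `my-rosa-cluster` and the replica counts are placeholder values you should adjust for your environment:

```shell
# Create a single-AZ ROSA 4.15.10 cluster with m5.4xlarge compute nodes
# and autoscaling up to 10 nodes (cluster name is a placeholder).
rosa create cluster \
  --cluster-name my-rosa-cluster \
  --version 4.15.10 \
  --compute-machine-type m5.4xlarge \
  --enable-autoscaling \
  --min-replicas 2 \
  --max-replicas 10

# Once the cluster is ready, verify that your logged-in user has
# cluster-admin-level access (should print "yes"):
oc auth can-i '*' '*' --all-namespaces
```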
- Access the cluster console.
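If you are already logged in with `oc`, you can retrieve the console URL from the command line instead of navigating through the ROSA web interface (the cluster name below is a placeholder):

```shell
# Print the web console URL for the cluster you are logged in to:
oc whoami --show-console

# Alternatively, look up the console URL from the cluster description:
rosa describe cluster --cluster my-rosa-cluster
```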
You are now ready to install Red Hat OpenShift AI (RHOAI) and Jupyter notebook in the next resource.