Red Hat OpenShift AI Self-Managed 3.2

What's New

Release notes

Features, enhancements, resolved issues, and known issues associated with this release

Get started

Get started with projects, workbenches, and pipelines in OpenShift AI

Get set up to create projects, launch workbenches, and deploy your first model on OpenShift AI

Plan

Prepare your platform and hardware for Red Hat AI

Review compatibility matrices, accelerator support, deployment targets, and update policy prior to installation

Choose a validated model for reliable serving

Explore the curated set of third‑party models validated for Red Hat AI products, ready for fast, reliable deployment

Install

Deploy or decommission OpenShift AI on your cluster

Install via Operator or CLI, enable required components, verify the deployment, and cleanly uninstall when needed
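
For example, once the Operator is installed, components are enabled through the DataScienceCluster custom resource. The sketch below uses the Kubernetes Python client; the datasciencecluster.opendatahub.io/v1 group and field names follow the 2.x-era schema and are assumptions to verify against the CRD shipped with your release.

```python
# A hedged sketch of enabling components after the Operator is installed,
# assuming the 2.x-era DataScienceCluster CRD schema; verify field names
# against the CRD on your cluster before using this.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

dsc = {
    "apiVersion": "datasciencecluster.opendatahub.io/v1",
    "kind": "DataScienceCluster",
    "metadata": {"name": "default-dsc"},
    "spec": {
        "components": {
            "dashboard":   {"managementState": "Managed"},
            "workbenches": {"managementState": "Managed"},
            "kserve":      {"managementState": "Managed"},
        }
    },
}

# DataScienceCluster is cluster-scoped, so no namespace argument is needed.
api.create_cluster_custom_object(
    group="datasciencecluster.opendatahub.io",
    version="v1",
    plural="datascienceclusters",
    body=dsc,
)
```

The same object can equally be applied as YAML with the CLI; the Python form is only a convenience for scripted installs.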

Deploy or decommission OpenShift AI in disconnected environments

Mirror required images to a private registry, install via Operator or CLI in your disconnected environment, enable required components, verify the deployment, and cleanly uninstall when needed

Upgrades are not supported in OpenShift AI 3.2

Currently, there is no upgrade path from OpenShift AI 2.x to 3.2. See our FAQ for more information.

Administer

Operate a governed, multi‑tenant AI platform at scale

Provision secure workbenches and custom images for teams

Use CRDs or the dashboard to publish images and provision workbenches with the resources your teams need
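
As a sketch of the CRD route, OpenShift AI workbenches have been represented as Kubeflow Notebook custom resources (kubeflow.org/v1); the namespace, image reference, and resource sizes below are illustrative assumptions, not values from this guide.

```python
# A hedged sketch of provisioning a workbench as a Kubeflow Notebook CR;
# the namespace, image, and resource values are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

workbench = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "Notebook",
    "metadata": {"name": "team-a-workbench", "namespace": "data-science-team-a"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "team-a-workbench",
                    # Hypothetical custom image published for the team:
                    "image": "image-registry.example.com/custom-notebooks/pytorch:latest",
                    "resources": {
                        "requests": {"cpu": "1", "memory": "4Gi"},
                        "limits":   {"cpu": "2", "memory": "8Gi"},
                    },
                }]
            }
        }
    },
}

api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="data-science-team-a",
    plural="notebooks", body=workbench,
)
```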

Administer OpenShift AI platform access, apps, and operations

Administer access, apps, resources, and accelerators; maintain logging, audit, and backups

Deliver consistent ML features to models with Feature Store

Use Feature Store to define, store, and serve reusable machine learning features to models
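
As a brief illustration, assuming the Feature Store follows the upstream Feast API, a feature definition might look like the sketch below; the entity, source file, and feature names are hypothetical.

```python
# A minimal sketch, assuming the upstream Feast API; the entity, parquet
# source, and feature names are illustrative, not from the guide.
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

driver = Entity(name="driver", join_keys=["driver_id"])

stats_source = FileSource(
    path="data/driver_stats.parquet",   # hypothetical offline source
    timestamp_field="event_timestamp",
)

driver_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="trips_today", dtype=Int64),
        Field(name="avg_rating", dtype=Float32),
    ],
    source=stats_source,
)
```

Once definitions like these are applied to the store, training and serving code can read the same features consistently, for example through the store's get_online_features call at inference time.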

Understand, control, and audit usage telemetry in OpenShift AI

Decide what usage data is collected, see what is included, and enable or disable telemetry

Provision hardware configurations and resources for data science projects

Enable supported hardware configurations for your data science workloads

Configure your model-serving platform

Configure your model-serving platform in Red Hat OpenShift AI Self-Managed

Manage and monitor models

Manage and monitor models in Red Hat OpenShift AI Self-Managed

Activate the LlamaStack operator for AI applications

Operate Llama Stack: activate the operator and expose OpenAI‑compatible RAG APIs
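
Because the exposed APIs are OpenAI-compatible, a standard OpenAI client should be able to call them once the operator is active. In the sketch below, the route, token, and model id are placeholders, not values from this documentation.

```python
# A hedged sketch of calling the OpenAI-compatible endpoint that Llama Stack
# exposes; base_url, api_key, and the model id are placeholders to replace
# with values from your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://llamastack.example.com/v1",  # hypothetical route
    api_key="REPLACE_WITH_TOKEN",
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # hypothetical model id
    messages=[{"role": "user", "content": "Summarize our deployment options."}],
)
print(response.choices[0].message.content)
```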

Configure user access, storage, and telemetry in OpenShift AI

As an administrator, configure user access, customize the dashboard, and manage specialized resources for data science and AI engineering projects

Provision and secure access to model registries

Use the OpenShift AI dashboard to create registries, set access with RBAC groups, and manage model and version lifecycle so teams can register, share, and promote models to serving with traceability

Choose production‑ready OpenShift AI APIs

Plan which APIs to build on and how to upgrade with minimal risk by mapping each OpenShift AI endpoint to a support tier that defines stability and deprecation timelines

Develop

Register, version, and promote models with the model registry

Store, version, and promote models with metadata for cross‑project sharing and traceability
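
As an illustration of the register-and-version flow, the upstream model-registry Python client supports it directly; the server address, author, model URI, and version values below are placeholders, and the client API may differ between releases.

```python
# A hedged sketch using the upstream model-registry Python client
# (pip install model-registry); all addresses, names, and versions here
# are placeholders, and the API may differ between releases.
from model_registry import ModelRegistry

registry = ModelRegistry(
    "https://model-registry.example.com",  # hypothetical server address
    author="data-science-team-a",
)

registry.register_model(
    "fraud-detector",                                # model name
    "s3://models/fraud-detector/1.0.0/model.onnx",   # hypothetical artifact URI
    version="1.0.0",
    model_format_name="onnx",
    model_format_version="1",
    description="Baseline fraud detection model",
)
```

Registering the artifact's storage URI alongside version metadata is what lets other projects trace, share, and promote the exact model that was trained.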

Discover, evaluate, register, and deploy models from the model catalog

Use the model catalog to discover, evaluate, register, and deploy models for rapid customization and testing

Deploy the RAG stack for projects

Enable Llama Stack, GPUs, and vLLM; ingest data into a vector store; and expose secure endpoints

Experiment with models in the gen AI playground

Experiment with models in the gen AI playground in Red Hat OpenShift AI Self-Managed

Accelerate data processing and training with distributed workloads

Distribute data and ML jobs for faster results, larger datasets, and GPU‑aware auto‑scaling and monitoring
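
For instance, the CodeFlare SDK used for distributed workloads can request a Ray cluster from a notebook. Parameter names have changed across codeflare-sdk releases, so treat the configuration below as a sketch to validate against your installed version; the cluster name and namespace are placeholders.

```python
# A hedged sketch using the CodeFlare SDK; parameter names vary across
# codeflare-sdk releases, and authentication to the OpenShift cluster is
# assumed to already be configured in the workbench.
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="ray-train",
    namespace="data-science-team-a",  # hypothetical project namespace
    num_workers=2,
))

cluster.up()          # request the Ray cluster from the queue
cluster.wait_ready()  # block until the head and workers are running
print(cluster.details())
cluster.down()        # release the resources when the job finishes
```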

Connect your workbench to S3-compatible object storage

Create a connection, configure an S3 client, and list, read, write, and copy objects from notebooks
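
A minimal notebook cell might look like the sketch below, assuming the connection is injected as the usual AWS_* environment variables; the bucket layout and object keys are illustrative.

```python
# A minimal sketch, assuming the workbench connection is injected as the
# standard AWS_* environment variables; object keys are illustrative.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
bucket = os.environ["AWS_S3_BUCKET"]

s3.upload_file("results.csv", bucket, "experiments/results.csv")   # write
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):  # list
    print(obj["Key"])
s3.download_file(bucket, "experiments/results.csv", "copy.csv")    # read
s3.copy_object(                                                    # copy
    Bucket=bucket,
    Key="experiments/archive/results.csv",
    CopySource={"Bucket": bucket, "Key": "experiments/results.csv"},
)
```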

Organize projects, collaborate in workbenches, and deploy models

Organize projects, collaborate in workbenches, build notebooks, train and deploy models, and automate pipelines

Use the Red Hat data science IDE images effectively

Launch a workbench, pick an IDE, and develop with prebuilt images or custom environments

Build, schedule, and track machine learning pipelines

Define KFP‑based pipelines, version and schedule runs, and track artifacts in S3‑compatible storage
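
As a small example of the KFP style this guide describes, the sketch below defines a one-step pipeline and compiles it to YAML for upload; the component body and output filename are illustrative.

```python
# A minimal KFP v2 sketch: one component, one pipeline, compiled to YAML
# that can be uploaded to the pipeline server. Logic is illustrative only.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def train(epochs: int) -> str:
    return f"trained for {epochs} epochs"

@dsl.pipeline(name="demo-training-pipeline")
def pipeline(epochs: int = 5):
    train(epochs=epochs)

compiler.Compiler().compile(pipeline, "demo-training-pipeline.yaml")
```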

Enable and manage connected applications from the OpenShift AI dashboard

Enable applications, connect with keys, remove unused tiles, and access Jupyter from the dashboard

Train

Prototype and customize AI applications collaboratively

Customize models to build generative AI applications

Customize AI models for your domain-specific use case, from setting up your development environment to building and deploying models for use in generative AI applications

Evaluate

Ensure trustworthy, compliant AI through evaluation and safety guardrails

Evaluate AI systems with LM-Eval

Configure LMEvalJobs, select tasks, run evaluations, and retrieve metrics to compare model performance
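
As a sketch, an evaluation can be submitted as an LMEvalJob custom resource. The trustyai.opendatahub.io/v1alpha1 group/version and the spec fields below are a best-effort reading of the upstream TrustyAI documentation; verify them against the CRD on your cluster, and treat the model and task names as placeholders.

```python
# A hedged sketch of submitting an LMEvalJob custom resource; group/version
# and spec fields are assumptions to verify against the CRD on your cluster.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

job = {
    "apiVersion": "trustyai.opendatahub.io/v1alpha1",
    "kind": "LMEvalJob",
    "metadata": {"name": "arc-easy-eval", "namespace": "data-science-team-a"},
    "spec": {
        "model": "hf",
        "modelArgs": [{"name": "pretrained", "value": "google/flan-t5-base"}],
        "taskList": {"taskNames": ["arc_easy"]},
        "logSamples": True,
    },
}

api.create_namespaced_custom_object(
    group="trustyai.opendatahub.io", version="v1alpha1",
    namespace="data-science-team-a", plural="lmevaljobs", body=job,
)
```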

Maintain safety

Enable AI safety with Guardrails

Enable Guardrails to detect and filter unsafe content in the inputs and outputs of your OpenShift AI models

Monitor

Monitor your AI systems

Monitor model bias and data drift by configuring metrics, thresholds, and visualizations in OpenShift AI

Deploy

Deploy and operate model services

Deploy large models using the single-model serving platform (KServe RawDeployment)

Deploy models with KServe: choose RawDeployment or Knative, set resources and runtimes, and expose authenticated endpoints
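
Once a model is deployed and its endpoint secured, a client can call it over the Open Inference Protocol (v2) with a bearer token, as in the hedged sketch below; the route, model name, tensor shape, and token are placeholders for values from your deployment.

```python
# A minimal sketch of an authenticated call using the Open Inference
# Protocol (v2) that KServe serves; URL, model name, token, and the
# input tensor are placeholders.
import requests

URL = "https://my-model-project.apps.example.com/v2/models/my-model/infer"
TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"

payload = {
    "inputs": [{
        "name": "input-0",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [0.1, 0.2, 0.3, 0.4],
    }]
}

resp = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["outputs"])
```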

Learn
