Red Hat OpenShift AI Self-Managed 3.0
What's New
Release notes
Features, enhancements, resolved issues, and known issues associated with this release
Get started
Get started with projects, workbenches, and pipelines in OpenShift AI
Get set up to create projects, launch workbenches, and deploy your first model on OpenShift AI
Learn
Deploy
Deploy and operate model services
Deploy large models using the single-model serving platform (KServe RawDeployment)
Deploy models with KServe—choose RawDeployment or Knative, set resources and runtimes, and expose authenticated endpoints
Deploy large models using the model-serving platform
Deploy models, set resources and runtimes, and expose authenticated endpoints
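The single-model serving entries above both revolve around KServe's InferenceService resource. A minimal sketch, assuming the Kubernetes Python client and illustrative values (the namespace, model name, storage URI, and resource sizes are hypothetical):

```python
# Sketch: create a KServe InferenceService in RawDeployment mode.
# All names and sizes below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a pod

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {
        "name": "my-llm",              # hypothetical model name
        "namespace": "my-project",     # hypothetical data science project
        # Ask KServe for plain Deployments rather than Knative services.
        "annotations": {"serving.kserve.io/deploymentMode": "RawDeployment"},
    },
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "vLLM"},     # assumed runtime format
                "storageUri": "s3://models/my-llm",  # hypothetical location
                "resources": {
                    "requests": {"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                    "limits": {"nvidia.com/gpu": "1"},
                },
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="my-project",
    plural="inferenceservices",
    body=inference_service,
)
```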
Monitor
Monitoring your AI systems
Monitor model bias and data drift by configuring metrics, thresholds, and visualizations in OpenShift AI
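As a rough illustration of configuring a bias metric, the sketch below schedules a Statistical Parity Difference (SPD) request against a TrustyAI service route. The route, token, endpoint path, and payload fields are modeled on TrustyAI's documented request format but should be treated as assumptions and verified for your release:

```python
# Sketch: schedule an SPD bias metric with the TrustyAI service REST API.
# Route, token, field names, and values are illustrative assumptions.
import requests

TRUSTYAI_ROUTE = "https://trustyai-service-my-project.apps.example.com"  # hypothetical
TOKEN = "sha256~..."  # an OpenShift user token with access to the project

payload = {
    "modelId": "credit-model",     # hypothetical deployed model
    "protectedAttribute": "gender",
    "privilegedAttribute": 1.0,
    "unprivilegedAttribute": 0.0,
    "outcomeName": "approved",
    "favorableOutcome": 1.0,
    "batchSize": 5000,
}

resp = requests.post(
    # Assumed path; older releases expose /metrics/spd/request instead.
    f"{TRUSTYAI_ROUTE}/metrics/group/fairness/spd/request",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the service returns an id for the scheduled metric
```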
Maintain Safety
Ensuring AI safety with guardrails
Orchestrate detectors to filter LLM inputs/outputs, auto‑configure security, and expose guarded endpoints
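To make the guarded-endpoint idea concrete, here is a hedged sketch of calling a chat endpoint through the Guardrails Orchestrator. The route, detector names, and payload shape are assumptions modeled on the orchestrator's chat-completions detection API; verify the path and fields for your release:

```python
# Sketch: send a chat request through a guarded orchestrator endpoint.
# The route and detector configuration are hypothetical.
import requests

ORCH_ROUTE = "https://guardrails-orchestrator-my-project.apps.example.com"  # hypothetical

payload = {
    "model": "my-llm",  # hypothetical served model
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
    "detectors": {
        "input": {"regex": {"regex": ["email"]}},  # hypothetical input detector
        "output": {"hap": {}},                     # hypothetical output detector
    },
}

resp = requests.post(
    f"{ORCH_ROUTE}/api/v2/chat/completions-detection",  # assumed endpoint path
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # detections are returned alongside the completion
```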
Evaluate
Ensure trustworthy, compliant AI through evaluation and safety guardrails
Evaluating AI systems with LM-Eval
Configure LMEvalJobs, select tasks, run evaluations, and retrieve metrics to compare model performance
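A minimal sketch of submitting an LMEvalJob with the Kubernetes Python client; the model, task, namespace, and job name are illustrative assumptions:

```python
# Sketch: create a TrustyAI LMEvalJob custom resource.
# Model, task, and namespace values are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

lmeval_job = {
    "apiVersion": "trustyai.opendatahub.io/v1alpha1",
    "kind": "LMEvalJob",
    "metadata": {"name": "eval-flan-arc", "namespace": "my-project"},
    "spec": {
        "model": "hf",  # evaluate a Hugging Face model
        "modelArgs": [{"name": "pretrained", "value": "google/flan-t5-base"}],
        "taskList": {"taskNames": ["arc_easy"]},  # lm-evaluation-harness task
        "logSamples": True,
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="trustyai.opendatahub.io",
    version="v1alpha1",
    namespace="my-project",
    plural="lmevaljobs",
    body=lmeval_job,
)
# When the job completes, its metrics land in the resource status, e.g.:
#   oc get lmevaljob eval-flan-arc -o jsonpath='{.status.results}'
```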
Train
Prototype and customize AI applications collaboratively
Customize models to build generative AI applications
Customize AI models for your domain-specific use case, from setting up your development environment to building and deploying models for generative AI applications
Develop
Register, version, and promote models with the model registry
Store, version, and promote models with metadata for cross‑project sharing and traceability
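For a feel of the registration flow, here is a minimal sketch using the model-registry Python client; the server address, author, model name, and storage details are assumptions:

```python
# Sketch: register a model version with the model-registry Python client.
# Server address, author, and model details are illustrative assumptions.
from model_registry import ModelRegistry

registry = ModelRegistry(
    server_address="https://registry.example.com",  # hypothetical registry route
    port=443,
    author="alice",  # authentication is typically supplied via a user token
)

registry.register_model(
    "fraud-detector",                  # hypothetical model name
    "s3://models/fraud-detector/v1",   # hypothetical artifact URI
    version="1.0.0",
    model_format_name="onnx",
    model_format_version="1",
    description="Baseline fraud model for cross-project sharing",
)
```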
Discover, evaluate, register, and deploy models from the model catalog
Use the model catalog to discover, evaluate, register, and deploy models for rapid customization and testing
Deploy the RAG stack for data science projects
Enable Llama Stack, GPUs, and vLLM; ingest data into a vector store; and expose secure endpoints
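As a sketch of the ingestion side, the snippet below registers a vector store and inserts documents through the Llama Stack client; the base URL, embedding model, and document contents are assumptions:

```python
# Sketch: ingest documents into a Llama Stack vector store for RAG.
# The service URL, embedding model, and documents are illustrative assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://llamastack-service:8321")  # hypothetical

client.vector_dbs.register(
    vector_db_id="product-docs",
    embedding_model="all-MiniLM-L6-v2",  # assumed embedding model
    embedding_dimension=384,
)

client.tool_runtime.rag_tool.insert(
    documents=[{
        "document_id": "doc-1",
        "content": "OpenShift AI supports single-model serving with KServe.",
        "mime_type": "text/plain",
        "metadata": {},
    }],
    vector_db_id="product-docs",
    chunk_size_in_tokens=512,
)

# Retrieve chunks relevant to a question.
result = client.tool_runtime.rag_tool.query(
    content="How do I serve a single model?",
    vector_db_ids=["product-docs"],
)
print(result)
```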
Accelerate data processing and training with distributed workloads
Distribute data and ML jobs for faster results, larger datasets, and GPU‑aware auto‑scaling and monitoring
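The distributed workloads stack ultimately hands your code to a framework such as Ray. Here is a minimal Ray fan-out of the kind you would run on a cluster provisioned through OpenShift AI; the cluster address is an assumption:

```python
# Sketch: fan tasks out across a Ray cluster and gather the results.
import ray

ray.init(address="auto")  # connect to an existing cluster; ray.init() runs locally

@ray.remote(num_cpus=1)
def square(x: int) -> int:
    return x * x

# 100 tasks scheduled across the cluster's workers.
futures = [square.remote(i) for i in range(100)]
print(sum(ray.get(futures)))
```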
Experimenting with RAG in the AI playground
Experiment with RAG in the AI playground using models from your catalog
Connect your workbench to S3-compatible object storage
Create a connection, configure an S3 client, and list, read, write, and copy objects from notebooks
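A minimal sketch with boto3, assuming the environment variables that an OpenShift AI connection typically injects into a workbench (AWS_S3_ENDPOINT, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_S3_BUCKET); the object keys are illustrative:

```python
# Sketch: list, write, read, and copy objects in S3-compatible storage.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
bucket = os.environ["AWS_S3_BUCKET"]

# List objects in the bucket.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])

# Write, read back, and copy a small object (hypothetical keys).
s3.put_object(Bucket=bucket, Key="models/readme.txt", Body=b"hello")
body = s3.get_object(Bucket=bucket, Key="models/readme.txt")["Body"].read()
s3.copy({"Bucket": bucket, "Key": "models/readme.txt"}, bucket, "backup/readme.txt")
```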
Organize projects, collaborate in workbenches, and deploy models
Organize projects, collaborate in workbenches, build notebooks, train/deploy models, and automate pipelines
Build, schedule, and track machine learning pipelines
Define KFP‑based pipelines, version and schedule runs, and track artifacts in S3‑compatible storage
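A minimal KFP v2 pipeline of the kind that documentation describes; the component logic is illustrative, and the compiled YAML can be imported from the dashboard or uploaded with the KFP client:

```python
# Sketch: define and compile a two-step KFP v2 pipeline.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def add(a: float, b: float) -> float:
    return a + b

@dsl.component(base_image="python:3.11")
def double(x: float) -> float:
    return 2 * x

@dsl.pipeline(name="demo-pipeline")
def demo_pipeline(a: float = 1.0, b: float = 2.0):
    summed = add(a=a, b=b)
    double(x=summed.output)  # chain the second step on the first step's output

# Produces a YAML file you can import as a pipeline definition.
compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```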
Use the Red Hat data science IDE images effectively
Launch a workbench, pick an IDE, and develop with prebuilt images or custom environments
Enable and manage connected applications from the OpenShift AI dashboard
Enable applications, connect with keys, remove unused tiles, and access Jupyter from the dashboard
Administer
Operate a governed, multi‑tenant AI platform at scale
Provision secure workbenches and custom images for teams
Use CRDs or the dashboard to publish custom images and provision workbenches with the resources teams need
Administer OpenShift AI platform access, apps, and operations
Administer access, apps, resources, and accelerators; maintain logging, audit, and backups
Deliver consistent ML features to models with Feature Store
Use Feature Store to define, store, and serve reusable machine learning features to models
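OpenShift AI's Feature Store builds on the Feast project, so feature retrieval looks like the sketch below; the feature names and entity key come from a hypothetical feature repository:

```python
# Sketch: fetch online features with Feast. Feature and entity names
# are illustrative; they are defined in your feature repository.
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # directory containing feature_store.yaml

features = store.get_online_features(
    features=[
        "driver_stats:conv_rate",        # hypothetical feature_view:feature
        "driver_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],   # hypothetical entity key
).to_dict()
print(features)
```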
Understand, control, and audit usage telemetry in OpenShift AI
Decide what usage data is collected, review what’s included, and enable or disable telemetry
Provision hardware configurations and resources for data science projects
Enable supported hardware configurations for your data science workloads
Configure single‑ and multi‑model serving for your cluster
Enable single‑model, multi‑model, or NVIDIA NIM serving platforms with serving runtimes and deployment modes
Activate the LlamaStack operator for AI applications
Operate Llama Stack: activate the operator and expose OpenAI‑compatible RAG APIs
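A hedged sketch of the activation step, assuming the operator is toggled through the DataScienceCluster components; the component key llamastackoperator and the resource name default-dsc are assumptions to verify against your release:

```python
# Sketch: set the Llama Stack operator component to Managed via a merge patch.
# The component key and resource name are assumptions.
from kubernetes import client, config

config.load_kube_config()

client.CustomObjectsApi().patch_cluster_custom_object(
    group="datasciencecluster.opendatahub.io",
    version="v1",
    plural="datascienceclusters",
    name="default-dsc",  # assumed DataScienceCluster name
    body={"spec": {"components": {"llamastackoperator": {"managementState": "Managed"}}}},
)
```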
Configure user access, storage, and telemetry in OpenShift AI
As an administrator, configure user access, customize the dashboard, and manage specialized resources for data science and AI engineering projects
Enable the model registry to track, version, and deploy models
Enable the model registry so teams can register models and versions, capture metadata and provenance, and promote approved versions to serving with consistent governance
Provision and secure access to model registries
Use the OpenShift AI dashboard to create registries, set access with RBAC groups, and manage model and version lifecycle so teams can register, share, and promote models to serving with traceability
Choose production‑ready OpenShift AI APIs
Plan which APIs to build on and how to upgrade with minimal risk by mapping each OpenShift AI endpoint to a support tier that defines stability and deprecation timelines
Configure your model-serving platform
Enable the model-serving platforms with serving runtimes
Install
Deploy or decommission OpenShift AI on your cluster
Install via Operator or CLI, enable required components, verify the deployment, and cleanly uninstall when needed
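For the CLI path, installation goes through OLM. A minimal sketch creating the operator Subscription with the Kubernetes Python client, assuming the redhat-ods-operator namespace and an OperatorGroup already exist; the channel name is an assumption to verify for your release:

```python
# Sketch: subscribe to the OpenShift AI operator through OLM.
# Channel and names should be verified against the catalog for your release.
from kubernetes import client, config

config.load_kube_config()

subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "rhods-operator", "namespace": "redhat-ods-operator"},
    "spec": {
        "channel": "fast",            # assumed channel for a fast release
        "name": "rhods-operator",     # operator package name
        "source": "redhat-operators",
        "sourceNamespace": "openshift-marketplace",
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1alpha1",
    namespace="redhat-ods-operator",
    plural="subscriptions",
    body=subscription,
)
```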
Deploy or decommission OpenShift AI in disconnected environments
Mirror required images to a private registry, install via Operator or CLI, enable required components, verify the deployment, and cleanly uninstall when needed
Upgrades are not supported in OpenShift AI 3.0
Because the OpenShift AI 3.0 release introduces significant changes and is a fast release, we want to ensure a smooth migration path from the 2.x stable stream (for example, 2.25) to the first stable 3.x release. As a result, upgrading from 2.x to 3.0 is not available.
Plan
Prepare your platform and hardware for Red Hat AI
Review compatibility matrices, accelerator support, deployment targets, and update policy prior to installation
Choose a validated model for reliable serving
Explore the curated set of third‑party models validated for Red Hat AI products, ready for fast, reliable deployment