Chapter 3. Customizing your environment using Operators and Operator Bundles
You can customize the OpenShift Container Platform deployment by selecting one or more Operators or Operator bundles during the installation.
3.1. Customizing with Operators
Operators are used to package, deploy, and manage services and applications. Before starting the installation, familiarize yourself with the Assisted Installer Operators, including their prerequisites and limitations. If you require advanced options, install the Operators after you have installed the cluster.
The additional requirements specified below apply to each Operator individually. If you select more than one Operator, or if the Assisted Installer automatically selects an Operator due to dependencies, the total resource requirement is the sum of the individual requirements for each Operator.
Additional resources
3.1.1. OpenShift Virtualization Operator
You can deploy OpenShift Virtualization to perform the following tasks:
- Create and manage Linux and Windows virtual machines (VMs).
- Run pod and VM workloads alongside each other in a cluster.
- Connect to VMs through a variety of consoles and CLI tools.
- Import and clone existing VMs.
- Manage network interface controllers and storage drives attached to VMs.
- Live migrate VMs between nodes.
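For context, a VM in OpenShift Virtualization is defined declaratively as a VirtualMachine custom resource. The following is a minimal sketch; the name, sizing, and container disk image are placeholders, so check the OpenShift Virtualization documentation for the fields supported in your release.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                # placeholder name
spec:
  running: true                   # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi           # example sizing
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example container disk image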
The OpenShift Virtualization Operator requires backend storage and might automatically activate a storage Operator in the background, according to the following criteria:
- None - If the CPU architecture is ARM64, no storage Operator is activated.
- LVM Storage - For single-node OpenShift clusters on any other CPU architecture that deploy OpenShift Container Platform 4.12 or later.
- Local Storage Operator (LSO) - For all other deployments.
Prerequisites
- Requires CPU virtualization support to be enabled in the firmware on all nodes.
- Requires an additional 360 MiB of memory and 2 CPU cores for each compute (worker) node.
- Requires an additional 150 MiB of memory and 4 CPU cores for each control plane node.
- Requires Red Hat OpenShift Data Foundation (recommended for creating additional on-premise clusters), Logical Volume Manager Storage, or another persistent storage service.
Important: Deploying OpenShift Virtualization without Red Hat OpenShift Data Foundation results in the following scenarios:
- Multi-node cluster: No storage is configured. You must configure storage after the installation process.
- Single-node OpenShift: Logical Volume Manager Storage (LVM Storage) is installed.
You must review the prerequisites to ensure that your environment has sufficient additional resources for OpenShift Virtualization.
- OpenShift Virtualization is not supported on the following platforms: Nutanix, vSphere.
- OpenShift Virtualization is not compatible with the following CPU architectures: s390x, ppc64le.
- OpenShift Virtualization is supported on OpenShift Container Platform 4.14 and later.
3.1.2. Migration Toolkit for Virtualization Operator
The Migration Toolkit for Virtualization Operator allows you to migrate virtual machines at scale to a local or remote Red Hat OpenShift Virtualization cluster. You can perform the migration from any of the following source providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- Red Hat OpenShift Virtualization
- OpenStack
When you select the Migration Toolkit for Virtualization Operator, the Assisted Installer automatically activates the OpenShift Virtualization Operator. For a single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator.
You can install the Migration Toolkit for Virtualization Operator on OpenShift Container Platform using the Assisted Installer, either independently or as part of the OpenShift Virtualization Operator bundle.
Prerequisites
- Requires OpenShift Container Platform version 4.14 or later.
- Requires an x86_64 CPU architecture.
- Requires an additional 1024 MiB of memory and 1 CPU core for each control plane node and worker node.
- Requires the additional resources specified for the OpenShift Virtualization Operator, which is installed together with the Migration Toolkit for Virtualization. For details, see the prerequisites in the OpenShift Virtualization Operator section.
Post-installation steps
After completing the installation, the Migration menu appears in the navigation pane of the Red Hat OpenShift web console.
The Migration menu provides access to the Migration Toolkit for Virtualization. Use the toolkit to create and execute a migration plan with the relevant source and destination providers.
For details, see either of the following chapters in the Migration Toolkit for Virtualization Guide:
Additional resources
3.1.3. Multicluster engine for Kubernetes Operator
You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment:
- Provision and manage additional Kubernetes clusters from your initial cluster.
- Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes.
- Use GitOps Zero Touch Provisioning to manage remote edge sites at scale.
You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters.
Prerequisites
- Requires an additional 16384 MiB of memory and 4 CPU cores for each compute (worker) node.
- Requires an additional 16384 MiB of memory and 4 CPU cores for each control plane node.
- Requires OpenShift Data Foundation (recommended for creating additional on-premise clusters), LVM Storage, or another persistent storage service.
Important: Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios:
- Multi-node cluster: No storage is configured. You must configure storage after the installation process.
- Single-node OpenShift: LVM Storage is installed.
You must review the prerequisites to ensure that your environment has enough additional resources for the multicluster engine.
3.1.4. Logical Volume Manager Storage Operator
You can use LVM Storage to dynamically provision block storage on a cluster with limited resources.
Prerequisites
- Requires at least 1 non-boot drive per host.
- Requires 100 MiB of additional RAM.
- Requires 1 additional CPU core for each non-boot drive.
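As an illustration of what dynamic provisioning with LVM Storage looks like in practice, the following persistent volume claim requests a raw block volume from an LVMS storage class. The storage class name lvms-vg1 and the requested size are assumptions; use the storage class that LVM Storage creates in your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-claim           # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block               # request a raw block device
  resources:
    requests:
      storage: 10Gi               # example size
  storageClassName: lvms-vg1      # assumed LVM Storage storage class name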
3.1.5. Red Hat OpenShift Data Foundation Operator
You can use OpenShift Data Foundation for file, block, and object storage. This storage option is recommended for all OpenShift Container Platform clusters. OpenShift Data Foundation requires a separate subscription.
Prerequisites
- There are at least 3 compute (worker) nodes, each with 19 additional GiB of memory and 8 additional CPU cores.
- There are at least 2 drives per compute node. Each drive requires an additional 5 GB of RAM.
- You comply with the additional requirements specified here: Planning your deployment.
You cannot install the OpenShift Data Foundation Operator on Oracle third-party platforms such as Oracle® Cloud Infrastructure or Oracle® Compute Cloud@Customer.
3.1.6. OpenShift Artificial Intelligence (AI) Operator
Red Hat® OpenShift® Artificial Intelligence (AI) is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments. Red Hat® OpenShift® AI enables the following functionality:
- Data acquisition and preparation.
- Model training and fine-tuning.
- Model serving and model monitoring.
- Hardware acceleration.
The OpenShift AI Operator enables you to install Red Hat® OpenShift® AI on your OpenShift Container Platform cluster. From OpenShift Container Platform version 4.17 and later, you can use the Assisted Installer to deploy the OpenShift AI Operator to your cluster during the installation.
You can install the OpenShift Artificial Intelligence (AI) Operator either separately or as part of the OpenShift AI Operator bundle.
The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
The prerequisites for installing the OpenShift AI Operator separately are as follows:
- You are installing OpenShift Container Platform version 4.17 or later.
- You meet the following minimum requirements for the OpenShift AI Operator:
- Requires at least 2 compute (worker) nodes, each with 32 additional GiB of memory and 8 additional CPU cores.
- Requires at least 1 supported GPU. Both AMD and NVIDIA GPUs are supported.
- You meet the additional minimum requirements specified for the dependent Red Hat OpenShift Data Foundation Operator.
- You meet the additional requirements specified here: Requirements for OpenShift AI.
- See the additional prerequisites for the OpenShift AI Operator bundle, if you are installing the Operator as part of the bundle.
You cannot install the OpenShift AI Operator on Oracle third-party platforms such as Oracle® Cloud Infrastructure or Oracle® Compute Cloud@Customer.
Additional resources
3.1.7. OpenShift sandboxed containers Operator
The OpenShift sandboxed containers Operator provides an additional virtual machine (VM) isolation layer for pods. The Operator manages the installation, configuration, and updating of the sandboxed containers runtime (Kata Containers) on Red Hat OpenShift clusters. You can install the sandboxed containers runtime in a Red Hat OpenShift cluster by using the Assisted Installer.
The integration of the OpenShift sandboxed containers Operator into the Assisted Installer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Prerequisites
The required functionality is provided by two main components:
- OpenShift Container Platform: Use OpenShift Container Platform version 4.17 or later to install OpenShift sandboxed containers on a Red Hat OpenShift cluster using the Assisted Installer. To learn more about the requirements for OpenShift sandboxed containers, see "Additional resources".
- Kata runtime: This includes Red Hat Enterprise Linux CoreOS (RHCOS) and updates with every OpenShift Container Platform release. The Operator depends on the features that come with the RHCOS host and the environment it runs in.
Note: You must install Red Hat Enterprise Linux CoreOS (RHCOS) on the worker nodes. RHEL nodes are not supported.
Additional resources
3.1.8. Kubernetes NMState Operator
NMState is a declarative NetworkManager API designed for configuring network settings using YAML or JSON-based instructions. The Kubernetes NMState Operator allows you to configure network interface types, DNS, and routing on the cluster nodes using NMState.
You can install the Kubernetes NMState Operator on OpenShift Container Platform using the Assisted Installer, either separately or as part of the OpenShift Virtualization Operator bundle. Installing the Kubernetes NMState Operator with the Assisted Installer automatically creates a kubernetes-nmstate instance, which deploys the NMState State Controller as a daemon set across all of the cluster nodes. The daemons on the cluster nodes periodically report the state of each node's network interfaces to the API server.
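As a sketch of the declarative NMState format, the following NodeNetworkConfigurationPolicy attaches a secondary NIC to a Linux bridge on all matching nodes. The interface names are assumptions for illustration only.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy           # placeholder name
spec:
  desiredState:
    interfaces:
      - name: br1                 # bridge to create
        type: linux-bridge
        state: up
        ipv4:
          enabled: true
          dhcp: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1          # assumed secondary NIC name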
Prerequisites
- Supports OpenShift Container Platform 4.12 or later.
- Requires an x86_64 CPU architecture.
- Cannot be installed on the Nutanix and Oracle Cloud Infrastructure platforms.
3.1.9. Fence Agents Remediation Operator
You can use the Fence Agents Remediation Operator to automatically recover unhealthy nodes in environments with a traditional API endpoint. When a node in the OpenShift Container Platform cluster becomes unhealthy or unresponsive, the Fence Agents Remediation Operator uses an external set of fencing agents to isolate it from the rest of the cluster. A fencing agent then resets the unhealthy node in an attempt to resolve transient hardware or software issues. Before or during the reboot process, the Fence Agents Remediation Operator safely moves workloads (pods) running on the unhealthy node to other healthy nodes in the cluster.
You can only install the Fence Agents Remediation Operator as part of the Virtualization Operator bundle.
The integration of the Fence Agents Remediation Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Post-installation steps
- Create the FenceAgentsRemediationTemplate custom resource to define the required fencing agents and remediation parameters. For details, see Configuring the Fence Agents Remediation Operator.
- Configure the NodeHealthCheck custom resource by either replacing the default SelfNodeRemediation provider with FenceAgentsRemediation or by adding FenceAgentsRemediation as an additional remediation provider, as shown in the sketch after this list.
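The following is a minimal sketch of the second step. It assumes a FenceAgentsRemediationTemplate named fence-agents-remediation-template exists in the openshift-workload-availability namespace; all names, thresholds, and durations are illustrative, so verify them against the Node Health Check and Fence Agents Remediation documentation.

apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nhc-far-example                            # placeholder name
spec:
  minHealthy: 51%                                  # remediate only while a majority of nodes is healthy
  remediationTemplate:                             # point remediation at Fence Agents Remediation
    apiVersion: fence-agents-remediation.medik8s.io/v1alpha1
    kind: FenceAgentsRemediationTemplate
    name: fence-agents-remediation-template        # assumed template name
    namespace: openshift-workload-availability     # assumed namespace
  unhealthyConditions:
    - type: Ready
      status: "False"
      duration: 300s                               # how long a node can be NotReady before remediation starts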
Additional resources
3.1.10. Kube Descheduler Operator
The Kube Descheduler Operator is a Kubernetes operator that automates the deployment, configuration, and management of the Kubernetes Descheduler within a cluster. You can use the Kube Descheduler Operator to evict pods (workloads) based on specific strategies, so that the pods can be rescheduled onto more appropriate nodes.
You can benefit from descheduling running pods in situations such as the following:
- Nodes are underutilized or overutilized.
- Pods and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes.
- Node failure requires pods to be moved.
- New nodes are added to clusters.
- Pods have been restarted excessively.
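After installation, the descheduler is configured through a KubeDescheduler custom resource. The following is a minimal sketch; the profiles and interval are examples, and the field names should be checked against the Kube Descheduler Operator documentation for your release.

apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster                                    # the Operator watches this fixed resource name
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600                # run eviction checks every hour
  profiles:
    - AffinityAndTaints                            # evict pods that violate affinity rules or taints
    - LifecycleAndUtilization                      # rebalance under- and over-utilized nodes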
You can only install the Kube Descheduler Operator as part of the Virtualization Operator bundle.
The integration of the Kube Descheduler Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Additional resources
3.1.11. Local Storage Operator
The Local Storage Operator (LSO) enables the provisioning of persistent storage through local volumes. Local persistent volumes provide access to local storage devices, such as drives or partitions, by using the standard persistent volume claim interface.
You can perform the following actions using the Local Storage Operator (LSO):
- Assign the storage devices to the storage classes without modifying the device configuration.
- Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
- Create workloads and PVCs while being aware of the underlying storage topology.
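For illustration, the following LocalVolume custom resource statically provisions a storage class from a specific device on worker nodes. The device path and storage class name are assumptions; adjust them to the drives discovered on your hosts.

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  nodeSelector:                          # limit provisioning to nodes that have the device
    nodeSelectorTerms:
      - matchExpressions:
          - key: node-role.kubernetes.io/worker
            operator: Exists
  storageClassDevices:
    - storageClassName: local-sc         # assumed storage class name
      volumeMode: Filesystem
      fsType: ext4
      devicePaths:
        - /dev/sdb                       # assumed non-boot drive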
Selecting the OpenShift Virtualization Operator, either independently or as part of the Virtualization bundle, automatically activates the Local Storage Operator (LSO) in the background.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Additional resources
3.1.12. Node Health Check Operator
The Node Health Check Operator monitors node conditions based on a defined set of criteria to assess their health status. When detecting an issue, the Operator delegates remediation tasks to the appropriate remediation provider to remediate the unhealthy nodes. The Assisted Installer supports the following remediation providers:
- Self Node Remediation Operator - An internal solution for rebooting unhealthy nodes.
- Fence Agents Remediation Operator - Leverages external management capabilities to forcefully isolate and reboot nodes.
You can only install the Node Health Check Operator as part of the Virtualization Operator bundle.
The integration of the Node Health Check Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Additional resources
3.1.13. Node Maintenance Operator
The Node Maintenance Operator facilitates planned maintenance by placing nodes into maintenance mode.
The Node Maintenance Operator watches for new or deleted NodeMaintenance custom resources (CRs). When it detects a new NodeMaintenance CR, it prevents new workloads from being scheduled on that node and cordons off the node from the rest of the cluster. The Operator then evicts all pods that can be evicted from the node. When the administrator deletes the NodeMaintenance CR associated with the node, maintenance ends and the Operator makes the node available for new workloads.
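A minimal sketch of a NodeMaintenance custom resource follows; the node name and reason are placeholders.

apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-worker-1             # placeholder name
spec:
  nodeName: worker-1                     # node to cordon and drain
  reason: "Planned kernel upgrade"       # free-form reason recorded for the maintenance window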
You can only install the Node Maintenance Operator as part of the Virtualization Operator bundle.
The integration of the Node Maintenance Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Additional resources
3.1.14. Self Node Remediation Operator
The Self Node Remediation Operator automatically reboots unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur.
You can use the Self Node Remediation Operator as a remediation provider for the Node Health Check Operator. Currently, you can only install the Self Node Remediation Operator as a standalone Operator through the API.
The integration of the Self Node Remediation Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Procedure
Additional resources
3.1.15. AMD GPU Operator
The Advanced Micro Devices (AMD) Graphics Processing Unit (GPU) Operator simplifies the deployment and management of AMD Instinct™ GPUs within a Red Hat OpenShift Container Platform cluster. The hardware acceleration capabilities of the Operator automate several key tasks, making it easier to create artificial intelligence and machine learning (AI/ML) applications. Accelerating specific areas of GPU functions can minimize CPU processing and memory usage, improving overall application speed, memory consumption, and bandwidth restrictions.
You can install the AMD GPU Operator separately or as part of the OpenShift AI Operator bundle. Selecting the AMD GPU Operator automatically activates the Kernel Module Management Operator.
The integration of the AMD GPU Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- Requires at least 1 supported AMD GPU.
- See the additional prerequisites for the OpenShift AI Operator bundle if you are installing the Operator as part of the bundle.
Procedure
Additional resources
3.1.16. Authorino Operator
The Authorino Operator provides an easy way to install Authorino and offers configuration options at installation time.
Authorino is a Kubernetes-native, external authorization service designed to secure APIs and applications. It intercepts requests to services and determines whether to allow or deny access based on configured authentication and authorization policies. Authorino provides a centralized and declarative way to manage access control for your Kubernetes-based applications without requiring code changes.
You can only install the Authorino Operator as part of the OpenShift AI Operator bundle.
The integration of the Authorino Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
Procedure
Additional resources
3.1.17. Kernel Module Management Operator
The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters.
KMM adds a new Module CRD which describes an out-of-tree kernel module and its associated device plugin. You can use Module resources to configure how to load the module, define ModuleLoader images for kernel versions, and include instructions for building and signing modules for specific kernel versions.
KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime.
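A sketch of a Module resource follows. It assumes a prebuilt kernel module image is available for every kernel version matched by the regular expression; the module name, namespace, and image reference are placeholders.

apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: example-kmod                     # placeholder name
  namespace: openshift-kmm
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: example_kmod         # kernel module loaded with modprobe
      kernelMappings:
        - regexp: '^.+$'                 # match every kernel version
          containerImage: quay.io/example/example-kmod:${KERNEL_FULL_VERSION}   # placeholder image
  selector:
    node-role.kubernetes.io/worker: ""   # load the module on worker nodes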
You can install the Kernel Module Management Operator either independently or as part of the OpenShift AI Operator bundle.
The integration of the Kernel Module Management Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- If you are installing the Operator as part of the OpenShift AI Operator bundle, see the bundle prerequisites.
- If you are installing the Operator separately, there are no additional prerequisites.
Procedure
Additional resources
3.1.18. Node Feature Discovery Operator
The Node Feature Discovery (NFD) Operator automates the deployment and management of the Node Feature Discovery (NFD) add-on. The Node Feature Discovery add-on detects the configurations and hardware features of each node in an OpenShift Container Platform cluster. The add-on labels each node with hardware-specific information such as vendor, kernel configuration, or operating system version, making the cluster aware of the underlying hardware and software capabilities of the nodes.
By controlling the life cycle of NFD, the Node Feature Discovery Operator enables administrators to easily gather information about the nodes for use in scheduling, resource management, and more.
You can install the Node Feature Discovery Operator separately or as part of the OpenShift AI Operator bundle.
The integration of the Node Feature Discovery Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- If you are installing the Operator as part of the OpenShift AI Operator bundle, see the bundle prerequisites.
- If you are installing the Operator separately, there are no additional prerequisites.
Procedure
3.1.19. NVIDIA GPU Operator
The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision Graphics Processing Units (GPUs).
Some of these software components are as follows:
- NVIDIA drivers to enable Compute Unified Device Architecture (CUDA).
- The Kubernetes device plugin for GPUs.
- The NVIDIA Container Toolkit.
- Automatic node labeling using GPU Feature Discovery (GFD).
- GPU monitoring through the Data Center GPU Manager (DCGM).
In OpenShift Container Platform, the Operator provides a consistent, automated, and cloud-native way to leverage the power of NVIDIA GPUs for artificial intelligence, machine learning, high-performance computing, and other GPU-accelerated workloads.
You can install the NVIDIA GPU Operator either separately or as part of the OpenShift AI Operator bundle. Selecting the NVIDIA GPU Operator automatically activates the Node Feature Discovery Operator.
The integration of the NVIDIA GPU Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- Requires at least 1 supported NVIDIA GPU.
- See the additional prerequisites for the OpenShift AI Operator bundle if you are installing the Operator as part of the bundle.
Procedure
3.1.20. OpenShift Pipelines Operator
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces various standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
The Red Hat OpenShift Pipelines Operator handles the installation and management of OpenShift Pipelines. The Operator supports the following use cases:
- Continuous Integration (CI) - Automating code compilation, testing, and static analysis.
- Continuous Delivery/Deployment (CD) - Automating the deployment of applications to various environments (development, staging, production).
- Microservices Development - Supporting decentralized teams working on microservice-based architectures.
- Building Container Images - Efficiently building and pushing container images to registries.
- Orchestrating Complex Workflows - Defining multi-step processes for building, testing, and deploying applications across different platforms.
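To give a sense of the Tekton building blocks that OpenShift Pipelines manages, the following minimal Task runs a single scripted step; the task name and image are placeholders.

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello-task                       # placeholder name
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example step image
      script: |
        #!/usr/bin/env bash
        echo "Hello from OpenShift Pipelines"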
You can only install the OpenShift Pipelines Operator as part of the OpenShift AI Operator bundle.
The integration of the OpenShift Pipelines Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
Procedure
3.1.21. OpenShift Serverless Operator
The Red Hat OpenShift Serverless Operator enables you to install and use the following components on your OpenShift Container Platform cluster:
- Knative Serving - Deploys and automatically scales stateless, containerized applications according to demand. It simplifies code deployment, and handles web requests and background processes.
- Knative Eventing - Provides the building blocks for an event-driven architecture on Kubernetes. It enables loose coupling between services by allowing them to communicate asynchronously through events, rather than through direct calls.
- Knative Broker for Apache Kafka - This is a specific implementation of a Knative Broker. It provides a robust, scalable, and high-performance mechanism for routing events within Knative Eventing, in environments where Apache Kafka is the preferred message broker.
The OpenShift Serverless Operator manages Knative custom resource definitions (CRDs) for your cluster and enables you to configure them without directly modifying individual config maps for each component.
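For illustration, the following Knative Service deploys a stateless container that Knative Serving scales automatically with demand, including down to zero when idle; the application image is a placeholder.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-serverless                 # placeholder name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest   # placeholder application image
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "OpenShift Serverless"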
You can only install the OpenShift Serverless Operator as part of the OpenShift AI Operator bundle.
The integration of the OpenShift Serverless Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
Procedure
Additional resources
3.1.22. OpenShift Service Mesh Operator
Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code.
Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services.
Service Mesh provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.
Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Service mesh is based on the open source Istio project.
You can only install the OpenShift Service Mesh Operator as part of the OpenShift AI Operator bundle.
The integration of the OpenShift Service Mesh Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
- See Preparing to install service mesh in the OpenShift Container Platform documentation.
Procedure
Additional resources
3.1.23. Cluster Observability Operator
The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system.
The Cluster Observability Operator deploys the following monitoring components:
- Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write.
- Thanos Querier (optional) - Enables querying of Prometheus instances from a central location.
- Alertmanager (optional) - Provides alert configuration capabilities for different services.
- UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting.
- Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project.
You can install the Cluster Observability Operator separately through the Assisted Installer API or as part of the Virtualization bundle in the Assisted Installer web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
The integration of the Cluster Observability Operator (COO) into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.24. MetalLB Operator
You can install the MetalLB Operator to enable the use of LoadBalancer services in environments that do not have a built-in cloud load balancer, such as bare-metal clusters.
When you create a LoadBalancer service, MetalLB assigns an external IP address from a predefined pool. MetalLB advertises this IP address on the host network, making the service reachable from outside the cluster. When external traffic enters your OpenShift Container Platform cluster through the MetalLB LoadBalancer service, the return traffic to the client has the external IP address of the load balancer as the source IP.
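The sketch below shows the typical pattern: an IPAddressPool defines the addresses MetalLB may assign, and an ordinary Service of type LoadBalancer then receives one of them. The address range, namespace, and selector are assumptions for illustration.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system              # assumed MetalLB namespace
spec:
  addresses:
    - 192.0.2.10-192.0.2.20              # example range on the host network
---
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer                     # MetalLB assigns an external IP from the pool
  selector:
    app: example                         # assumed application label
  ports:
    - port: 80
      targetPort: 8080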
You can install the MetalLB Operator separately through the Assisted Installer API or as part of the Virtualization bundle in the Assisted Installer web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
The integration of the MetalLB Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.25. NUMA Resources Operator
Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone. For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone.
The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads.
You can install the NUMA Resources Operator separately through the Assisted Installer API or as part of the Virtualization bundle in the Assisted Installer web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
The integration of the NUMA Resources Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Post-installation steps
Create the NUMAResourcesOperator custom resource and deploy the NUMA-aware secondary pod scheduler. For details, see Scheduling NUMA-aware workloads in "Additional resources".
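A minimal sketch of that custom resource follows, targeting the worker machine config pool; the API version and labels are assumptions that may vary between releases.

apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator            # placeholder name
spec:
  nodeGroups:
    - machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""   # target the worker pool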
3.1.26. OpenShift API for Data Protection (OADP) Operator
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators.
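Once OADP is configured with a backup storage location, applications are protected with Velero resources such as the following Backup, shown here as a hedged sketch with placeholder names.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-app-backup               # placeholder name
  namespace: openshift-adp               # assumed OADP namespace
spec:
  includedNamespaces:
    - example-app                        # assumed application namespace to back up
  ttl: 720h0m0s                          # retain the backup for 30 days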
You can install the OADP Operator separately through the Assisted Installer API or as part of the Virtualization bundle in the Assisted Installer web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
The integration of the OpenShift API for Data Protection Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.2. Customizing with Operator Bundles
An Operator bundle is a recommended packaging format that combines related Operators to deliver a comprehensive set of capabilities. By selecting a bundle, administrators can extend functionality beyond that of a single Operator.
This approach makes the Assisted Installer more opinionated, offering an optimized platform for each selected bundle. It reduces the adoption barrier and minimizes the expertise required for customers to quickly access essential features. Additionally, it establishes a single, well-tested, and widely recognized path for platform deployments.
Meanwhile, individual Operators remain independent and free of unnecessary dependencies, ensuring a lightweight and flexible solution for small or specialized deployments, such as on single-node OpenShift.
When an administrator specifies an Operator bundle, the Assisted Installer automatically provisions the associated Operators included in the bundle. These Operators are predefined and cannot be deselected, ensuring consistency. Administrators can modify the selection after the installation has completed.
Additional resources
3.2.1. Virtualization Operator bundle
Virtualization lets you create multiple simulated environments or resources from a single, physical hardware system. The Virtualization Operator bundle provides a recommended and proven path for virtualization platform deployments, minimizing obstacles. The solution supports the addition of nodes and Day-2 administrative operations.
The Virtualization Operator bundle prompts the Assisted Installer to install the following Operators together:
- Fence Agents Remediation Operator - Externally fences failed nodes using power controllers.
- Kube Descheduler Operator - Evicts pods to reschedule them on more suitable nodes.
- Local Storage Operator - Allows provisioning of persistent storage by using local volumes.
- Migration Toolkit for Virtualization Operator - Enables you to migrate virtual machines from VMware vSphere, Red Hat Virtualization, or OpenStack to OpenShift Virtualization running on Red Hat OpenShift Container Platform.
- Kubernetes NMState Operator - Enables you to configure various network interface types, DNS, and routing on cluster nodes.
- Node Health Check Operator - Identifies unhealthy nodes.
- Node Maintenance Operator - Places nodes into maintenance mode.
- OpenShift Virtualization Operator - Runs virtual machines alongside containers on one platform.
- Cluster Observability Operator - Provides observability and monitoring capabilities for your OpenShift cluster.
- MetalLB Operator - Provides load balancer services for bare metal OpenShift clusters.
- NUMA Resources Operator - Provides NUMA-aware scheduling to improve workload performance on NUMA systems.
- OpenShift API for Data Protection (OADP) Operator - Enables the backup and restore of OpenShift Container Platform cluster resources and persistent volumes.
The Virtualization Operator bundle is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- You are installing OpenShift Container Platform version 4.14 or later.
- CPU virtualization support (Intel VT or AMD-V) is enabled in the BIOS on all nodes.
- Each control plane (master) node has an additional 1024 MiB of memory and 3 CPU cores.
- Each compute (worker) node has an additional 1024 MiB of memory and 5 CPU cores.
- You have included the additional resources required to support the selected storage Operator.
- You are installing a cluster of three or more nodes. The Virtualization Operator bundle is not available on single-node OpenShift.
Procedure
Additional resources
3.2.2. OpenShift AI Operator bundle
The OpenShift AI Operator bundle enables the training, serving, monitoring, and management of Artificial Intelligence (AI) and Machine Learning (ML) models and applications. It simplifies the deployment of AI and ML components on your OpenShift cluster.
The OpenShift AI Operator bundle prompts the Assisted Installer to install the following Operators together:
- AMD GPU Operator - Automates the management of AMD software components needed to provision and monitor Graphics Processing Units (GPUs).
- Authorino Operator - Provides a lightweight external authorization service for tailor-made Zero Trust API security.
- Kernel Module Management Operator - Manages kernel modules and associated device plugins.
- Local Storage Operator - Allows provisioning of persistent storage by using local volumes.
- Node Feature Discovery Operator - Manages the detection of hardware features and configuration by labeling nodes with hardware-specific information.
- NVIDIA GPU Operator - Automates the management of NVIDIA software components needed to provision and monitor GPUs.
- OpenShift AI Operator - Trains, serves, monitors and manages AI/ML models and applications.
- Red Hat OpenShift Data Foundation Operator - Provides persistent software-defined storage for hybrid applications.
- OpenShift Pipelines Operator - Provides a cloud-native continuous integration and delivery (CI/CD) solution for building pipelines using Tekton.
- OpenShift Serverless Operator - Deploys workflow applications based on the CNCF (Cloud Native Computing Foundation) Serverless Workflow specification.
- OpenShift Service Mesh Operator - Provides behavioral insight and operational control over a service mesh.
Prerequisites
- The installation of the NVIDIA GPU, AMD GPU, and Kernel Module Management Operators depends on the Graphics Processing Unit (GPU) detected on your hosts following host discovery.
Procedure
Additional resources