Chapter 3. Customizing your environment using Operators and Operator Bundles
You can customize the OpenShift Container Platform deployment by selecting one or more Operators or Operator bundles during the installation.
3.1. Customizing with Operators
Operators are used to package, deploy, and manage services and applications in OpenShift Container Platform. Through the Assisted Installer, you can install the following Operators, categorized according to their functionality.
Before starting the installation, familiarize yourself with the Assisted Installer Operators, including their prerequisites and limitations.
The additional requirements apply to each Operator individually. If you select more than one Operator, or if the Assisted Installer automatically selects an Operator due to dependencies, the total resource requirement is the sum of the requirements for each Operator.
If you require advanced options, install the Operators after you have installed the cluster.
3.1.1. Storage Operators
The Storage category contains the following Operators:
- Local Storage Operator
- Logical Volume Manager Storage Operator
- OpenShift Data Foundation Operator
- OpenShift API for Data Protection (OADP) Operator
3.1.1.1. Installing the Local Storage Operator
The Local Storage Operator (LSO) enables the provisioning of persistent storage through local volumes. Local persistent volumes provide access to local storage devices, such as drives or partitions, by using the standard persistent volume claim interface.
You can perform the following actions using the Local Storage Operator (LSO):
- Assign the storage devices to the storage classes without modifying the device configuration.
- Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
- Create workloads and PVCs while being aware of the underlying storage topology.
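As an illustration of the static provisioning described above, a LocalVolume CR might look like the following sketch. The storage class name and device path are assumptions; adjust them to match the non-boot drives on your hosts:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks                  # illustrative name
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: local-sc     # storage class to create (assumed name)
      volumeMode: Filesystem
      fsType: xfs
      devicePaths:
        - /dev/sdb                   # example non-boot drive; adjust per host
```

Applying this CR causes the LSO to create the storage class and a persistent volume for each matching device.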
Selecting the OpenShift Virtualization Operator, either independently or as part of the Virtualization bundle, automatically activates the Local Storage Operator (LSO) in the background.
While installing the Local Storage Operator (LSO) through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.1.2. Installing the Logical Volume Manager Storage Operator
You can use LVM Storage to dynamically provision block storage on a cluster with limited resources.
While installing the Logical Volume Manager Storage Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- Requires at least 1 non-boot drive per host.
- Requires 100 MiB of additional RAM.
- Requires 1 additional CPU core for each non-boot drive.
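After the cluster installs, LVM Storage is typically configured through an LVMCluster CR that groups local devices into a volume group. The following is a minimal sketch; the names, namespace, and thin-pool sizing are illustrative assumptions:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster              # illustrative name
  namespace: openshift-storage     # namespace may vary by version
spec:
  storage:
    deviceClasses:
      - name: vg1                  # volume group built from available non-boot drives
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90          # portion of the volume group used for the thin pool
          overprovisionRatio: 10
```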
3.1.1.3. Installing the Red Hat OpenShift Data Foundation Operator
You can use OpenShift Data Foundation for file, block, and object storage. This storage option is recommended for all OpenShift Container Platform clusters. OpenShift Data Foundation requires a separate subscription.
While installing the Red Hat OpenShift Data Foundation Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- There are at least 3 compute (worker) nodes, each with 19 additional GiB of memory and 8 additional CPU cores.
- There are at least 2 drives per compute node. For each drive, there is an additional 5 GB of RAM.
- You comply with the additional requirements specified here: Planning your deployment.
You cannot install the OpenShift Data Foundation Operator on Oracle third-party platforms such as Oracle® Cloud Infrastructure or Oracle® Compute Cloud@Customer.
3.1.1.4. Installing the OpenShift API for Data Protection (OADP) Operator
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators.
Using the Assisted Installer, you can install the OADP Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.2. Virtualization Operators
The Virtualization category contains the following Operators:
- OpenShift Virtualization Operator
- Migration Toolkit for Virtualization Operator
- OpenShift sandboxed containers Operator
3.1.2.1. Installing the OpenShift Virtualization Operator
You can deploy OpenShift Virtualization to perform the following tasks:
- Create and manage Linux and Windows virtual machines (VMs).
- Run pod and VM workloads alongside each other in a cluster.
- Connect to VMs through a variety of consoles and CLI tools.
- Import and clone existing VMs.
- Manage network interface controllers and storage drives attached to VMs.
- Live migrate VMs between nodes.
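Once OpenShift Virtualization is installed, VMs are declared through the KubeVirt VirtualMachine CR. The following is a minimal sketch; the VM name, memory request, and container disk image are illustrative assumptions:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                 # illustrative name
spec:
  runStrategy: Always              # keep the VM running
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example image
```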
The OpenShift Virtualization Operator requires backend storage and might automatically activate a storage Operator in the background, according to the following criteria:
- None - If the CPU architecture is ARM64, no storage Operator is activated.
- LVM Storage - For single-node OpenShift clusters on any other CPU architecture deploying OpenShift Container Platform 4.12 or higher.
- Local Storage Operator (LSO) - For all other deployments.
Prerequisites
- Requires CPU virtualization support to be enabled in the firmware on all nodes.
- Requires an additional 360 MiB of memory and 2 CPU cores for each compute (worker) node.
- Requires an additional 150 MiB of memory and 4 CPU cores for each control plane node.
- Requires Red Hat OpenShift Data Foundation (recommended for creating additional on-premise clusters), Logical Volume Manager Storage, or another persistent storage service.
Important: Deploying OpenShift Virtualization without Red Hat OpenShift Data Foundation results in the following scenarios:
- Multi-node cluster: No storage is configured. You must configure storage after the installation.
- Single-node OpenShift: Logical Volume Manager Storage (LVM Storage) is installed.
You must review the prerequisites to ensure that your environment has sufficient additional resources for OpenShift Virtualization.
- OpenShift Virtualization is not supported on the following platforms: Nutanix, vSphere.
- OpenShift Virtualization is not compatible with the following CPU architectures: S390X, PPC64LE.
- OpenShift Virtualization is supported on OpenShift Container Platform 4.14 and later.
3.1.2.2. Installing the Migration Toolkit for Virtualization Operator
The Migration Toolkit for Virtualization Operator allows you to migrate virtual machines at scale to a local or remote Red Hat OpenShift Virtualization cluster. You can perform the migration from any of the following source providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- Red Hat OpenShift Virtualization
- OpenStack
When you select the Migration Toolkit for Virtualization Operator, the Assisted Installer automatically activates the OpenShift Virtualization Operator. For a single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator.
You can install the Migration Toolkit for Virtualization Operator on OpenShift Container Platform using the Assisted Installer, either independently or as part of the OpenShift Virtualization Operator bundle.
Prerequisites
- Requires OpenShift Container Platform version 4.14 or later.
- Requires an x86_64 CPU architecture.
- Requires an additional 1024 MiB of memory and 1 CPU core for each control plane node and worker node.
- Requires the additional resources specified for the OpenShift Virtualization Operator, installed together with OpenShift Virtualization. For details, see the prerequisites in the OpenShift Virtualization Operator section.
Next steps
After completing the installation, the Migration menu appears in the navigation pane of the Red Hat OpenShift web console.
The Migration menu provides access to the Migration Toolkit for Virtualization. Use the toolkit to create and execute a migration plan with the relevant source and destination providers.
For details, see the relevant chapters in the Migration Toolkit for Virtualization Guide.
3.1.2.3. Installing the OpenShift sandboxed containers Operator
The OpenShift sandboxed containers Operator provides an additional virtual machine (VM) isolation layer for pods. It manages the installation, configuration, and updating of the sandboxed containers runtime (Kata containers) on Red Hat OpenShift clusters. You can install the sandboxed containers runtime in a Red Hat OpenShift cluster by using the Assisted Installer.
Prerequisites
The required functionality is provided by two main components:
- OpenShift Container Platform: Use OpenShift Container Platform version 4.17 or later to install OpenShift sandboxed containers on a Red Hat OpenShift cluster using the Assisted Installer. To learn more about the requirements for OpenShift sandboxed containers, see OpenShift sandboxed containers.
- Kata runtime: This includes Red Hat Enterprise Linux CoreOS (RHCOS) and updates with every OpenShift Container Platform release. The Operator depends on the features that come with the RHCOS host and the environment that it runs in.
Note: You must install Red Hat Enterprise Linux CoreOS (RHCOS) on the worker nodes. RHEL nodes are not supported.
3.1.3. Artificial Intelligence (AI) Operators
The AI category contains the following Operators:
- Red Hat® OpenShift® Artificial Intelligence (AI) Operator
- AMD GPU Operator
- NVIDIA GPU Operator
3.1.3.1. Installing the OpenShift Artificial Intelligence (AI) Operator
Red Hat® OpenShift® Artificial Intelligence (AI) is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments. Red Hat® OpenShift® AI enables the following functionality:
- Data acquisition and preparation.
- Model training and fine-tuning.
- Model serving and model monitoring.
- Hardware acceleration.
The OpenShift AI Operator enables you to install Red Hat® OpenShift® AI on your OpenShift Container Platform cluster. From OpenShift Container Platform version 4.17 and later, you can use the Assisted Installer to deploy the OpenShift AI Operator to your cluster during the installation.
You can install the OpenShift Artificial Intelligence (AI) Operator either separately or as part of the OpenShift AI Operator bundle.
The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
The prerequisites for installing the OpenShift AI Operator separately are as follows:
- You are installing OpenShift Container Platform version 4.17 or later.
- You meet the following minimum requirements for the OpenShift AI Operator:
- Requires at least 2 compute (worker) nodes, each with 32 additional GiB of memory and 8 additional CPU cores.
- Requires at least 1 supported GPU. Both AMD and NVIDIA GPUs are supported.
- You meet the additional minimum requirements specified for the dependent Red Hat OpenShift Data Foundation Operator.
- You meet the additional requirements specified here: Requirements for OpenShift AI.
- See the additional prerequisites for the OpenShift AI Operator bundle, if you are installing the Operator as part of the bundle.
You cannot install the OpenShift AI Operator on Oracle third-party platforms such as Oracle® Cloud Infrastructure or Oracle® Compute Cloud@Customer.
3.1.3.2. Installing the AMD GPU Operator
The Advanced Micro Devices (AMD) Graphics Processing Unit (GPU) Operator simplifies the deployment and management of AMD Instinct™ GPUs within a Red Hat OpenShift Container Platform cluster. The hardware acceleration capabilities of the Operator automate several key tasks, making it easier to create artificial intelligence and machine learning (AI/ML) applications. Accelerating specific areas of GPU functions can minimize CPU processing and memory usage, improving overall application speed, memory consumption, and bandwidth restrictions.
In the Assisted Installer, you can install the AMD GPU Operator separately or as part of the OpenShift AI Operator bundle. Selecting the AMD GPU Operator automatically activates the Kernel Module Management Operator.
While installing the AMD GPU Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
The integration of the AMD GPU Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- Requires at least 1 supported AMD GPU.
- See the additional prerequisites for the OpenShift AI Operator bundle if you are installing the Operator as part of the bundle.
3.1.3.3. Installing the NVIDIA GPU Operator
The NVIDIA GPU Operator uses the Operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision graphics processing units (GPUs).
Some of these software components are as follows:
- NVIDIA drivers to enable Compute Unified Device Architecture (CUDA).
- The Kubernetes device plugin for GPUs.
- The NVIDIA Container Toolkit.
- Automatic node labelling using GPU Feature Discovery (GFD).
- GPU monitoring through the Data Center GPU Manager (DCGM).
In OpenShift Container Platform, the Operator provides a consistent, automated, and cloud-native way to leverage the power of NVIDIA GPUs for artificial intelligence, machine learning, high-performance computing, and other GPU-accelerated workloads.
You can install the NVIDIA GPU Operator either separately or as part of the OpenShift AI Operator bundle. Selecting the NVIDIA GPU Operator automatically activates the Node Feature Discovery Operator.
The integration of the NVIDIA GPU Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- Requires at least 1 supported NVIDIA GPU.
- See the additional prerequisites for the OpenShift AI Operator bundle if you are installing the Operator as part of the bundle.
3.1.4. Network Operators
The Network category contains the following Operators:
- Kubernetes NMState Operator
- OpenShift Service Mesh Operator
- MetalLB Operator
3.1.4.1. Installing the Kubernetes NMState Operator
NMState is a declarative NetworkManager API designed for configuring network settings using YAML or JSON-based instructions. The Kubernetes NMState Operator allows you to configure network interface types, DNS, and routing on the cluster nodes using NMState.
You can install the Kubernetes NMState Operator on OpenShift Container Platform using the Assisted Installer, either separately or as part of the OpenShift Virtualization Operator bundle. Installing the Kubernetes NMState Operator with the Assisted Installer automatically creates a kubernetes-nmstate instance, which deploys the NMState State Controller as a daemon set across all of the cluster nodes. The daemons on the cluster nodes periodically report on the state of each node’s network interfaces to the API server.
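After installation, node networking is declared through NodeNetworkConfigurationPolicy CRs that the NMState State Controller reconciles. The following is a minimal sketch; the policy name, bridge name, and port interface are illustrative assumptions:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy                 # illustrative name
spec:
  desiredState:
    interfaces:
      - name: br1                  # Linux bridge to create on matching nodes
        type: linux-bridge
        state: up
        ipv4:
          enabled: true
          dhcp: true
        bridge:
          port:
            - name: eth1           # example port; adjust to your NIC
```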
Prerequisites
- Supports OpenShift Container Platform 4.12 or later.
- Requires an x86_64 CPU architecture.
- Cannot be installed on the Nutanix and Oracle Cloud Infrastructure platforms.
3.1.4.2. Installing the OpenShift Service Mesh Operator
Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code.
Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services.
Service Mesh provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.
Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator, which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Service Mesh is based on the open source Istio project.
Using the Assisted Installer, you can install the OpenShift Service Mesh Operator either separately through the API or as part of the OpenShift AI Operator bundle in the web console.
The integration of the OpenShift Service Mesh Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
- See Preparing to install service mesh in the OpenShift Container Platform documentation.
3.1.4.3. Installing the MetalLB Operator
You can install the MetalLB Operator to enable the use of LoadBalancer services in environments that do not have a built-in cloud load balancer, such as bare-metal clusters.
When you create a LoadBalancer service, MetalLB assigns an external IP address from a predefined pool. MetalLB advertises this IP address on the host network, making the service reachable from outside the cluster. When external traffic enters your OpenShift Container Platform cluster through the MetalLB LoadBalancer service, the return traffic to the client has the external IP address of the load balancer as the source IP.
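The predefined pool and its advertisement are configured after installation through MetalLB CRs. The following is a minimal layer-2 sketch; the resource names and address range are illustrative assumptions:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool               # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.100-192.168.10.120   # example range on the host network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool                 # advertise addresses from the pool above
```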
Using the Assisted Installer, you can install the MetalLB Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.5. Remediation Operators
The Remediation category contains the following Operators:
- Node Health Check Operator
- Fence Agents Remediation Operator
- Self Node Remediation Operator
3.1.5.1. Installing the Node Health Check Operator
The Node Health Check Operator monitors node conditions based on a defined set of criteria to assess their health status. When detecting an issue, the Operator delegates remediation tasks to the appropriate remediation provider to remediate the unhealthy nodes. The Assisted Installer supports the following remediation providers:
- Self Node Remediation Operator - An internal solution for rebooting unhealthy nodes.
- Fence Agents Remediation Operator - Leverages external management capabilities to forcefully isolate and reboot nodes.
Using the Assisted Installer, you can install the Node Health Check Operator either separately through the API or as part of the Virtualization Operator bundle in the web console.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
3.1.5.2. Installing the Fence Agents Remediation Operator
You can use the Fence Agents Remediation Operator to automatically recover unhealthy nodes in environments with a traditional API endpoint. When a node in the OpenShift Container Platform cluster becomes unhealthy or unresponsive, the Fence Agents Remediation Operator uses an external set of fencing agents to isolate it from the rest of the cluster. A fencing agent then resets the unhealthy node in an attempt to resolve transient hardware or software issues. Before or during the reboot process, the Fence Agents Remediation Operator safely moves workloads (pods) running on the unhealthy node to other healthy nodes in the cluster.
Using the Assisted Installer, you can install the Fence Agents Remediation Operator either separately through the API or as part of the Virtualization Operator bundle in the web console.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Next steps
- Create the FenceAgentsRemediationTemplate custom resource to define the required fencing agents and remediation parameters. For details, see Configuring the Fence Agents Remediation Operator.
- Configure the NodeHealthCheck custom resource by either replacing the default SelfNodeRemediation provider with FenceAgentsRemediation or by adding FenceAgentsRemediation as an additional remediation provider.
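As a sketch of the second step, a NodeHealthCheck CR that points at a FenceAgentsRemediationTemplate might look like the following; the resource names, namespace, and selector are illustrative assumptions:

```yaml
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nhc-far-example            # illustrative name
spec:
  remediationTemplate:             # use Fence Agents Remediation as the provider
    apiVersion: fence-agents-remediation.medik8s.io/v1alpha1
    kind: FenceAgentsRemediationTemplate
    name: far-template-example     # assumed template name
    namespace: openshift-workload-availability   # assumed namespace
  selector:                        # monitor worker nodes only
    matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: Exists
```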
3.1.5.3. Installing the Self Node Remediation Operator
The Self Node Remediation Operator automatically reboots unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur.
You can use the Self Node Remediation Operator as a remediation provider for the Node Health Check Operator. Currently, it is only possible to install the Self Node Remediation Operator through the API.
The integration of the Self Node Remediation Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
3.1.6. Security and Access Control Operators
The Security & Access Control category contains the following Operators:
- Authorino Operator
- Kernel Module Management Operator
3.1.6.1. Installing the Authorino Operator
The Authorino Operator provides an easy way to install Authorino, with configuration options available at the time of installation.
Authorino is a Kubernetes-native, external authorization service designed to secure APIs and applications. It intercepts requests to services and determines whether to allow or deny access based on configured authentication and authorization policies. Authorino provides a centralized and declarative way to manage access control for your Kubernetes-based applications without requiring code changes.
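These policies are expressed as AuthConfig CRs. The following API-key sketch is illustrative only; the hostname, label selector, and policy name are assumptions, and the exact schema varies by Authorino version:

```yaml
apiVersion: authorino.kuadrant.io/v1beta2
kind: AuthConfig
metadata:
  name: example-authconfig         # illustrative name
spec:
  hosts:
    - api.example.com              # protected hostname (assumed)
  authentication:
    "api-key-users":               # allow requests carrying a matching API key
      apiKey:
        selector:
          matchLabels:
            group: api-consumers   # assumed label on the API-key Secrets
```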
Using the Assisted Installer, you can install the Authorino Operator either separately through the API or as part of the OpenShift AI Operator bundle in the web console.
The integration of the Authorino Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
3.1.6.2. Installing the Kernel Module Management Operator
The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters.
KMM adds a new Module CRD which describes an out-of-tree kernel module and its associated device plugin. You can use Module resources to configure how to load the module, define ModuleLoader images for kernel versions, and include instructions for building and signing modules for specific kernel versions.
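A Module resource of the kind described above might look like the following sketch; the module name, container image, and selector are illustrative assumptions:

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: example-module             # illustrative name
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: example_kmod   # out-of-tree kernel module to load (assumed)
      kernelMappings:              # map kernel versions to ModuleLoader images
        - regexp: '^.+$'           # match any kernel version
          containerImage: quay.io/example/example-kmod:${KERNEL_FULL_VERSION}
  selector:                        # load the module on worker nodes
    node-role.kubernetes.io/worker: ""
```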
KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime.
You can install the Kernel Module Management Operator either independently or as part of the OpenShift AI Operator bundle.
The integration of the Kernel Module Management Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- If you are installing the Operator as part of the OpenShift AI Operator bundle, see the bundle prerequisites.
- If you are installing the Operator separately, there are no additional prerequisites.
3.1.7. CI/CD and Development Productivity Operators
The CI/CD & Dev Productivity category contains the following Operators:
- OpenShift Pipelines Operator
- OpenShift Serverless Operator
3.1.7.1. Installing the OpenShift Pipelines Operator
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces various standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
The Red Hat OpenShift Pipelines Operator handles the installation and management of OpenShift Pipelines. The Operator supports the following use cases:
- Continuous Integration (CI) - Automating code compilation, testing, and static analysis.
- Continuous Delivery/Deployment (CD) - Automating the deployment of applications to various environments (development, staging, production).
- Microservices Development - Supporting decentralized teams working on microservice-based architectures.
- Building Container Images - Efficiently building and pushing container images to registries.
- Orchestrating Complex Workflows - Defining multi-step processes for building, testing, and deploying applications across different platforms.
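The Tekton building blocks behind these use cases are CRDs such as Task and Pipeline. The following is a minimal sketch; the names and image are illustrative assumptions:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: echo-hello                 # illustrative name
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
      script: |
        echo "Hello from OpenShift Pipelines"
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: hello-pipeline             # illustrative name
spec:
  tasks:
    - name: say-hello
      taskRef:
        name: echo-hello           # reference the Task above
```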
Using the Assisted Installer, you can install the OpenShift Pipelines Operator either separately through the API or as part of the OpenShift AI Operator bundle in the web console.
The integration of the OpenShift Pipelines Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
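To illustrate the Tekton building blocks described above, the following is a minimal sketch of a Task and a Pipeline that runs it. The resource names, container image, and script are hypothetical examples and are not part of the product documentation:

```yaml
# Minimal Tekton Task: a single step that runs a shell script in a container.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: echo-hello                # hypothetical name
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
      script: |
        echo "Hello from Tekton"
---
# Minimal Pipeline that references the Task above.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: hello-pipeline            # hypothetical name
spec:
  tasks:
    - name: say-hello
      taskRef:
        name: echo-hello
```

You would typically execute the Pipeline by creating a PipelineRun resource or by starting it from the Pipelines view in the web console.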
3.1.7.2. Installing the OpenShift Serverless Operator
The Red Hat OpenShift Serverless Operator enables you to install and use the following components on your OpenShift Container Platform cluster:
- Knative Serving - Deploys and automatically scales stateless, containerized applications according to demand. It simplifies code deployment, and handles web requests and background processes.
- Knative Eventing - Provides the building blocks for an event-driven architecture on Kubernetes. It enables loose coupling between services by allowing them to communicate asynchronously through events, rather than through direct calls.
- Knative Broker for Apache Kafka - This is a specific implementation of a Knative Broker. It provides a robust, scalable, and high-performance mechanism for routing events within Knative Eventing, in environments where Apache Kafka is the preferred message broker.
The OpenShift Serverless Operator manages Knative custom resource definitions (CRDs) for your cluster and enables you to configure them without directly modifying individual config maps for each component.
Using the Assisted Installer, you can install the OpenShift Serverless Operator either separately through the API or as part of the OpenShift AI Operator bundle in the web console.
The integration of the OpenShift Serverless Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- See the prerequisites for the OpenShift AI Operator bundle.
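As a brief illustration of Knative Serving, the following sketch deploys a stateless application that scales automatically with demand, including down to zero when idle. The service name, image, and environment variable are hypothetical examples:

```yaml
# Minimal Knative Service; Knative Serving creates the route, revision,
# and autoscaling configuration automatically.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                         # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest # example image
          env:
            - name: TARGET
              value: "World"
```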
3.1.8. Platform Operations and Lifecycle Operators
The Platform Operations and Lifecycle category contains the following Operators:
- Multicluster engine Operator
- Node Maintenance Operator
- Cluster Observability Operator
- OpenShift Loki Operator
- OpenShift Logging Operator
3.1.8.1. Installing the Multicluster engine for Kubernetes Operator
You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment:
- Provision and manage additional Kubernetes clusters from your initial cluster.
- Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes.
- Use GitOps Zero Touch Provisioning to manage remote edge sites at scale.
You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters.
Prerequisites
- Requires an additional 16384 MiB of memory and 4 CPU cores for each compute (worker) node.
- Requires an additional 16384 MiB of memory and 4 CPU cores for each control plane node.
- Requires OpenShift Data Foundation (recommended for creating additional on-premise clusters), LVM Storage, or another persistent storage service.
Important: Deploying the multicluster engine without OpenShift Data Foundation results in the following scenarios:
- Multi-node cluster: No storage is configured. You must configure storage after the installation process.
- Single-node OpenShift: LVM Storage is installed.
You must review the prerequisites to ensure that your environment has enough additional resources for the multicluster engine.
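If you install the multicluster engine Operator outside the Assisted Installer, you typically enable the engine by creating a MultiClusterEngine custom resource. The following is a minimal sketch that accepts the default configuration:

```yaml
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}        # an empty spec applies the default component configuration
```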
3.1.8.2. Installing the Node Maintenance Operator
The Node Maintenance Operator facilitates planned maintenance by placing nodes into maintenance mode.
The Node Maintenance Operator watches for new or deleted NodeMaintenance custom resources (CRs). When it detects a new NodeMaintenance CR, it prevents new workloads from being scheduled on that node, and cordons off the node from the rest of the cluster. The Operator then evicts all pods that can be evicted from the node. When the administrator deletes the NodeMaintenance CR associated with the node, maintenance ends and the Operator makes the node available for new workloads.
Using the Assisted Installer, you can install the Node Maintenance Operator either separately through the API or as part of the Virtualization Operator bundle in the web console.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
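The cordon-and-drain cycle described above is driven entirely by the NodeMaintenance CR. The following is a minimal sketch; the node name and reason are placeholder values:

```yaml
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-worker-1            # hypothetical name
spec:
  nodeName: worker-1.example.com        # node to cordon and drain
  reason: "Replacing a failed disk"     # free-text note recorded with the CR
```

Deleting this CR ends the maintenance window and makes the node schedulable again.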
3.1.8.3. Installing the Cluster Observability Operator
The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system.
The Cluster Observability Operator deploys the following monitoring components:
- Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write.
- Thanos Querier (optional) - Enables querying of Prometheus instances from a central location.
- Alertmanager (optional) - Provides alert configuration capabilities for different services.
- UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting.
- Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project.
Using the Assisted Installer, you can install the Cluster Observability Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. Install the Cluster Observability Operator together with the OpenShift Logging and OpenShift Loki Operators to support Red Hat OpenShift Virtualization Engine deployments. For more information about the use of this Operator in OpenShift Container Platform, see "Additional resources".
While installing the Cluster Observability Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
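The customizable monitoring stacks described above are declared through the MonitoringStack CRD in the monitoring.rhobs API group. The following is a minimal sketch; the name, namespace, and label selector are hypothetical:

```yaml
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: sample-stack        # hypothetical name
  namespace: coo-demo       # hypothetical namespace
spec:
  retention: 1d             # how long the Prometheus instance keeps metrics
  resourceSelector:         # monitor only workloads carrying this label
    matchLabels:
      app: demo
```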
3.1.8.4. Installing the OpenShift Loki Operator
The OpenShift Loki Operator provides a log aggregation system designed to store and query logs from all applications and infrastructure components. The Operator implements this functionality through the LokiStack custom resource (CR).
The LokiStack CR manages Loki, which is a scalable, highly-available, multi-tenant log aggregation system. The resource also includes a web proxy with OpenShift Container Platform authentication, which enforces multi-tenancy and facilitates the saving and indexing of data in Loki log stores. You can configure Loki as the backend to store all collected flows with a maximal level of detail.
With the Assisted Installer, you can install the OpenShift Loki Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. Install the OpenShift Loki Operator together with the Cluster Observability Operator and OpenShift Logging Operators, to support Red Hat OpenShift Virtualization Engine deployments.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
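A minimal LokiStack CR resembles the following sketch. The sizing profile, object storage secret, and storage class are assumptions that you must adapt to your environment:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                      # example sizing profile
  storage:
    schemas:
      - version: v13
        effectiveDate: "2024-10-01"   # example date
    secret:
      name: logging-loki-s3           # hypothetical secret with object storage credentials
      type: s3
  storageClassName: gp3-csi           # hypothetical storage class
  tenants:
    mode: openshift-logging           # enables the multi-tenant OpenShift proxy
```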
3.1.8.5. Installing the OpenShift Logging Operator
The OpenShift Logging Operator installs and manages the entire logging stack for your cluster. It works automatically with the OpenShift Container Platform security settings, ensuring that the system collects and stores logs without compromising multi-tenant isolation.
The Operator deploys collector agents, such as Vector or Fluentd, as pods on every node. These collectors use specific cluster roles to securely gather data from three distinct sources:
- collect-audit-logs: Gathers Kubernetes and OpenShift API logs and security-related events.
- collect-application-logs: Gathers logs from user-created projects and containers.
- collect-infrastructure-logs: Gathers logs from the platform itself, including nodes and control plane services.
Once collected, the Operator manages the pipelines that filter and route this data. You can send your logs to internal storage such as Loki or Elasticsearch, or forward them to an external logging service. Throughout this process, the Operator ensures that all data handling follows the cluster’s built-in security and access control policies.
Using the Assisted Installer, you can install the OpenShift Logging Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. Install the OpenShift Logging Operator together with the Cluster Observability Operator and OpenShift Loki Operators, to support Red Hat OpenShift Virtualization Engine deployments.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
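Each of the three cluster roles above must be bound to the service account that the collector pods use. The following sketch binds one of them; the binding name and service account name are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-application-logs     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-application-logs       # one of the three collection roles
subjects:
  - kind: ServiceAccount
    name: collector                    # hypothetical service account
    namespace: openshift-logging
```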
3.1.9. Scheduling Operators
The Scheduling category contains the following Operators:
- Node Feature Discovery (NFD) Operator
- Kube Descheduler Operator
- NUMA Resources Operator
3.1.9.1. Installing the Node Feature Discovery Operator
The Node Feature Discovery (NFD) Operator automates the deployment and management of the Node Feature Discovery (NFD) add-on. The Node Feature Discovery add-on detects the configurations and hardware features of each node in an OpenShift Container Platform cluster. The add-on labels each node with hardware-specific information such as vendor, kernel configuration, or operating system version, making the cluster aware of the underlying hardware and software capabilities of the nodes.
The Node Feature Discovery (NFD) Operator controls the life-cycle of the NFD add-on, enabling administrators to gather node information for scheduling, resource management, and other purposes.
You can install the Node Feature Discovery Operator separately or as part of the OpenShift AI Operator bundle.
While installing the Node Feature Discovery Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
The integration of the Node Feature Discovery Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- If you are installing the Operator as part of the OpenShift AI Operator bundle, see the bundle prerequisites.
- If you are installing the Operator separately, there are no additional prerequisites.
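Labels applied by the NFD add-on use the feature.node.kubernetes.io/ prefix, so workloads can target hardware capabilities through an ordinary node selector. In the following sketch, the pod name and image are hypothetical, and the AVX2 label is present only on nodes whose CPUs report that feature:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: avx-workload                 # hypothetical name
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"     # label applied by NFD
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal  # example image
      command: ["sleep", "infinity"]
```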
3.1.9.2. Installing the Kube Descheduler Operator
The Kube Descheduler Operator is a Kubernetes Operator that automates the deployment, configuration, and management of the Kubernetes Descheduler within a cluster. You can use the Kube Descheduler Operator to evict pods (workloads) based on specific strategies, so that the pods can be rescheduled onto more appropriate nodes.
You can benefit from descheduling running pods in situations such as the following:
- Nodes are underutilized or overutilized.
- Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes.
- Node failure requires pods to be moved.
- New nodes are added to clusters.
- Pods have been restarted excessively.
Using the Assisted Installer, you can install the Kube Descheduler Operator either separately through the API or as part of the Virtualization Operator bundle in the web console.
While installing the Kube Descheduler Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
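The Operator is configured through a singleton KubeDescheduler CR. The following sketch runs the descheduler hourly in predictive mode, which reports the pods it would evict without actually evicting them; the profile shown is one of several available:

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster                           # the Operator expects this name
  namespace: openshift-kube-descheduler-operator
spec:
  mode: Predictive                        # simulate evictions only; use Automatic to enforce them
  deschedulingIntervalSeconds: 3600       # run once per hour
  profiles:
    - LifecycleAndUtilization             # targets under/overutilized nodes and frequently restarted pods
```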
3.1.9.3. Installing the NUMA Resources Operator
Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone. For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone.
The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads.
Using the Assisted Installer, you can install the NUMA Resources Operator either separately through the API or as part of the Virtualization Operator bundle in the web console. For more information about the use of this Operator in OpenShift Container Platform, see Additional resources.
While installing the NUMA Resources Operator through the OCP OperatorHub, you can manually enable cluster monitoring for the Operator. Because the Assisted Installer does not include this setting, it automatically enables cluster monitoring for the Operator during installation, without the option of disabling it.
Prerequisites
- See the prerequisites for the Virtualization Operator bundle.
Procedure
Next steps
Create the NUMAResourcesOperator custom resource and deploy the NUMA-aware secondary pod scheduler. For details, see Scheduling NUMA-aware workloads (RHOCP).
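A minimal NUMAResourcesOperator CR resembles the following sketch, which targets the default worker machine config pool. Recent releases serve this resource from the nodetopology.openshift.io/v1 API; adjust the selector if you use custom pools:

```yaml
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator
spec:
  nodeGroups:
    - machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""   # default worker pool label
```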
3.2. Customizing with Operator Bundles
An Operator bundle is a recommended packaging format that combines related Operators to deliver a comprehensive set of capabilities. By selecting a bundle, administrators can extend functionality beyond that of a single Operator.
This approach makes the Assisted Installer more opinionated, offering an optimized platform for each selected bundle. It reduces the adoption barrier and minimizes the expertise required for customers to quickly access essential features. Additionally, it establishes a single, well-tested, and widely recognized path for platform deployments.
Meanwhile, individual Operators remain independent and free of unnecessary dependencies, ensuring a lightweight and flexible solution for small or specialized deployments, such as on single-node OpenShift.
When an administrator specifies an Operator bundle, the Assisted Installer automatically provisions the associated Operators included in the bundle. These Operators are predefined and cannot be deselected, ensuring consistency. Administrators can modify the selection after the installation has completed.
3.2.1. Installing the Virtualization Operator bundle
Virtualization lets you create multiple simulated environments or resources from a single, physical hardware system. The Virtualization Operator bundle provides a recommended and proven path for virtualization platform deployments, minimizing obstacles. The solution supports the addition of nodes and Day-2 administrative operations.
The Virtualization Operator bundle prompts the Assisted Installer to install the following Operators together:
- Fence Agents Remediation Operator - Externally fences failed nodes using power controllers.
- Kube Descheduler Operator - Evicts pods to reschedule them on more suitable nodes.
- Local Storage Operator - Allows provisioning of persistent storage by using local volumes.
- Migration Toolkit for Virtualization Operator - Enables you to migrate virtual machines from VMware vSphere, Red Hat Virtualization, or OpenStack to OpenShift Virtualization running on Red Hat OpenShift Container Platform.
- Kubernetes NMState Operator - Enables you to configure various network interface types, DNS, and routing on cluster nodes.
- Node Health Check Operator - Identifies unhealthy nodes.
- Node Maintenance Operator - Places nodes into maintenance mode.
- OpenShift Virtualization Operator - Runs virtual machines alongside containers on one platform.
- Cluster Observability Operator - Provides observability and monitoring capabilities for your OpenShift cluster.
- MetalLB Operator - Provides load balancer services for bare metal OpenShift clusters.
- NUMA Resources Operator - Provides NUMA-aware scheduling to improve workload performance on NUMA systems.
- OpenShift API for Data Protection (OADP) Operator - Enables the backup and restore of OpenShift Container Platform cluster resources and persistent volumes.
Prerequisites
- You are installing OpenShift Container Platform version 4.14 or later.
- CPU virtualization support (Intel VT or AMD-V) is enabled in the BIOS on all nodes.
- Each control plane (master) node has an additional 1024 MiB of memory and 3 CPU cores.
- Each compute (worker) node has an additional 1024 MiB of memory and 5 CPU cores.
- You have included the additional resources required to support the selected storage Operator.
- You are installing a cluster of three or more nodes. The Virtualization Operator bundle is not available on single-node OpenShift.
3.2.2. Installing the OpenShift AI Operator bundle
The OpenShift AI Operator bundle enables the training, serving, monitoring, and management of Artificial Intelligence (AI) and Machine Learning (ML) models and applications. It simplifies the deployment of AI and ML components on your OpenShift cluster.
The OpenShift AI Operator bundle prompts the Assisted Installer to install the following Operators together:
- AMD GPU Operator - Automates the management of AMD software components needed to provision and monitor Graphics Processing Units (GPUs).
- Authorino Operator - Provides a lightweight external authorization service for tailor-made Zero Trust API security.
- Kernel Module Management Operator - Manages kernel modules and associated device plugins.
- Local Storage Operator - Allows provisioning of persistent storage by using local volumes.
- Node Feature Discovery Operator - Manages the detection of hardware features and configuration by labeling nodes with hardware-specific information.
- NVIDIA GPU Operator - Automates the management of NVIDIA software components needed to provision and monitor GPUs.
- OpenShift AI Operator - Trains, serves, monitors and manages AI/ML models and applications.
- Red Hat OpenShift Data Foundation Operator - Provides persistent software-defined storage for hybrid applications.
- OpenShift Pipelines Operator - Provides a cloud-native continuous integration and delivery (CI/CD) solution for building pipelines using Tekton.
- OpenShift Serverless Operator - Deploys workflow applications based on the CNCF (Cloud Native Computing Foundation) Serverless Workflow specification.
- OpenShift Service Mesh Operator - Provides behavioral insight and operational control over a service mesh.
The OpenShift AI Operator bundle is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- The installation of the NVIDIA GPU, AMD GPU, and Kernel Module Management Operators depends on the Graphics Processing Unit (GPU) detected on your hosts following host discovery.