Chapter 3. Installing
3.1. Preparing your cluster for OpenShift Virtualization
Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.
3.1.1. Supported platforms
You can use the following platforms with OpenShift Virtualization:
- Amazon Web Services bare metal instances.
3.1.1.1. OpenShift Virtualization on Red Hat OpenShift Service on AWS
You can run OpenShift Virtualization on a Red Hat OpenShift Service on AWS (ROSA) Classic cluster.
Before you set up your cluster, review the following summary of supported features and limitations:
- Installing
You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. For more information, see the Red Hat OpenShift Service on AWS documentation about installing on AWS.
- Accessing virtual machines (VMs)
There is no change to how you access VMs by using the virtctl CLI tool or the Red Hat OpenShift Service on AWS web console. You can expose VMs by using a NodePort or LoadBalancer service.
The load balancer approach is preferable because Red Hat OpenShift Service on AWS automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, Red Hat OpenShift Service on AWS removes the load balancer and its associated resources. A sketch of a LoadBalancer service appears after the storage table at the end of this section.
- Networking
- If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks.
- Storage
You can use any storage solution that is certified by the storage vendor to work with the underlying platform.
Important: AWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.
Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table:
Table 3.1. EFS and EBS performance and functionality limitations

| Feature | EBS gp2 | EBS gp3 | EBS io2 | EFS volume | Shared storage solutions |
| --- | --- | --- | --- | --- | --- |
| VM live migration | Not available | Not available | Available | Available | Available |
| Fast VM creation by using cloning | Available | Available | Available | Not available | Available |
| VM backup and restore by using snapshots | Available | Available | Available | Not available | Available |
Consider using CSI storage that supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshot capabilities.
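As noted in the summary of accessing virtual machines earlier in this section, the preferred way to expose a VM is a LoadBalancer service. The following is a minimal sketch; the service and VM names, the namespace, the port, and the vm.kubevirt.io/name selector label are illustrative assumptions, not values mandated by OpenShift Virtualization:

apiVersion: v1
kind: Service
metadata:
  name: example-vm-ssh              # illustrative name
  namespace: example-namespace      # illustrative namespace
spec:
  type: LoadBalancer
  selector:
    vm.kubevirt.io/name: example-vm # assumes this label is present on the VM pod
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22

Alternatively, the virtctl expose command can create a similar service, for example: virtctl expose vm example-vm --name example-vm-ssh --type LoadBalancer --port 22.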
3.1.2. Hardware and operating system requirements
Review the following hardware and operating system requirements for OpenShift Virtualization.
3.1.2.1. CPU requirements
- Supported by Red Hat Enterprise Linux (RHEL) 9. See the Red Hat Ecosystem Catalog for supported CPUs.
Note: If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines. See Configuring a required node affinity rule for details.
- Support for AMD and Intel 64-bit architectures (x86-64-v2).
- Support for Intel 64 or AMD64 CPU extensions.
- Intel VT or AMD-V hardware virtualization extensions enabled.
- NX (no execute) flag enabled.
3.1.2.2. Operating system requirements
- Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.
3.1.2.3. Storage requirements
- Supported by Red Hat OpenShift Service on AWS.
- If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class.
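For example, a minimal VolumeSnapshotClass sketch for a CSI provisioner might look like the following; the class name is illustrative, and the driver value must match the CSI driver used by your default storage class:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass   # illustrative name
driver: ebs.csi.aws.com     # must match the CSI driver of your default storage class
deletionPolicy: Delete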
3.1.2.3.1. About volume and access modes for virtual machine disks
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:
- ReadWriteMany (RWX) access mode is required for live migration.
- The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.
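For example, assuming a storage class named example-rwx-block-sc that supports RWX block volumes (both names here are illustrative), a blank DataVolume requesting this combination might look like the following sketch:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk                     # illustrative name
spec:
  source:
    blank: {}                               # create an empty disk image
  storage:
    storageClassName: example-rwx-block-sc  # illustrative; must support RWX and Block
    accessModes:
      - ReadWriteMany
    volumeMode: Block
    resources:
      requests:
        storage: 30Gi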
You cannot live migrate virtual machines with the following configurations:
- Storage volume with ReadWriteOnce (RWO) access mode
- Passthrough features such as GPUs
Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots.
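For example, a minimal VirtualMachine sketch with this setting might look like the following; the VM name and memory size are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm              # illustrative name
spec:
  runStrategy: Always
  template:
    spec:
      evictionStrategy: None    # power the VM down instead of live migrating during node drain
      domain:
        memory:
          guest: 2Gi
        devices: {}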
3.1.3. Live migration requirements
- Shared storage with ReadWriteMany (RWX) access mode.
- Sufficient RAM and network bandwidth.
Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
Required spare memory ≈ (maximum number of nodes that can drain in parallel) × (highest total VM memory request allocations across nodes)
For example, if at most 2 nodes can drain in parallel and the node with the highest total VM memory requests allocates 64 GiB, you need approximately 2 × 64 GiB = 128 GiB of spare memory request capacity.
The default number of migrations that can run in parallel in the cluster is 5.
- If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
- A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
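Both the number of parallel migrations and a dedicated migration network can be configured in the HyperConverged custom resource. The following is a sketch, assuming a Multus NetworkAttachmentDefinition named example-migration-network already exists in the openshift-cnv namespace; the field values shown are illustrative:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5        # cluster-wide default shown for illustration
    parallelOutboundMigrationsPerNode: 2
    network: example-migration-network     # assumed NetworkAttachmentDefinition name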
3.1.4. Physical resource overhead requirements
OpenShift Virtualization is an add-on to Red Hat OpenShift Service on AWS and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the Red Hat OpenShift Service on AWS requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
Virtual machine memory overhead
Memory overhead per virtual machine ≈ (1.002 × requested memory)
                                    + 218 MiB [1]
                                    + 8 MiB × (number of vCPUs) [2]
                                    + 16 MiB × (number of graphics devices) [3]
                                    + (additional memory overhead) [4]

[1] Required for the processes that run in the virt-launcher pod.
[2] Number of virtual CPUs requested by the virtual machine.
[3] Number of virtual graphics cards requested by the virtual machine.
[4] Additional memory overhead:
- If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB of additional memory overhead for each device.
- If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
- If Trusted Platform Module (TPM) is enabled, add 53 MiB.
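As a worked example, consider a virtual machine that requests 4 GiB (4096 MiB) of memory and 2 vCPUs, with 1 graphics device and no SR-IOV, GPU, SEV, or TPM additions:

Memory overhead ≈ (1.002 × 4096 MiB) + 218 MiB + (8 MiB × 2) + (16 MiB × 1)
                ≈ 4104.2 MiB + 218 MiB + 16 MiB + 16 MiB
                ≈ 4354 MiB, or about 4.25 GiB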
CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores
OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.
CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
3.2. Installing OpenShift Virtualization
Install OpenShift Virtualization to add virtualization functionality to your Red Hat OpenShift Service on AWS cluster.
3.2.1. Installing the OpenShift Virtualization Operator
Install the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console or the command line.
3.2.1.1. Installing the OpenShift Virtualization Operator by using the web console
You can deploy the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- Install Red Hat OpenShift Service on AWS 4 on your cluster.
- Log in to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
- Create a machine pool based on a bare metal compute node instance type. For more information, see "Creating a machine pool" in the Additional resources of this section. An example command appears after this list.
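For illustration only, a bare metal machine pool can be created with the rosa CLI along the following lines; the pool name and replica count are assumptions for this sketch:

$ rosa create machinepool --cluster=<cluster_name> --name=virt-workers --replicas=2 --instance-type=c5n.metal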
Procedure
- From the Administrator perspective, click Operators → OperatorHub.
- In the Filter by keyword field, type Virtualization.
- Select the OpenShift Virtualization Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your Red Hat OpenShift Service on AWS version.
- For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.
Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.
- For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.
While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.
Warning: Because OpenShift Virtualization is only supported when used with the corresponding Red Hat OpenShift Service on AWS version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the openshift-cnv namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
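Alternatively, you can check the same pods from the command line, for example:

$ oc get pods -n openshift-cnv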
Additional resources
3.2.1.2. Installing the OpenShift Virtualization Operator by using the command line
Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.
3.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI
Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.
To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.
Prerequisites
- Install Red Hat OpenShift Service on AWS 4 on your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:
$ oc apply -f <file_name>.yaml
You can configure certificate rotation parameters in the YAML file.
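A single manifest along the following lines creates all three objects; the OperatorGroup and Subscription names are illustrative, while the openshift-cnv namespace, the redhat-operators catalog source, and the stable channel follow the values described in this section:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group   # illustrative name
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub                 # illustrative name
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"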
3.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI
You can deploy the OpenShift Virtualization Operator by using the oc CLI.
Prerequisites
- Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace.
- Log in as a user with cluster-admin privileges.
- Create a machine pool based on a bare metal compute node instance type.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
Deploy the OpenShift Virtualization Operator by running the following command:
$ oc apply -f <file_name>.yaml
Verification
Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:
$ watch oc get csv -n openshift-cnv
The following output displays if deployment was successful:
Example output
NAME                                       DISPLAY                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v4.17.0   OpenShift Virtualization   4.17.0               Succeeded
3.2.2. Next steps
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
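As a sketch, assuming a node-local path of /var/hpvolumes (the pool name and path are illustrative), the HostPathProvisioner custom resource might look like the following:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
    - name: example-local-pool   # illustrative name
      path: /var/hpvolumes       # illustrative node-local path
  workload:
    nodeSelector:
      kubernetes.io/os: linux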
3.3. Uninstalling OpenShift Virtualization
You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.
3.3.1. Uninstalling OpenShift Virtualization by using the web console
You uninstall OpenShift Virtualization by using the web console to perform the following tasks:
- Delete the HyperConverged custom resource.
- Delete the OpenShift Virtualization Operator.
- Delete the openshift-cnv namespace.
- Delete the OpenShift Virtualization custom resource definitions.
You must first delete all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
3.3.1.1. Deleting the HyperConverged custom resource
To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Select the OpenShift Virtualization Operator.
- Click the OpenShift Virtualization Deployment tab.
- Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
- Click Delete in the confirmation window.
3.3.1.2. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster web console using an account with dedicated-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
An Uninstall Operator? dialog box is displayed.
Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
3.3.1.3. Deleting a namespace using the web console
You can delete a namespace by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
3.3.1.4. Deleting OpenShift Virtualization custom resource definitions
You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → CustomResourceDefinitions.
- Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
- Click the Options menu beside each CRD and select Delete CustomResourceDefinition.
3.3.2. Uninstalling OpenShift Virtualization by using the CLI
You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
Procedure
Delete the HyperConverged custom resource:
$ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
Delete the OpenShift Virtualization Operator subscription:
$ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
Delete the OpenShift Virtualization ClusterServiceVersion resource:
$ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Delete the OpenShift Virtualization namespace:
$ oc delete namespace openshift-cnv
List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the --dry-run=client option:
$ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Example output
customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)
Delete the CRDs by running the oc delete crd command without the --dry-run option:
$ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv