Chapter 4. Installing
4.1. Preparing your cluster for OpenShift Virtualization
Before you install OpenShift Virtualization, review this section to ensure that your cluster meets the requirements.
4.1.1. Compatible platforms
You can use the following platforms with OpenShift Virtualization:
- On-premise bare metal servers. See Planning a bare metal cluster for OpenShift Virtualization.
- Bare metal clusters installed on ARM64-based (arm64, also known as aarch64) systems.
- IBM Z® or IBM® LinuxONE (s390x architecture) systems where an OpenShift Container Platform cluster is installed in logical partitions (LPARs). See Preparing to install on IBM Z and IBM LinuxONE.
- Cloud platforms
OpenShift Virtualization is also compatible with a variety of public cloud platforms. Each cloud platform has specific storage provider options available. The following table outlines which platforms are fully supported (GA) and which are currently offered as Technology Preview features.
Important: Installing OpenShift Virtualization on certain cloud platforms is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
| Vendor | Status | Storage |
|---|---|---|
| Amazon Web Services (AWS) | GA | Elastic Block Store (EBS), Red Hat OpenShift Data Foundation (ODF), Portworx, FSx (NetApp) |
| Red Hat OpenShift Service on AWS (ROSA) | GA | EBS, Portworx, FSx (Q3), ODF |
| Oracle Cloud Infrastructure (OCI) | GA | OCI native storage |
| Azure Red Hat OpenShift (ARO) | GA | ODF |
| Google Cloud | Technology Preview | Google Cloud native storage |
For platform-specific networking information, see the networking overview.
Bare metal instances or servers offered by other cloud providers are not supported.
4.1.1.1. OpenShift Virtualization on AWS bare metal
You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare metal OpenShift Container Platform cluster.
OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters.
Before you set up your cluster, review the following summary of supported features and limitations:
- Installing
You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. You specify bare-metal instance types by editing the install-config.yaml file. For more information, see the OpenShift Container Platform documentation about installing on AWS.
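For reference, the following is a minimal sketch of the relevant compute section in install-config.yaml; the replica count and hyperthreading setting are illustrative assumptions:

  compute:
  - hyperthreading: Enabled
    name: worker
    platform:
      aws:
        type: c5n.metal   # bare-metal instance type required for OpenShift Virtualization
    replicas: 3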
- Accessing virtual machines (VMs)
- There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console. You can expose VMs by using a NodePort or LoadBalancer service.
Note: The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources.
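For example, the following command exposes SSH access to a VM through a LoadBalancer service; the VM and service names are placeholders:

$ virtctl expose vm <vm_name> --name <service_name> --type LoadBalancer --port 22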
- Networking
- You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks.
- Storage
You can use any storage solution that is certified by the storage vendor to work with the underlying platform.
Important: AWS bare metal, Red Hat OpenShift Service on AWS, and Red Hat OpenShift Service on AWS classic architecture clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.
Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table:
Table 4.1. EFS and EBS performance and functionality limitations

| Feature | EBS volume (gp2, gp3, io2) | EFS volume | Shared storage solutions |
|---|---|---|---|
| VM live migration | Not available (gp2, gp3); Available (io2) | Available | Available |
| Fast VM creation by using cloning | Available | Not available | Available |
| VM backup and restore by using snapshots | Available | Not available | Available |
Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots, to enable live migration, fast VM creation, and VM snapshot capabilities.
- Hosted control planes (HCPs)
- HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure.
4.1.1.2. ARM64 compatibility
Using OpenShift Virtualization on an OpenShift Container Platform cluster installed on an ARM64 system is generally available (GA).
Before using OpenShift Virtualization on an ARM64-based system, consider the following limitations:
- Operating system
- Only Linux-based guest operating systems are supported.
- All virtualization limitations for RHEL also apply to OpenShift Virtualization. For more information, see How virtualization on ARM64 differs from AMD64 and Intel 64 in the RHEL documentation.
- Live migration
- Live migration is not supported on ARM64-based OpenShift Container Platform clusters.
- Hotplug is not supported on ARM64-based clusters because it depends on live migration.
- VM creation
- RHEL 10 supports instance types and preferences, but not templates.
- RHEL 9 supports templates, instance types, and preferences.
4.1.1.3. IBM Z and IBM LinuxONE compatibility
You can use OpenShift Virtualization in an OpenShift Container Platform cluster that is installed in logical partitions (LPARs) on an IBM Z® or IBM® LinuxONE (s390x architecture) system.
Some features are not currently available on s390x architecture, while others require workarounds or procedural changes. These lists are subject to change.
Currently unavailable features
The following features are currently not available on s390x architecture:
- Memory hot plugging and hot unplugging
- Node Health Check Operator
- SR-IOV Operator
- PCI passthrough
- OpenShift Virtualization cluster checkup framework
- OpenShift Virtualization on a cluster installed in FIPS mode
- IPv6
- IBM® Storage Scale
- Hosted control planes for OpenShift Virtualization
- VM pages using HugePages
The following features are not applicable on s390x architecture:
- virtual Trusted Platform Module (vTPM) devices
- UEFI mode for VMs
- USB host passthrough
- Configuring virtual GPUs
- Creating and managing Windows VMs
- Hyper-V
Functionality differences
The following features are available for use on s390x architecture but function differently or require procedural changes:
- When deleting a virtual machine by using the web console, the grace period option is ignored.
- When configuring the default CPU model, the spec.defaultCPUModel value is "gen15b" for an IBM Z cluster.
- When configuring a downward metrics device, if you use a VM preference, the spec.preference.name value must be set to rhel.9.s390x or another available preference with the format *.s390x.
- When creating virtual machines from instance types, you are not allowed to set spec.domain.memory.maxGuest because memory hot plugging is not supported on IBM Z®.
- Prometheus queries for VM guests might return results that are inconsistent with x86.
4.1.2. Important considerations for any platform
Before you install OpenShift Virtualization on any platform, note the following caveats and considerations.
- Installation method considerations
- You can use any installation method, including user-provisioned, installer-provisioned, or Assisted Installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.
- Red Hat OpenShift Data Foundation
- If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
- IPv6
OpenShift Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
Important: Deploying OpenShift Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- FIPS mode
- If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.
4.1.3. Hardware and operating system requirements
Review the following hardware and operating system requirements for OpenShift Virtualization.
4.1.3.1. CPU requirements
- Supported by Red Hat Enterprise Linux (RHEL) 9. See the Red Hat Ecosystem Catalog for supported CPUs.
Note: If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines, as shown in the sketch at the end of this section.
See Configuring a required node affinity rule for details.
- Supports AMD64, Intel 64-bit (x86-64-v2), IBM Z® (s390x), or ARM64-based (arm64 or aarch64) architectures and their respective CPU extensions.
- Intel VT-x, AMD-V, or ARM virtualization extensions are enabled, or s390x virtualization support is enabled.
- NX (no execute) flag is enabled.
- If you use s390x architecture, the default CPU model is set to gen15b.
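The following is a minimal sketch of a required node affinity rule on a VirtualMachine that pins the VM to nodes exposing a particular CPU capability label; the label key shown is an example and must match the labels present on your worker nodes:

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: <vm_name>
  spec:
    template:
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: cpu-model.node.kubevirt.io/Cascadelake-Server  # example label; use a label that exists on your nodes
                  operator: In
                  values:
                  - "true"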
4.1.3.2. Operating system requirements
Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.
See About RHCOS for details.
Note: RHEL worker nodes are not supported.
4.1.3.3. Storage requirements
- Supported by OpenShift Container Platform. See Optimizing storage.
- You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. The purpose of this is to address the unique storage needs of VM workloads and offer optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks.
To mark a storage class as the default for virtualization workloads, set the annotation storageclass.kubevirt.io/is-default-virt-class to "true", as shown in the example after this list.
- If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class.
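For example, the following command adds the annotation to an existing storage class; the storage class name is a placeholder:

$ oc patch storageclass <storage_class_name> --type merge -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}'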
4.1.3.3.1. About volume and access modes for virtual machine disks
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog.
For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:
- ReadWriteMany (RWX) access mode is required for live migration.
- The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
You cannot live migrate virtual machines with the following configurations:
- Storage volume with ReadWriteOnce (RWO) access mode
- Passthrough features such as GPUs
Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots.
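A minimal sketch of setting this field on a VirtualMachine follows; the VM name is a placeholder:

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: <vm_name>
  spec:
    template:
      spec:
        evictionStrategy: None  # power down the VM on node drain instead of live migrating it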
4.1.4. Live migration requirements
- Shared storage with ReadWriteMany (RWX) access mode.
- Sufficient RAM and network bandwidth.
Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
The default number of migrations that can run in parallel in the cluster is 5.
- If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
- A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
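If you need to tune migration concurrency or bandwidth, the HyperConverged custom resource exposes a live migration configuration. The following is a minimal sketch; the values shown are illustrative, not recommendations:

  apiVersion: hco.kubevirt.io/v1beta1
  kind: HyperConverged
  metadata:
    name: kubevirt-hyperconverged
    namespace: openshift-cnv
  spec:
    liveMigrationConfig:
      parallelMigrationsPerCluster: 5       # cluster-wide limit on concurrent live migrations
      parallelOutboundMigrationsPerNode: 2  # per-node limit on outbound migrations
      bandwidthPerMigration: 64Mi           # optional bandwidth cap per migration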
4.1.5. Physical resource overhead requirements
OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster.
Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
4.1.5.1. Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
- Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
- Virtual machine memory overhead
Memory overhead per virtual machine ≈ (0.002 × requested memory) + 218 MiB + (8 MiB × number of vCPUs) + (16 MiB × number of graphics devices) + (additional memory overhead)
- 218 MiB is required for the processes that run in the virt-launcher pod.
- 8 MiB × (number of vCPUs) refers to the number of virtual CPUs requested by the virtual machine.
- 16 MiB × (number of graphics devices) refers to the number of virtual graphics cards requested by the virtual machine.
- Additional memory overhead:
  - If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
  - If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
  - If Trusted Platform Module (TPM) is enabled, add 53 MiB.
4.1.5.2. CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
- Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores
OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.
CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
- Virtual machine CPU overhead
- If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
4.1.5.3. Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
- Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
- Virtual machine storage overhead
- Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
- Example
- As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
4.1.6. Single-node OpenShift differences
You can install OpenShift Virtualization on single-node OpenShift.
However, be aware that single-node OpenShift does not support the following features:
- High availability
- Pod disruption
- Live migration
- Virtual machines or templates that have an eviction strategy configured
4.1.7. Object maximums
You must consider the tested object maximums when planning your cluster.
4.1.8. Cluster high-availability options
You can configure one of the following high-availability (HA) options for your cluster:
- Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks, as shown in the sketch after this list.
Note: In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes. Currently, IPI is not supported on IBM Z®.
- Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
Note: Fence Agents Remediation uses supported fencing agents to reset failed nodes faster than the Self Node Remediation Operator. This improves overall virtual machine high availability. For more information, see the OpenShift Virtualization - Fencing and VM High Availability Guide knowledgebase article.
- High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.
Note: Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
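A minimal sketch of a MachineHealthCheck resource for worker machines follows; the selector labels, timeout, and maxUnhealthy values are illustrative assumptions:

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineHealthCheck
  metadata:
    name: <health_check_name>
    namespace: openshift-machine-api
  spec:
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-machine-role: worker
    unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    maxUnhealthy: 40%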
4.2. Installing OpenShift Virtualization
Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.
If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for disconnected environments.
If you have limited internet connectivity, you can configure proxy support in OLM to access the software catalog.
4.2.1. Installing the OpenShift Virtualization Operator
Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line.
4.2.1.1. Installing the OpenShift Virtualization Operator by using the web console
You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console.
Prerequisites
- Install OpenShift Container Platform 4.21 on your cluster.
- Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Ecosystem → Software Catalog.
- In the Filter by keyword field, type Virtualization.
- Select the OpenShift Virtualization Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
- For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.
Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.
- For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.
Selecting the Manual approval strategy is not recommended, as it poses a high risk to cluster support and functionality. Only select Manual if you fully understand these risks and cannot use Automatic.
Warning: Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the openshift-cnv namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
4.2.1.2. Installing the OpenShift Virtualization Operator by using the command line
Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.
4.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI
Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.
To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.
Prerequisites
- Install OpenShift Container Platform 4.21 on your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a YAML file that contains the following manifest:
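A representative sketch of the combined manifest follows. The catalog source name and source namespace (redhat-operators in openshift-marketplace) are assumptions based on the default Red Hat catalog; adjust them if your cluster uses a different catalog:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: openshift-cnv
  ---
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: kubevirt-hyperconverged-group
    namespace: openshift-cnv
  spec:
    targetNamespaces:
    - openshift-cnv
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: hco-operatorhub
    namespace: openshift-cnv
  spec:
    source: redhat-operators        # assumed default Red Hat catalog source
    sourceNamespace: openshift-marketplace
    name: kubevirt-hyperconverged   # Operator package name
    channel: "stable"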
Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:
$ oc apply -f <filename>.yaml
Verification
You must verify that the subscription creation was successful before you can proceed with installing OpenShift Virtualization.
- Check that the ClusterServiceVersion (CSV) object was created successfully. Run the following command and verify the output:
$ oc get csv -n openshift-cnv
If the CSV was created successfully, the output shows an entry that contains a NAME value of kubevirt-hyperconverged-operator-*, a DISPLAY value of OpenShift Virtualization, and a PHASE value of Succeeded, as shown in the following example output:
NAME                                       DISPLAY                    VERSION   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v4.21.0   OpenShift Virtualization   4.21.0    kubevirt-hyperconverged-operator.v4.20.0   Succeeded
- Check that the HyperConverged custom resource (CR) has the correct version. Run the following command and verify the output:
$ oc get hco -n openshift-cnv kubevirt-hyperconverged -o json | jq .status.versions
Example output:
{ "name": "operator", "version": "4.21.0" }
- Verify the HyperConverged CR conditions. Run the following command and check the output:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq -r '.status.conditions[] | {type,status}'
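With a healthy deployment, the output typically resembles the following; the exact set of condition types can vary by release:

  {
    "type": "ReconcileComplete",
    "status": "True"
  }
  {
    "type": "Available",
    "status": "True"
  }
  {
    "type": "Progressing",
    "status": "False"
  }
  {
    "type": "Degraded",
    "status": "False"
  }
  {
    "type": "Upgradeable",
    "status": "True"
  }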
You can configure certificate rotation parameters in the YAML file.
4.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI
You can deploy the OpenShift Virtualization Operator by using the oc CLI.
Prerequisites
- Install the OpenShift CLI (oc).
- Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace.
- Log in as a user with cluster-admin privileges.
Procedure
Create a YAML file that contains the following manifest:
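A minimal sketch of the HyperConverged manifest follows; an empty spec accepts the default configuration, and the apiVersion shown is the commonly used one, which might differ in your release:

  apiVersion: hco.kubevirt.io/v1beta1
  kind: HyperConverged
  metadata:
    name: kubevirt-hyperconverged
    namespace: openshift-cnv
  spec: {}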
Deploy the OpenShift Virtualization Operator by running the following command:
$ oc apply -f <file_name>.yaml
Verification
- Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:
$ watch oc get csv -n openshift-cnv
The following output displays if deployment was successful:
NAME                                       DISPLAY                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v4.21.0   OpenShift Virtualization   4.21.0               Succeeded
4.2.2. Next steps
- As a cluster administrator, you can run a self validation checkup to verify that the environment is fully functional and self-sustained before you deploy production workloads.
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
4.3. Uninstalling OpenShift Virtualization
You uninstall OpenShift Virtualization by using the web console or the command-line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.
4.3.1. Uninstalling OpenShift Virtualization by using the web console
You uninstall OpenShift Virtualization by using the web console to perform the tasks described in the following sections.
You must first delete all virtual machines, and virtual machine instances.
You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
4.3.1.1. Deleting the HyperConverged custom resource
To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to the Ecosystem → Installed Operators page.
- Select the OpenShift Virtualization Operator.
- Click the OpenShift Virtualization Deployment tab.
- Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
- Click Delete in the confirmation window.
4.3.1.2. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to the OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Ecosystem → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed.
- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.3.1.3. Deleting a namespace using the web console
You can delete a namespace by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
4.3.1.4. Deleting OpenShift Virtualization custom resource definitions
You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.
Prerequisites
- You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → CustomResourceDefinitions.
- Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
- Click the Options menu beside each CRD and select Delete CustomResourceDefinition.
4.3.2. Uninstalling OpenShift Virtualization by using the CLI
You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).
Prerequisites
- You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
Procedure
- Delete the HyperConverged custom resource:
$ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
- Delete the OpenShift Virtualization Operator subscription:
$ oc delete subscription hco-operatorhub -n openshift-cnv
- Delete the OpenShift Virtualization ClusterServiceVersion resource:
$ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
- Delete the OpenShift Virtualization namespace:
$ oc delete namespace openshift-cnv
- List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:
$ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
- Delete the CRDs by running the oc delete crd command without the dry-run option:
$ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
4.4. Installing OpenShift Virtualization on IBM Cloud bare-metal nodes
Install OpenShift Virtualization on IBM Cloud bare-metal nodes by using the Assisted Installer. The cluster has 6 bare-metal nodes (3 control plane and 3 compute). An additional virtual machine is required for bootstrapping and to act as a Samba server, DHCP server, network gateway, and load balancer.
4.4.1. Prerequisites
- An account in IBM Cloud with permissions to order and operate bare-metal nodes.
- An IBM Cloud SSL VPN user, to access the SuperMicro IPMI interface of a node.
- Install the OpenShift CLI (oc).
4.4.2. Configuring IBM Cloud for the new cluster
Configure and provision the IBM Cloud environment to establish the operational framework and nodes for your OpenShift Virtualization cluster.
Procedure
- Create a new virtual server instance in IBM Cloud at Virtual Server for Classic to be the Bastion server. This instance is used to run the installation and provide environment services.
Change the default properties of the new virtual server instance to the following values. Use the provided defaults for all other values.
- Type of virtual server: Public
- Operating system: CentOS
- Your public SSH RSA key
- Note the private VLAN and subnet the virtual server instance is assigned to at VLANs.
Provision 6 bare-metal nodes in IBM Cloud at Bare metal server provision. Use the following values when provisioning the nodes:
- Domain: A subdomain you can add records to.
- Quantity: 6
- Location: The same location as the virtual server instance.
- Storage disks: RAID 1
- Network Interface: Private
- Private VLAN: The same as noted for the virtual server instance.
- Confirm all nodes are provisioned and ready at Device list.
- Rename the control plane nodes to control0-<domain-name>, control1-<domain-name>, and control2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.
- Rename the compute nodes to compute0-<domain-name>, compute1-<domain-name>, and compute2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.
- Configure the Bastion virtual server instance as the default network gateway.
- Configure DHCP by editing /etc/dhcp/dhcpd.conf on the Bastion virtual server instance. For example:
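A sketch of the dhcpd.conf contents follows, built from the placeholders described below; adapt the option names and values to your subnet:

  option domain-name "<dns_domain_name>";
  option domain-name-servers <dns_ip_addresses>;
  default-lease-time <default_lease_value>;
  max-lease-time <max_lease_value>;
  authoritative;

  subnet <subnet_ip_address> netmask <subnet_mask> {
    option routers <default_gateway_ip_address>;
    option subnet-mask <subnet_mask>;
    option broadcast-address <broad_ip_address>;
    # optionally add a "range" statement to hand out a block of addresses
  }

where: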
- <dns_domain_name>
- The default domain name for DNS clients.
- <dns_ip_addresses>
- A comma-separated list of DNS server IP addresses.
- <default_lease_value>
- The default number of seconds a client keeps an assigned address.
- <max_lease_value>
- The maximum number of seconds a client keeps an assigned address.
- <subnet_ip_address>
- The start of the subnet IP address range.
- <subnet_mask>
- The subnet mask of the subnet IP address range.
- <broad_ip_address>
- The broadcast IP address to use when sending a message to every device on the subnet.
- <default_gateway_ip_address>
- The default gateway of the subnet.
- Restart DHCP on the Bastion virtual server instance:
$ systemctl restart dhcpd
- Enable IP forwarding on the Bastion virtual server instance:
$ sysctl -w net.ipv4.ip_forward=1
- Verify that IP forwarding is enabled on the Bastion virtual server instance:
$ sysctl -p /etc/sysctl.conf
- Restart the network service on the Bastion virtual server instance:
$ service network restart
- Verify whether firewalld is enabled on the Bastion virtual server instance:
$ firewall-cmd --state
- If the firewalld service is not enabled on the Bastion virtual server instance, enable the service:
$ systemctl enable firewalld
- Start the firewalld service:
$ systemctl start firewalld
- Add network address translation (NAT) rules to the firewalld service:
$ firewall-cmd --add-masquerade --permanent
- Reload the firewalld service:
$ firewall-cmd --reload
4.4.3. Initializing the new cluster configuration
Initialize the new cluster configuration using the OpenShift Virtualization Assisted Installer service and Samba on the Bastion virtual server instance.
Procedure
- Log in to the Assisted Installer service.
Create a new cluster. The new cluster has the following properties:
- Cluster name: The name used to identify the cluster under the base domain.
- Base domain: The domain used to provision the bare-metal nodes.
- Click Next.
- Click Generate Discovery ISO.
- Provide your public SSH RSA key when prompted.
- Copy and save the generated wget command for the ISO file. This will be used later to connect to the cluster nodes.
- Install the Samba server on the Bastion virtual server instance:
$ dnf install samba
- Enable the Samba server on the Bastion virtual server instance:
$ systemctl enable smb --now
- Configure NAT rules for the Samba server:
$ firewall-cmd --permanent --zone=FedoraWorkstation --add-service=samba
$ firewall-cmd --reload
- Configure a root user password:
$ sudo smbpasswd -a root
- Create a share directory:
$ mkdir <share_directory>
Replace <share_directory> with the share directory name.
- Navigate to the share directory and download the Assisted Installer ISO file using the generated wget command.
4.4.4. Configuring cluster networking and access
Configure networking and access to allow for remote management of the cluster.
Procedure
- Edit /etc/samba/smb.conf to use the following configuration.
Note: For a more detailed example of the smb.conf file, see the smb.conf.example file in the same directory.
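A minimal sketch of a share definition for the directory that holds the Assisted Installer ISO follows; the share name, path, and global settings are assumptions to adapt to your environment:

  [global]
  workgroup = SAMBA
  security = user

  [share]
  comment = Assisted Installer ISO share
  path = /<share_directory>
  browseable = yes
  read only = yes
  guest ok = no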
smb.conffile, see thesmb.conf.examplefile in the same directory.- Save the file.
- Verify the new Samba configuration:
$ testparm
- Restart the Samba service:
$ systemctl restart smb
- Verify that the Samba service is running and active:
$ systemctl status smb
- Configure SSL VPN access to IBM Cloud:
- Perform the procedure at Getting started with IBM Cloud Virtual Private Networking in the IBM Cloud documentation.
- Download and install the MotionPro SSL VPN client.
- Connect to the appropriate IBM Cloud endpoint:
$ sudo MotionPro --host $<vpn_endpoint> --user $<vpn_username> --passwd $<vpn_password>
where:
- <vpn_endpoint>
- The appropriate SSL VPN endpoint.
- <vpn_username>
- The SSL VPN user name you configured.
- <vpn_password>
The SSL VPN password you configured.
Note: Connecting to the IBM Cloud SSL VPN will disconnect you from any open VPN connections.
4.4.5. Completing the cluster configuration
Complete the cluster configuration by installing software on the control plane and compute nodes and configuring DNS for external access.
Procedure
For each bare-metal server, perform the following tasks:
Access the server using the IPMI console.
Note: The IP address and credentials for IPMI console access are available in the Remote management section for each server.
Mount the Assisted Installer ISO file with the following attributes:
- Virtual Media: CD-ROM Image
- Share host: The private IP address of the Bastion server.
- Path to image: The location of the Assisted Installer ISO file.
- User: root
- Password: The root user password you configured.
- Click Save and Mount.
- Verify the ISO mounted successfully.
- Restart the server by selecting Remote Control → Power Control → Reset Server → Perform Action.
- Return to the Assisted Installer service.
- Select the Install OpenShift Virtualization and Install OpenShift Data Foundation checkboxes in the Assisted Installer options.
Select a role for each host.
Note: The cluster consists of 3 control plane and 3 compute nodes.
- Wait for the Assisted Installer interface to indicate each node is ready.
- Click Next.
- Select Cluster Managed Network.
- Select the API VIP and Ingress VIP checkboxes to obtain them from DHCP or leave them unchecked to enter static values.
- Click Install.
For each bare-metal server, perform the following tasks:
Access the server using the IPMI console.
Note: The IP address and credentials for IPMI console access are available in the Remote management section for each server.
- Select Virtual Media → CD-ROM Image.
- Click Unmount.
- Select Remote Control → Power Control → Reset Server → Perform Action to restart the server.
- Locate the Cluster Credentials section of the installation summary.
Perform the following tasks in the Cluster Credentials section:
- Download the kubeconfig file.
- Save the kubeadmin password.
- Install haproxy on the Bastion virtual server instance.
- Configure haproxy for your environment. The following is an example configuration:
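A sketch of the haproxy configuration follows, using the placeholders described below; the frontend and backend names, mode, timeouts, and balance settings are assumptions:

  defaults
      mode tcp
      timeout connect 10s
      timeout client  1m
      timeout server  1m

  frontend api
      bind <api_ip_address>:<api_port>
      default_backend controlplaneapi

  frontend apiinternal
      bind <apiinternal_ip_address>:<apiinternal_port>
      default_backend controlplaneapiinternal

  frontend secure
      bind <frontend_secure_ip_address>:<frontend_secure_port>
      default_backend secure

  frontend insecure
      bind <frontend_insecure_ip_address>:<frontend_insecure_port>
      default_backend insecure

  backend controlplaneapi
      balance roundrobin
      server controlplane0 <controlplaneapi_ip_address>:<controlplaneapi_port> check

  backend controlplaneapiinternal
      balance roundrobin
      server controlplane0 <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port> check

  backend secure
      balance roundrobin
      server worker0 <backend_secure_ip_address>:<backend_secure_port> check

  backend insecure
      balance roundrobin
      server worker0 <backend_insecure_ip_address>:<backend_insecure_port> check

where: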
- <api_ip_address>:<api_port>
- The front end IP address and port used by the Kubernetes API server.
- <apiinternal_ip_address>:<apiinternal_port>
- The front end IP address and port used for internal cluster management.
- <frontend_secure_ip_address>:<frontend_secure_port>
- The front end IP address and port used for HTTPS traffic for hosted applications.
- <frontend_insecure_ip_address>:<frontend_insecure_port>
- The front end IP address and port used for HTTP traffic for hosted applications.
- <controlplaneapi_ip_address>:<controlplaneapi_port>
- The back end IP address and port used by the Kubernetes API server.
- <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port>
- The back end IP address and port used for internal cluster management.
- <backend_secure_ip_address>:<backend_secure_port>
- The back end IP address and port used for HTTPS traffic for hosted applications.
- <backend_insecure_ip_address>:<backend_insecure_port>
The back end IP address and port used for HTTP traffic for hosted applications.
Note: Replace the example values with values applicable to your network configuration.
- Save the haproxy configuration.
- Configure two DNS Address records (A records) for the subdomain that are externally available over the Internet:
<bastion_public_ip_address> api.<cluster_name>.<cluster_domain>
<bastion_public_ip_address> *.apps.<cluster_name>.<cluster_domain>
where:
- <bastion_public_ip_address>
- The externally available IP address of the Bastion virtual server instance.
- <cluster_name>
- The name assigned to the cluster.
- <cluster_domain>
- The domain assigned to the cluster.
Verification
Perform the following tasks to verify cluster access using command line access:
- Set your environment with the kubeconfig file:
$ export KUBECONFIG=<kubeconfig_file_path>
where:
- <kubeconfig_file_path>
- The path to the downloaded kubeconfig file.
- Check cluster node status:
$ oc get nodes
Note: The command output should show all nodes as Ready in the STATUS column, and the ROLES column should show that control plane and compute nodes are present.
- Check the cluster version:
$ oc get clusterversion
Note: The command output should say Condition: Available.
Perform the following tasks to verify cluster access using the web console:
Paste the access URL provided by Assisted Installer into your web browser.
Note: By default, clusters use self-signed certificates. This may cause your browser to display a message that says Connection not private or a similar warning. You can close this warning and continue.
- Navigate to the URL.
- Log in to the cluster with the username kubeadmin and the kubeadmin password provided in the Cluster Credentials section.