Chapter 4. Installing


4.1. Preparing your cluster for OpenShift Virtualization

Before you install OpenShift Virtualization, review this section to ensure that your cluster meets the requirements.

4.1.1. Compatible platforms

You can use the following platforms with OpenShift Virtualization:

Cloud platforms

OpenShift Virtualization is compatible with a variety of public cloud platforms. Each cloud platform offers specific storage provider options. The following table outlines which platforms are fully supported (GA) and which are currently offered as Technology Preview features.

Important

Installing OpenShift Virtualization on certain cloud platforms is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Vendor | Status | Storage
Amazon Web Services (AWS) | GA | Elastic Block Store (EBS), Red Hat OpenShift Data Foundation (ODF), Portworx, FSx (NetApp)
Red Hat OpenShift Service on AWS (ROSA) | GA | EBS, Portworx, FSx (Q3), ODF
Oracle Cloud Infrastructure (OCI) | GA | OCI native storage
Azure Red Hat OpenShift (ARO) | GA | ODF
Google Cloud | Technology Preview | Google Cloud native storage

Tip

For platform-specific networking information, see the networking overview.

Bare metal instances or servers offered by other cloud providers are not supported.

4.1.1.1. OpenShift Virtualization on AWS bare metal

You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare metal OpenShift Container Platform cluster.

Note

OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters.

Before you set up your cluster, review the following summary of supported features and limitations:

Installing
  • You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. You specify bare-metal instance types by editing the install-config.yaml file.

    For more information, see the OpenShift Container Platform documentation about installing on AWS.
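
    For example, the compute machine pool stanza in install-config.yaml might look similar to the following partial sketch; the replica count and instance type shown here are illustrative values:

    compute:
    - name: worker
      replicas: 3
      platform:
        aws:
          type: c5n.metal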

Accessing virtual machines (VMs)
  • There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console.
  • You can expose VMs by using a NodePort or LoadBalancer service.

    Note

    The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources.
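
    For example, you might expose SSH on a VM through a load balancer with a command similar to the following; the VM name, service name, and port are placeholder values:

    $ virtctl expose virtualmachine example-vm --name example-vm-ssh --type LoadBalancer --port 22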

Networking
  • You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks.
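
    If you use OVN-Kubernetes secondary overlay networks, a flat layer 2 network can be defined with a NetworkAttachmentDefinition similar to the following sketch; the network name and namespace are placeholders:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: l2-network
      namespace: my-namespace
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "l2-network",
          "type": "ovn-k8s-cni-overlay",
          "topology": "layer2",
          "netAttachDefName": "my-namespace/l2-network"
        }
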
Storage
  • You can use any storage solution that is certified by the storage vendor to work with the underlying platform.

    Important

    AWS bare metal, Red Hat OpenShift Service on AWS, and Red Hat OpenShift Service on AWS classic architecture clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.

  • Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table:

    Table 4.1. EFS and EBS performance and functionality limitations

    Feature | EBS volume (gp2 / gp3 / io2) | EFS volume | Shared storage solutions
    VM live migration | Not available / Not available / Available | Available | Available
    Fast VM creation by using cloning | Available | Not available | Available
    VM backup and restore by using snapshots | Available | Not available | Available

    Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshots capabilities.

Hosted control planes (HCPs)
  • HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure.

4.1.1.2. ARM64 compatibility

Using OpenShift Virtualization on an OpenShift Container Platform cluster installed on an ARM64 system is generally available (GA).

Before using OpenShift Virtualization on an ARM64-based system, consider the following limitations:

Operating system
Live migration
  • Live migration is not supported on ARM64-based OpenShift Container Platform clusters.
  • Hotplug is not supported on ARM64-based clusters because it depends on live migration.
VM creation
  • RHEL 10 supports instance types and preferences, but not templates.
  • RHEL 9 supports templates, instance types, and preferences.

4.1.1.3. IBM Z and IBM LinuxONE compatibility

You can use OpenShift Virtualization in an OpenShift Container Platform cluster that is installed in logical partitions (LPARs) on an IBM Z® or IBM® LinuxONE (s390x architecture) system.

Some features are not currently available on s390x architecture, while others require workarounds or procedural changes. These lists are subject to change.

Currently unavailable features

The following features are currently not available on s390x architecture:

  • Memory hot plugging and hot unplugging
  • Node Health Check Operator
  • SR-IOV Operator
  • PCI passthrough
  • OpenShift Virtualization cluster checkup framework
  • OpenShift Virtualization on a cluster installed in FIPS mode
  • IPv6
  • IBM® Storage Scale
  • Hosted control planes for OpenShift Virtualization
  • VM pages using HugePages

The following features are not applicable on s390x architecture:

  • virtual Trusted Platform Module (vTPM) devices
  • UEFI mode for VMs
  • USB host passthrough
  • Configuring virtual GPUs
  • Creating and managing Windows VMs
  • Hyper-V
Functionality differences

The following features are available for use on s390x architecture but function differently or require procedural changes:

4.1.2. Important considerations for any platform

Before you install OpenShift Virtualization on any platform, note the following caveats and considerations.

Installation method considerations
You can use any installation method, including user-provisioned, installer-provisioned, or Assisted Installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.
Red Hat OpenShift Data Foundation
If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
IPv6

OpenShift Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.

Important

Deploying OpenShift Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

FIPS mode
If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.

4.1.3. Hardware and operating system requirements

Review the following hardware and operating system requirements for OpenShift Virtualization.

4.1.3.1. CPU requirements

  • Supported by Red Hat Enterprise Linux (RHEL) 9.

    See Red Hat Ecosystem Catalog for supported CPUs.

    Note

    If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines.

    See Configuring a required node affinity rule for details.

  • Supports AMD64, Intel 64-bit (x86-64-v2), IBM Z® (s390x), or ARM64-based (arm64 or aarch64) architectures and their respective CPU extensions.
  • Intel VT-x, AMD-V, or ARM virtualization extensions are enabled, or s390x virtualization support is enabled.
  • NX (no execute) flag is enabled.
  • If you use s390x architecture, the default CPU model is set to gen15b.

4.1.3.2. Operating system requirements

  • Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.

    See About RHCOS for details.

    Note

    RHEL worker nodes are not supported.

4.1.3.3. Storage requirements

  • Supported by OpenShift Container Platform. See Optimizing storage.
  • You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. A default storage class addresses the unique storage needs of VM workloads and provides optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks.
Note

To mark a storage class as the default for virtualization workloads, set the annotation storageclass.kubevirt.io/is-default-virt-class to "true".
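
For example, you might set the annotation on an existing storage class with a command similar to the following; replace <storage_class_name> with the name of your storage class:

$ oc patch storageclass <storage_class_name> --type merge -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}'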

  • If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class.
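
    A minimal VolumeSnapshotClass might look like the following sketch; the class name is a placeholder, and the driver value must match the CSI provisioner of your default storage class:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: example-snapclass
    driver: example.csi.vendor.com
    deletionPolicy: Delete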

If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.

For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog.

For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:

  • ReadWriteMany (RWX) access mode is required for live migration.
  • The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

    For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
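
    As an illustration, a DataVolume that uses the storage API and explicitly requests RWX access and Block volume mode might look like the following sketch; the name, size, and storage class are placeholders:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: example-dv
    spec:
      storage:
        storageClassName: example-rwx-block-class
        accessModes:
        - ReadWriteMany
        volumeMode: Block
        resources:
          requests:
            storage: 30Gi
      source:
        blank: {}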

Important

You cannot live migrate virtual machines with the following configurations:

  • Storage volume with ReadWriteOnce (RWO) access mode
  • Passthrough features such as GPUs

Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots.
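
The following is a minimal sketch of where the field is set in a VirtualMachine manifest; the VM name and memory value are placeholders, and disks, networks, and passthrough devices are omitted:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-gpu-vm
spec:
  runStrategy: Always
  template:
    spec:
      evictionStrategy: None
      domain:
        memory:
          guest: 4Gi
        devices: {}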

4.1.4. Live migration requirements

  • Shared storage with ReadWriteMany (RWX) access mode.
  • Sufficient RAM and network bandwidth.

    Note

    You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

    (Maximum number of nodes that can drain in parallel) × (Highest total VM memory request allocations across nodes)

    The default number of migrations that can run in parallel in the cluster is 5.

  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
Note

A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
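
For example, applying the spare memory calculation above: if at most two nodes can drain in parallel and the node with the highest total VM memory requests carries 200 GiB of requests, plan for approximately 2 × 200 GiB = 400 GiB of spare memory request capacity in the cluster.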

4.1.5. Physical resource overhead requirements

OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster.

Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.

Important

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.

4.1.5.1. Memory overhead

Calculate the memory overhead values for OpenShift Virtualization by using the equations below.

Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB

Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.

Virtual machine memory overhead
Memory overhead per virtual machine ≈ (0.002 × requested memory) \
              + 218 MiB \
              + 8 MiB × (number of vCPUs) \
              + 16 MiB × (number of graphics devices) \
              + (additional memory overhead)
  • 218 MiB is required for the processes that run in the virt-launcher pod.
  • 8 MiB × (number of vCPUs) refers to the number of virtual CPUs requested by the virtual machine.
  • 16 MiB × (number of graphics devices) refers to the number of virtual graphics cards requested by the virtual machine.
  • Additional memory overhead:

    • If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
    • If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
    • If Trusted Platform Module (TPM) is enabled, add 53 MiB.
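
For example, applying this equation to a virtual machine that requests 8 GiB (8192 MiB) of memory, uses 4 vCPUs and one graphics device, and has no additional memory overhead gives approximately (0.002 × 8192 MiB) + 218 MiB + (8 MiB × 4) + (16 MiB × 1) ≈ 282 MiB of memory overhead.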

4.1.5.2. CPU overhead

Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.

4.1.5.3. Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.

Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.

Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

4.1.6. Single-node OpenShift differences

You can install OpenShift Virtualization on single-node OpenShift.

However, be aware that single-node OpenShift does not support the following features:

  • High availability
  • Pod disruption
  • Live migration
  • Virtual machines or templates that have an eviction strategy configured

4.1.7. Object maximums

You must consider the following tested object maximums when planning your cluster:

4.1.8. Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

  • Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.

    Note

    In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes.

    Currently, IPI is not supported on IBM Z®.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.

    Note

    Fence Agents Remediation uses supported fencing agents to reset failed nodes faster than the Self Node Remediation Operator. This improves overall virtual machine high availability. For more information, see the OpenShift Virtualization - Fencing and VM High Availability Guide knowledgebase article.

  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Note

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.

4.2. Installing OpenShift Virtualization

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.

Important

If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for disconnected environments.

If you have limited internet connectivity, you can configure proxy support in OLM to access the software catalog.

4.2.1. Installing the OpenShift Virtualization Operator

Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line.

You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console.

Prerequisites

  • Install OpenShift Container Platform 4.21 on your cluster.
  • Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.

Procedure

  1. From the Administrator perspective, click Ecosystem → Software Catalog.
  2. In the Filter by keyword field, type Virtualization.
  3. Select the OpenShift Virtualization Operator tile with the Red Hat source label.
  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
    2. For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.

      Warning

      Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.

    3. For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

      Selecting the Manual approval strategy is not recommended, as it poses a high risk to cluster support and functionality. Only select Manual if you fully understand these risks and cannot use Automatic.

      Warning

      Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.

  6. Click Install to make the Operator available to the openshift-cnv namespace.
  7. When the Operator installs successfully, click Create HyperConverged.
  8. Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
  9. Click Create to launch OpenShift Virtualization.

Verification

  • Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.

To install by using the command line, subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.

To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.

Prerequisites

  • Install OpenShift Container Platform 4.21 on your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
      labels:
        openshift.io/cluster-monitoring: "true"
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.21.0
      channel: "stable"

    Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.

  2. Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

    $ oc apply -f <filename>.yaml

Verification

You must verify that the subscription creation was successful before you can proceed with installing OpenShift Virtualization.

  1. Check that the ClusterServiceVersion (CSV) object was created successfully. Run the following command and verify the output:

    $ oc get csv -n openshift-cnv

    If the CSV was created successfully, the output shows an entry that contains a NAME value of kubevirt-hyperconverged-operator-*, a DISPLAY value of OpenShift Virtualization, and a PHASE value of Succeeded, as shown in the following example output:

    Example output:

    NAME                                       DISPLAY                    VERSION   REPLACES                                   PHASE
    kubevirt-hyperconverged-operator.v4.21.0   OpenShift Virtualization   4.21.0    kubevirt-hyperconverged-operator.v4.20.0   Succeeded
  2. Check that the HyperConverged custom resource (CR) has the correct version. Run the following command and verify the output:

    $ oc get hco -n openshift-cnv kubevirt-hyperconverged -o json | jq .status.versions

    Example output:

    {
      "name": "operator",
      "version": "4.21.0"
    }
  3. Verify the HyperConverged CR conditions. Run the following command and check the output:

    $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq -r '.status.conditions[] | {type,status}'

    Example output:

    {
      "type": "ReconcileComplete",
      "status": "True"
    }
    {
      "type": "Available",
      "status": "True"
    }
    {
      "type": "Progressing",
      "status": "False"
    }
    {
      "type": "Degraded",
      "status": "False"
    }
    {
      "type": "Upgradeable",
      "status": "True"
    }
Note

You can configure certificate rotation parameters in the YAML file.
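
Certificate rotation parameters are set in the certConfig stanza of the HyperConverged custom resource. The following is a sketch with illustrative duration values:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s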

You can deploy the OpenShift Virtualization Operator by using the oc CLI.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace.
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. Deploy the OpenShift Virtualization Operator by running the following command:

    $ oc apply -f <file_name>.yaml

Verification

  • Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

    $ watch oc get csv -n openshift-cnv

    The following output displays if deployment was successful:

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.21.0   OpenShift Virtualization   4.21.0                Succeeded

4.2.2. Next steps

  • As a cluster administrator, you can run a self validation checkup to verify that the environment is fully functional and self-sustained before you deploy production workloads.
  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

4.3. Uninstalling OpenShift Virtualization

You uninstall OpenShift Virtualization by using the web console or the command-line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.

4.3.1. Uninstalling OpenShift Virtualization by using the web console

You uninstall OpenShift Virtualization by using the web console to perform the following tasks:

  • Delete the HyperConverged custom resource (CR).
  • Uninstall the OpenShift Virtualization Operator.
  • Delete the openshift-cnv namespace.
  • Delete the OpenShift Virtualization custom resource definitions (CRDs).

Important

You must first delete all virtual machines and virtual machine instances.

You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

4.3.1.1. Deleting the HyperConverged custom resource

To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Ecosystem → Installed Operators page.
  2. Select the OpenShift Virtualization Operator.
  3. Click the OpenShift Virtualization Deployment tab.
  4. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
  5. Click Delete in the confirmation window.

4.3.1.2. Deleting Operators from a cluster by using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • You have access to the OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Ecosystem → Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

4.3.1.3. Deleting a namespace using the web console

You can delete a namespace by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → Namespaces.
  2. Locate the namespace that you want to delete in the list of namespaces.
  3. On the far right side of the namespace listing, select Delete Namespace from the Options menu.
  4. When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
  5. Click Delete.

4.3.1.4. Deleting OpenShift Virtualization custom resource definitions by using the web console

You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.

Prerequisites

  • You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → CustomResourceDefinitions.
  2. Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
  3. Click the Options menu beside each CRD and select Delete CustomResourceDefinition.

4.3.2. Uninstalling OpenShift Virtualization by using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).

Prerequisites

  • You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

Procedure

  1. Delete the HyperConverged custom resource:

    $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. Delete the OpenShift Virtualization Operator subscription:

    $ oc delete subscription hco-operatorhub -n openshift-cnv
  3. Delete the OpenShift Virtualization ClusterServiceVersion resource:

    $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
  4. Delete the OpenShift Virtualization namespace:

    $ oc delete namespace openshift-cnv
  5. List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:

    $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

    Example output:

    customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)
  6. Delete the CRDs by running the oc delete crd command without the dry-run option:

    $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

4.4. Installing OpenShift Virtualization on IBM Cloud bare-metal nodes

Install OpenShift Virtualization on IBM Cloud bare-metal nodes by using the Assisted Installer. The cluster has six bare-metal nodes (three control plane and three compute nodes). An additional virtual machine is required for bootstrapping and to act as a Samba server, DHCP server, network gateway, and load balancer.

4.4.1. Prerequisites

  • An account in IBM Cloud with permissions to order and operate bare-metal nodes.
  • An IBM Cloud SSL VPN user, to access the SuperMicro IPMI interface of a node.
  • Install the OpenShift CLI (oc).

4.4.2. Configuring IBM Cloud for the new cluster

Configure and provision the IBM Cloud environment to establish the operational framework and nodes for your OpenShift Virtualization cluster.

Procedure

  1. Create a new virtual server instance in IBM Cloud at Virtual Server for Classic to be the Bastion server. This instance is used to run the installation and provide environment services.
  2. Change the default properties of the new virtual server instance to the following values. Use the provided defaults for all other values.

    • Type of virtual server: Public
    • Operating system: CentOS
    • Your public SSH RSA key
  3. Note the private VLAN and subnet the virtual server instance is assigned to at VLANs.
  4. Provision 6 bare-metal nodes in IBM Cloud at Bare metal server provision. Use the following values when provisioning the nodes:

    • Domain: A subdomain you can add records to.
    • Quantity: 6
    • Location: The same location as the virtual server instance.
    • Storage disks: RAID 1
    • Network Interface: Private
    • Private VLAN: The same as noted for the virtual server instance.
  5. Confirm all nodes are provisioned and ready at Device list.
  6. Rename the control plane nodes to control0-<domain-name>, control1-<domain-name>, and control2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.
  7. Rename the compute nodes to compute0-<domain-name>, compute1-<domain-name>, and compute2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.
  8. Configure the Bastion virtual server instance as a default network gateway.
  9. Configure DHCP by editing /etc/dhcp/dhcpd.conf on the Bastion virtual server instance. For example:

    # Set DNS name and DNS server's IP address or hostname
    option domain-name  <dns_domain_name>;
    option domain-name-servers  <dns_ip_addresses>;
    
    # Declare DHCP Server
    authoritative;
    
    # The default DHCP lease time
    default-lease-time <default_lease_value>;
    
    # Set the maximum lease time
    max-lease-time <max_lease_value>;
    
    # Set Network address, subnet mask and gateway
    
    subnet <subnet_ip_address> netmask <subnet_mask> {
      # Range of IP addresses to allocate
      range dynamic-bootp <dynamic_boot_lower_address> <dynamic_boot_upper_address>;
      # Provide broadcast address
      option broadcast-address <broadcast_ip_address>;
      # Set default gateway
      option routers <default_gateway_ip_address>;
    }

    where:

    <dns_domain_name>
    The default domain name for DNS clients.
    <dns_ip_addresses>
    A comma-separated list of DNS server IP addresses.
    <default_lease_value>
    The default number of seconds a client keeps an assigned address.
    <max_lease_value>
    The maximum number of seconds a client keeps an assigned address.
    <subnet_ip_address>
    The start of the subnet IP address range.
    <subnet_mask>
    The subnet mask of the subnet IP address range.
    <broadcast_ip_address>
    The broadcast IP address to use when sending a message to every device on the subnet.
    <default_gateway_ip_address>
    The default gateway of the subnet.
  10. Restart DHCP on the Bastion virtual server instance:

    $ systemctl restart dhcpd
  11. Enable IP forwarding on the Bastion virtual server instance:

    $ sysctl -w net.ipv4.ip_forward=1
  12. Verify IP forwarding is enabled on the Bastion virtual server instance:

    $ sysctl -p /etc/sysctl.conf
  13. Restart the network service on the Bastion virtual server instance:

    $ service network restart
  14. Verify if firewalld is enabled on the Bastion virtual server instance:

    $ firewall-cmd --state
  15. If the firewalld service is not enabled on the Bastion virtual server instance, enable the service:

    $ systemctl enable firewalld
  16. Start the firewalld service:

    $ systemctl start firewalld
  17. Add network address translation (NAT) rules to the firewalld service:

    $ firewall-cmd --add-masquerade --permanent
  18. Restart the firewalld service:

    $ firewall-cmd --reload

4.4.3. Initializing the new cluster configuration

Initialize the new cluster configuration using the OpenShift Virtualization Assisted Installer service and Samba on the Bastion virtual server instance.

Procedure

  1. Log in to the Assisted Installer service.
  2. Create a new cluster. The new cluster has the following properties:

    • Cluster name: The name used to identify the cluster under the base domain.
    • Base domain: The domain used to provision the bare-metal nodes.
  3. Click Next.
  4. Click Generate Discovery ISO.
  5. Provide your public SSH RSA key when prompted.
  6. Copy and save the generated wget command for the ISO file. You use this command later to download the ISO file to the share directory.
  7. Install Samba server on the Bastion virtual server instance:

    $ dnf install samba
  8. Enable Samba server on the Bastion virtual server instance:

    $ systemctl enable smb --now
  9. Allow Samba traffic through the firewalld service:

    $ firewall-cmd --permanent --zone=FedoraWorkstation --add-service=samba
    $ firewall-cmd --reload
  10. Set a Samba password for the root user:

    $ sudo smbpasswd -a root
  11. Create a share directory:

    $ mkdir <share_directory>

    Replace <share_directory> with the share directory name.

  12. Navigate to the share directory and download the Assisted Installer ISO file using the generated wget command.

4.4.4. Configuring cluster networking and access

Configure networking and access to allow for remote management of the cluster.

Procedure

  1. Edit /etc/samba/smb.conf to use the following configuration:

    [global]
          log level = 3
          workgroup = SAMBA
          security = user

          passdb backend = tdbsam

          printing = cups
          printcap name = cups
          load printers = yes
          cups options = raw

          server min protocol = NT1
          ntlm auth = yes

    [share]
          comment = ISO Files
          path = /root/share
          browseable = yes
          public = no
          read only = no
          directory mode = 0555
          valid users = root
    Note

    For a more detailed example of the smb.conf file, see the smb.conf.example file in the same directory.

  2. Save the file.
  3. Verify the new Samba configuration:

    $ testparm
  4. Restart the Samba service:

    $ systemctl restart smb
  5. Verify that the Samba service is running and active:

    $ systemctl status smb
  6. Configure SSL VPN access to IBM Cloud:

    1. Perform the procedure at Getting started with IBM Cloud Virtual Private Networking in the IBM Cloud documentation.
    2. Download and install the MotionPro SSL VPN client.
    3. Connect to the appropriate IBM Cloud endpoint:

      $ sudo MotionPro --host <vpn_endpoint> --user <vpn_username> --passwd <vpn_password>

      where:

      <vpn_endpoint>
      The appropriate SSL VPN endpoint.
      <vpn_username>
      The SSL VPN user name you configured.
      <vpn_password>

      The SSL VPN password you configured.

      Note

      Connecting to the IBM Cloud SSL VPN will disconnect you from any open VPN connections.

4.4.5. Completing the cluster configuration

Complete the cluster configuration by installing software on the control plane and compute nodes and configuring DNS for external access.

Procedure

  1. For each bare-metal server, perform the following tasks:

    1. Access the server using the IPMI console.

      Note

      The IP address and credentials for IPMI console access are available in the Remote management section for each server.

    2. Mount the Assisted Installer ISO file with the following attributes:

      • Virtual Media: CD-ROM Image
      • Share host: The private IP address of the Bastion server.
      • Path to image: The location of the Assisted Installer ISO file.
      • User: root
      • Password: The root user password you configured.
    3. Click Save and Mount.
    4. Verify the ISO mounted successfully.
    5. Restart the server by selecting Remote Control → Power Control → Reset Server → Perform Action.
  2. Return to the Assisted Installer service.
  3. Select the Install OpenShift Virtualization and Install OpenShift Data Foundation checkboxes in the Assisted Installer options.
  4. Select a role for each host.

    Note

    The cluster consists of 3 control plane and 3 compute nodes.

  5. Wait for the Assisted Installer interface to indicate each node is ready.
  6. Click Next.
  7. Select Cluster Managed Network.
  8. Select the API VIP and Ingress VIP checkboxes to obtain them from DHCP or leave them unchecked to enter static values.
  9. Click Install.
  10. For each bare-metal server, perform the following tasks:

    1. Access the server using the IPMI console.

      Note

      The IP address and credentials for IPMI console access are available in the Remote management section for each server.

    2. Select Virtual Media → CD-ROM Image.
    3. Click Unmount.
    4. Select Remote Control → Power Control → Reset Server → Perform Action to restart the server.
  11. Locate the Cluster Credentials section of the installation summary.
  12. Perform the following tasks in the Cluster Credentials section:

    1. Download the kubeconfig file.
    2. Save the kubeadmin password.
  13. Install haproxy on the Bastion virtual server instance.
  14. Configure haproxy for your environment. The following is an example configuration:

    #---------------------------------------------------------------------
    # Example configuration for a possible web application.  See the
    # full configuration options online.
    #
    #   https://www.haproxy.org/download/1.8/doc/configuration.txt
    #
    #---------------------------------------------------------------------
    
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
      # to have these messages end up in /var/log/haproxy.log you will
      # need to:
      #
      # 1) configure syslog to accept network log events.  This is done
      # by adding the '-r' option to the SYSLOGD_OPTIONS in
      # /etc/sysconfig/syslog
      #
      # 2) configure local2 events to go to the /var/log/haproxy.log
      #   file. A line like the following can be added to
      #   /etc/sysconfig/syslog
      #
      # local2.*                    /var/log/haproxy.log
      #
      log       127.0.0.1 local2
    
      chroot    /var/lib/haproxy
      pidfile   /var/run/haproxy.pid
      maxconn   4000
      user      haproxy
      group     haproxy
      daemon
    
      # turn on stats unix socket
      stats socket /var/lib/haproxy/stats
    
      # utilize system-wide crypto-policies
      #ssl-default-bind-ciphers PROFILE=SYSTEM
      #ssl-default-server-ciphers PROFILE=SYSTEM
    
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
      mode                  tcp
      log                   global
      option                httplog
      option                dontlognull
      option http-server-close
      option forwardfor     except 127.0.0.0/8
      option                redispatch
      retries               3
      timeout http-request  10s
      timeout queue         1m
      timeout connect       10s
      timeout client        1m
      timeout server        1m
      timeout http-keep-alive 10s
      timeout check         10s
      maxconn               3000
    #---------------------------------------------------------------------
    # main frontend which proxies to the backends
    #---------------------------------------------------------------------
    
    frontend api
      bind <api_ip_address>:<api_port>
      default_backend controlplaneapi
    
    frontend apiinternal
      bind <apiinternal_ip_address>:<apiinternal_port>
      default_backend controlplaneapiinternal
    
    frontend secure
      bind <frontend_secure_ip_address>:<frontend_secure_port>
      default_backend secure
    
    frontend insecure
      bind <frontend_insecure_ip_address>:<frontend_insecure_port>
      default_backend insecure
    
    #---------------------------------------------------------------------
    # static backend
    #---------------------------------------------------------------------
    
    backend controlplaneapi
      balance source
      server api <controlplaneapi_ip_address>:<controlplaneapi_port> check
    
    backend controlplaneapiinternal
      balance source
      server api <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port> check
    
    backend secure
      balance source
      server ingress <backend_secure_ip_address>:<backend_secure_port> check
    
    backend insecure
      balance source
      server ingress <backend_insecure_ip_address>:<backend_insecure_port> check

    where:

    <api_ip_address>:<api_port>
    The front end IP address and port used by the Kubernetes API server.
    <apiinternal_ip_address>:<apiinternal_port>
    The front end IP address and port used for internal cluster management.
    <frontend_secure_ip_address>:<frontend_secure_port>
    The front end IP address and port used for HTTPS traffic for hosted applications.
    <frontend_insecure_ip_address>:<frontend_insecure_port>
    The front end IP address and port used for HTTP traffic for hosted applications.
    <controlplaneapi_ip_address>:<controlplaneapi_port>
    The back end IP address and port used by the Kubernetes API server.
    <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port>
    The back end IP address and port used for internal cluster management.
    <backend_secure_ip_address>:<backend_secure_port>
    The back end IP address and port used for HTTPS traffic for hosted applications.
    <backend_insecure_ip_address>:<backend_insecure_port>

    The back end IP address and port used for HTTP traffic for hosted applications.

    Note

    Replace the example values with values applicable to your network configuration.

  15. Save the haproxy configuration.
  16. Configure two DNS Address records (A records) for the subdomain that are externally available over the Internet:

    <bastion_public_ip_address> api.<cluster_name>.<cluster_domain>
    <bastion_public_ip_address> *.apps.<cluster_name>.<cluster_domain>

    where:

    <bastion_public_ip_address>
    The externally available IP address of the Bastion virtual server instance.
    <cluster_name>
    The name assigned to the cluster.
    <cluster_domain>
    The domain assigned to the cluster.

Verification

  1. Perform the following tasks to verify cluster access using command line access:

    1. Set your environment with the kubeconfig file:

      $ export KUBECONFIG=<kubeconfig_file_path>

      where:

      <kubeconfig_file_path>
      The path to the downloaded kubeconfig file.
    2. Check cluster node status:

      $ oc get nodes
      Note

      The command output should show all nodes as Ready in the STATUS column and the ROLES column should show that control plane and compute nodes are present.

    3. Check the cluster version:

      $ oc get clusterversion
      Note

      The command output should say Condition: Available.

  2. Perform the following tasks to verify cluster access using the web console:

    1. Paste the access URL provided by Assisted Installer into your web browser.

      Note

      By default, clusters use self-signed certificates. This may cause your browser to display a message that says Connection not private or a similar warning. You can close this warning and continue.

    2. Navigate to the URL.
    3. Log in to the cluster with the username kubeadmin and the kubeadmin password provided in the Cluster Credentials section.