Chapter 6. Installing


6.1. Preparing your cluster for OpenShift Virtualization

Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.

Important

You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.

FIPS mode

If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.

IPv6

You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)

6.1.1. Hardware and operating system requirements

Review the following hardware and operating system requirements for OpenShift Virtualization.

Supported platforms

  • On-premise bare metal servers
  • Amazon Web Services bare metal instances
  • IBM Cloud Bare Metal Servers

Important

Installing OpenShift Virtualization on AWS bare metal instances or on IBM Cloud Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Bare metal instances or servers offered by other cloud providers are not supported.

CPU requirements

  • Supported by Red Hat Enterprise Linux (RHEL) 8
  • Support for Intel 64 or AMD64 CPU extensions
  • Intel VT or AMD-V hardware virtualization extensions enabled
  • NX (no execute) flag enabled

Storage requirements

  • Supported by OpenShift Container Platform
Warning

If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

Operating system requirements

  • Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes

    Note

    RHEL worker nodes are not supported.

  • If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capabilities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See Configuring a required node affinity rule for more information.

Additional resources

6.1.2. Physical resource overhead requirements

OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.

Important

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.

6.1.2.1. Memory overhead

Calculate the memory overhead values for OpenShift Virtualization by using the equations below.

Cluster memory overhead

Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB

Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.

Virtual machine memory overhead

Memory overhead per virtual machine ≈ (1.002 × requested memory)
              + 218 MiB 1
              + 8 MiB × (number of vCPUs) 2
              + 16 MiB × (number of graphics devices) 3
              + (additional memory overhead) 4

1
Required for the processes that run in the virt-launcher pod.
2
Number of virtual CPUs requested by the virtual machine.
3
Number of virtual graphics cards requested by the virtual machine.
4
Additional memory overhead:
  • If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
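
For example, using hypothetical values, a virtual machine that requests 4 GiB (4096 MiB) of memory, 4 vCPUs, and 1 graphics device, and that has no SR-IOV or GPU devices, has an approximate memory overhead of:

Memory overhead ≈ (1.002 × 4096 MiB) + 218 MiB + (8 MiB × 4) + (16 MiB × 1)
                ≈ 4104 MiB + 218 MiB + 32 MiB + 16 MiB
                ≈ 4370 MiB (approximately 4.3 GiB)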

6.1.2.2. CPU overhead

Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead

CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead

If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
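
For reference, the following is a minimal sketch of a VirtualMachine manifest that requests dedicated CPUs through the domain.cpu.dedicatedCpuPlacement field. The VM name and container disk image are placeholders, and dedicated CPU placement additionally requires worker nodes with the CPU Manager enabled:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-dedicated-cpu-vm  # placeholder name
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true  # request pinned, dedicated CPUs
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo  # example demo image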

6.1.2.3. Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.

Cluster storage overhead

Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.

Virtual machine storage overhead

Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.

6.1.2.4. Example

As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

6.1.3. Object maximums

When planning your cluster, consider the tested object maximums for OpenShift Container Platform and the tested maximums for OpenShift Virtualization.

6.1.4. Restricted network environments

If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks.

If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub.
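
For example, OLM supports overriding proxy settings for an individual Operator through the Subscription object's spec.config.env field. The following sketch assumes a hypothetical proxy at proxy.example.com:3128; adjust the values for your environment, or rely on the cluster-wide proxy configuration instead:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"
  config:
    env:                              # per-Operator proxy override
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"   # hypothetical proxy endpoint
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      value: ".cluster.local,.svc"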

6.1.5. Live migration

Live migration has the following requirements:

  • Shared storage with ReadWriteMany (RWX) access mode.
  • Sufficient RAM and network bandwidth.
  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
Note

You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

(Maximum number of nodes that can drain in parallel) × (Highest total VM memory request allocations across nodes)

The default number of migrations that can run in parallel in the cluster is 5.
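
For example, using hypothetical values, if a maximum of 2 nodes can drain in parallel and the highest total of virtual machine memory requests on any single node is 64 GiB, the cluster needs approximately the following spare memory request capacity:

2 × 64 GiB = 128 GiB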

6.1.6. Snapshots and cloning

See OpenShift Virtualization storage features for snapshot and cloning requirements.

6.1.7. Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

  • Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks (a minimal example manifest follows this list).

    Note

    In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Note

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
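
The following is a minimal MachineHealthCheck sketch for the machine health check approach in the first list item. The resource name, selector labels, timeouts, and maxUnhealthy value are assumptions that you must adapt to your cluster:

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-worker-healthcheck   # placeholder name
  namespace: openshift-machine-api
spec:
  maxUnhealthy: 40%                  # stop remediating if too many machines are unhealthy
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: Unknown
    timeout: 300s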

6.2. Specifying nodes for OpenShift Virtualization components

Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.

Note

You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.

6.2.1. About node placement for virtualization components

You might want to customize where OpenShift Virtualization deploys its components to ensure that:

  • Virtual machines only deploy on nodes that are intended for virtualization workloads.
  • Operators only deploy on infrastructure nodes.
  • Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization.

6.2.1.1. How to apply node placement rules to virtualization components

You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.

  • For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
  • For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation.
  • For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.

    Warning

    You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.

Depending on the object, you can use one or more of the following rule types:

nodeSelector
Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
tolerations
Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.

6.2.1.2. Node placement in the OLM Subscription object

To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.14
  channel: "stable"
  config: 1
1
The config field supports nodeSelector and tolerations, but it does not support affinity.

6.2.1.3. Node placement in the HyperConverged object

To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: 1
    ...
  workloads:
    nodePlacement:
    ...
1
The nodePlacement fields support nodeSelector, affinity, and tolerations fields.

6.2.1.4. Node placement in the HostPathProvisioner object

You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: 1
1
The workload field supports nodeSelector, affinity, and tolerations fields.

6.2.1.5. Additional resources

6.2.2. Example manifests

The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OpenShift Virtualization components.

6.2.2.1. Operator Lifecycle Manager Subscription object

6.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object

In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.14
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
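
For this example to take effect, the target nodes must carry the matching label. As a sketch, you could apply the label with a command such as the following, where the node name is a placeholder:

$ oc label node <node_name> example.io/example-infra-key=example-infra-value
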
6.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object

In this example, nodes that are reserved for OLM to deploy the OpenShift Virtualization Operators are tainted with key=virtualization:NoSchedule. Only pods with matching tolerations are scheduled on these nodes.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.14
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
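
As a sketch, you could apply the corresponding taint to a reserved node with a command such as the following, where the node name is a placeholder:

$ oc adm taint nodes <node_name> key=virtualization:NoSchedule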

6.2.2.2. HyperConverged object

6.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR

In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
6.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR

In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8
6.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR

In this example, nodes that are reserved for OpenShift Virtualization components are tainted with key=virtualization:NoSchedule. Only pods with matching tolerations are scheduled on these nodes.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"

6.2.2.3. HostPathProvisioner object

6.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object

In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value

6.3. Installing OpenShift Virtualization using the web console

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.

You can use the OpenShift Container Platform 4.12 web console to subscribe to and deploy the OpenShift Virtualization Operators.

6.3.1. Installing the OpenShift Virtualization Operator

You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console.

Prerequisites

  • Install OpenShift Container Platform 4.12 on your cluster.
  • Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.

Procedure

  1. From the Administrator perspective, click Operators > OperatorHub.
  2. In the Filter by keyword field, type Virtualization.
  3. Select the OpenShift Virtualization Operator tile with the Red Hat source label.
  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
    2. For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.

      Warning

      Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.

    3. For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

      While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.

      Warning

      Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.

  6. Click Install to make the Operator available to the openshift-cnv namespace.
  7. When the Operator installs successfully, click Create HyperConverged.
  8. Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
  9. Click Create to launch OpenShift Virtualization.

Verification

  • Navigate to the Workloads > Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
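
    Alternatively, you can check the pod status from the command line, for example:

    $ oc get pods -n openshift-cnv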

6.3.2. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

6.4. Installing OpenShift Virtualization using the CLI

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster.

Note

To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules.

6.4.1. Prerequisites

  • Install OpenShift Container Platform 4.12 on your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

6.4.2. Subscribing to the OpenShift Virtualization catalog by using the CLI

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.

To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.12.14
      channel: "stable" 1
    1
    Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
  2. Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

    $ oc apply -f <file_name>.yaml
Note

You can configure certificate rotation parameters in the YAML file.
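
For reference, certificate rotation parameters are set in the spec.certConfig stanza of the HyperConverged custom resource that you create in the next section. The duration and renewBefore values in the following sketch are illustrative only:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s      # illustrative value
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s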

6.4.3. Deploying the OpenShift Virtualization Operator by using the CLI

You can deploy the OpenShift Virtualization Operator by using the oc CLI.

Prerequisites

  • An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. Deploy the OpenShift Virtualization Operator by running the following command:

    $ oc apply -f <file_name>.yaml

Verification

  • Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

    $ watch oc get csv -n openshift-cnv

    The following output displays if deployment was successful:

    Example output

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.12.14   OpenShift Virtualization   4.12.14                Succeeded

6.4.4. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

6.5. Installing the virtctl client

The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, Windows, and macOS.

6.5.1. Installing the virtctl client on Linux, Windows, and macOS

Download and install the virtctl client for your operating system.

Procedure

  1. Navigate to Virtualization > Overview in the OpenShift Container Platform web console.
  2. Click the Download virtctl link in the upper right corner of the page and download the virtctl client for your operating system.
  3. Install virtctl:

    • For Linux:

      1. Decompress the archive file:

        $ tar -xvf <virtctl-version-distribution.arch>.tar.gz
      2. Run the following command to make the virtctl binary executable:

        $ chmod +x <path/virtctl-file-name>
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH
      4. Set the KUBECONFIG environment variable:

        $ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig
    • For Windows:

      1. Decompress the archive file.
      2. Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        C:\> path
    • For macOS:

      1. Decompress the archive file.
      2. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH
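
After you install virtctl on any platform, you can optionally verify that the client is available in your path by checking its version, for example:

$ virtctl version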

6.5.2. Installing the virtctl client as an RPM

You can install the virtctl client on Red Hat Enterprise Linux (RHEL) as an RPM after enabling the OpenShift Virtualization repository.

6.5.2.1. Enabling OpenShift Virtualization repositories

Enable the OpenShift Virtualization repository for your version of Red Hat Enterprise Linux (RHEL).

Prerequisites

  • Your system is registered to a Red Hat account with an active subscription to the "Red Hat Container Native Virtualization" entitlement.

Procedure

  • Enable the appropriate OpenShift Virtualization repository for your operating system by using the subscription-manager CLI tool.

    • To enable the repository for RHEL 8, run:

      # subscription-manager repos --enable cnv-4.12-for-rhel-8-x86_64-rpms
    • To enable the repository for RHEL 7, run:

      # subscription-manager repos --enable rhel-7-server-cnv-4.12-rpms

6.5.2.2. Installing the virtctl client using the yum utility

Install the virtctl client from the kubevirt-virtctl package.

Prerequisites

  • You enabled an OpenShift Virtualization repository on your Red Hat Enterprise Linux (RHEL) system.

Procedure

  • Install the kubevirt-virtctl package:

    # yum install kubevirt-virtctl

6.5.3. Additional resources

6.6. Uninstalling OpenShift Virtualization

You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.

6.6.1. Uninstalling OpenShift Virtualization by using the web console

You uninstall OpenShift Virtualization by using the web console to delete the HyperConverged custom resource, the OpenShift Virtualization Operator, the openshift-cnv namespace, and the OpenShift Virtualization custom resource definitions (CRDs).

Important

You must first delete all virtual machines and virtual machine instances.

You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

6.6.1.1. Deleting the HyperConverged custom resource

To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators > Installed Operators page.
  2. Select the OpenShift Virtualization Operator.
  3. Click the OpenShift Virtualization Deployment tab.
  4. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
  5. Click Delete in the confirmation window.

6.6.1.2. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators > Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console, as well as off-cluster resources that continue to run, might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

6.6.1.3. Deleting a namespace using the web console

You can delete a namespace by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration > Namespaces.
  2. Locate the namespace that you want to delete in the list of namespaces.
  3. On the far right side of the namespace listing, select Delete Namespace from the Options menu.
  4. When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
  5. Click Delete.

6.6.1.4. Deleting OpenShift Virtualization custom resource definitions

You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration > CustomResourceDefinitions.
  2. Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
  3. Click the Options menu beside each CRD and select Delete CustomResourceDefinition.

6.6.2. Uninstalling OpenShift Virtualization by using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

Procedure

  1. Delete the HyperConverged custom resource:

    $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. Delete the OpenShift Virtualization Operator subscription:

    $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
  3. Delete the OpenShift Virtualization ClusterServiceVersion resource:

    $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
  4. Delete the OpenShift Virtualization namespace:

    $ oc delete namespace openshift-cnv
  5. List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:

    $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

    Example output

    customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)

  6. Delete the CRDs by running the oc delete crd command without the dry-run option:

    $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv