Chapter 4. Installing OpenShift Virtualization
4.1. Preparing your cluster for OpenShift Virtualization
Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.
You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.
FIPS mode
If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.
4.1.1. Hardware and operating system requirements
Review the following hardware and operating system requirements for OpenShift Virtualization.
Supported platforms
- On-premise bare metal servers
- Amazon Web Services bare metal instances
Installing OpenShift Virtualization on an AWS bare metal instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
- Bare metal instances or servers offered by other cloud providers are not supported.
CPU requirements
- Supported by Red Hat Enterprise Linux (RHEL) 8
- Support for Intel 64 or AMD64 CPU extensions
- Intel VT or AMD-V hardware virtualization extensions enabled
- NX (no execute) flag enabled
Storage requirements
- Supported by OpenShift Container Platform
Operating system requirements
Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes
Note: RHEL worker nodes are not supported.
Additional resources
- About RHCOS
- Red Hat Ecosystem Catalog for supported CPUs
- Supported storage
4.1.2. Physical resource overhead requirements
OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
4.1.2.1. Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
Virtual machine memory overhead
Memory overhead per virtual machine ≈ (1.002 × requested memory)
                                      + 146 MiB
                                      + 8 MiB × (number of vCPUs) [1]
                                      + 16 MiB × (number of graphics devices) [2]

- [1] Number of virtual CPUs requested by the virtual machine.
- [2] Number of virtual graphics cards requested by the virtual machine.
If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
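For example, applying the equation above to a virtual machine that requests 8 GiB (8192 MiB) of memory, 4 vCPUs, and 1 graphics device gives an approximate memory footprint of:

(1.002 × 8192 MiB) + 146 MiB + (8 MiB × 4) + (16 MiB × 1) ≈ 8402 MiB

That is, roughly 210 MiB beyond the 8192 MiB that the virtual machine itself requests. The values in this example are illustrative only.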
4.1.2.2. CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores
OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.
CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
4.1.2.3. Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
4.1.2.4. Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
4.1.3. Object maximums
You must consider the tested object maximums when planning your cluster. For details, see the tested maximums documentation for OpenShift Container Platform and for OpenShift Virtualization.
4.1.4. Restricted network environments
If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks.
If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub.
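If you rely on proxy support, one way to pass proxy settings to the Operator is through the Subscription object that you create during installation. The following is a minimal sketch, assuming OLM's per-Subscription environment variable overrides; the proxy address and NO_PROXY list are placeholders for your environment:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"
  config:
    env:                                        # environment variables injected into the Operator pods
      - name: HTTP_PROXY
        value: "http://proxy.example.com:3128"  # placeholder proxy address
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:3128"
      - name: NO_PROXY
        value: ".cluster.local,.svc"            # placeholder exclusion list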
4.1.5. Live migration
Live migration has the following requirements:
- Shared storage with ReadWriteMany (RWX) access mode (see the example claim after this list)
- Sufficient RAM and network bandwidth
- Appropriate CPUs with sufficient capacity on the worker nodes. If the CPUs have different capacities, live migration might be very slow or fail.
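For reference, the following is a minimal sketch of a persistent volume claim that requests the RWX access mode required for live migration; the claim name, size, and storage class are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <rwx_pvc_name>
  namespace: openshift-cnv
spec:
  accessModes:
    - ReadWriteMany    # shared access, required for live migration
  resources:
    requests:
      storage: 30Gi    # placeholder size
  storageClassName: <rwx_capable_storage_class>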
4.1.6. Snapshots and cloning
See OpenShift Virtualization storage features for snapshot and cloning requirements.
4.1.7. Cluster high-availability options
You can configure one of the following high-availability (HA) options for your cluster:
Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.
Note: In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.
High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.
Note: Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
4.2. Specifying nodes for OpenShift Virtualization components
Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.
4.2.1. About node placement for virtualization components
You might want to customize where OpenShift Virtualization deploys its components to ensure that:
- Virtual machines only deploy on nodes that are intended for virtualization workloads.
- Operators only deploy on infrastructure nodes.
- Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization.
4.2.1.1. How to apply node placement rules to virtualization components
You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.
- For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
- For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation.
- For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.

Warning: You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.
Depending on the object, you can use one or more of the following rule types:
nodeSelector: Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.

affinity: Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.

tolerations: Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
4.2.1.2. Node placement in the OLM Subscription object
To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.8.7
  channel: "stable"
  config: 1

1: The config field supports nodeSelector and tolerations, but it does not support affinity.
4.2.1.3. Node placement in the HyperConverged object
To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: 1
    ...
  workloads:
    nodePlacement:
    ...

1: The nodePlacement fields support the nodeSelector, affinity, and tolerations fields.
4.2.1.4. Node placement in the HostPathProvisioner object
You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: 1

1: The workload field supports the nodeSelector, affinity, and tolerations fields.
4.2.1.5. Additional resources
- Specifying nodes for virtual machines
- Placing pods on specific nodes using node selectors
- Controlling pod placement on nodes using node affinity rules
- Controlling pod placement using node taints
- Installing OpenShift Virtualization using the CLI
- Installing OpenShift Virtualization using the web console
- Configuring local storage for virtual machines
4.2.2. Example manifests
The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OpenShift Virtualization components.
4.2.2.1. Operator Lifecycle Manager Subscription object
4.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object
In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.8.7
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
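The label that this nodeSelector matches might be applied to a node with a standard oc label command such as the following; the node name is a placeholder:

$ oc label node <node_name> example.io/example-infra-key=example-infra-value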
4.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object
In this example, nodes that are reserved for OLM to deploy the OpenShift Virtualization Operators are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.8.7
  channel: "stable"
  config:
    tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
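The taint in this example might be applied to a node with the following command; the node name is a placeholder:

$ oc adm taint nodes <node_name> key=virtualization:NoSchedule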
4.2.2.2. HyperConverged object
4.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR
In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
4.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR
In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: example.io/example-infra-key
                    operator: In
                    values:
                      - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: example.io/example-workloads-key
                    operator: In
                    values:
                      - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: example.io/num-cpus
                    operator: Gt
                    values:
                      - "8"
4.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR
In this example, nodes that are reserved for OpenShift Virtualization components are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
        - key: "key"
          operator: "Equal"
          value: "virtualization"
          effect: "NoSchedule"
4.2.2.3. HostPathProvisioner object
4.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object
In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value
4.3. Installing OpenShift Virtualization using the web console
Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.
You can use the OpenShift Container Platform 4.8 web console to subscribe to and deploy the OpenShift Virtualization Operators.
4.3.1. Installing the OpenShift Virtualization Operator
You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console.
Prerequisites
- Install OpenShift Container Platform 4.8 on your cluster.
- Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Operators → OperatorHub.
- In the Filter by keyword field, type OpenShift Virtualization.
- Select the OpenShift Virtualization tile.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
- For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.
Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.
- For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.
While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.
Warning: Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the openshift-cnv namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
4.3.2. Next steps
You might want to additionally configure the following components:
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
Alternatively, you can install OpenShift Virtualization by using the command line to apply manifests to your cluster, as described in the following sections.
To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules.
4.3.3. Prerequisites
- Install OpenShift Container Platform 4.8 on your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
4.3.4. Subscribing to the OpenShift Virtualization catalog by using the CLI
Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.
To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.8.7
  channel: "stable" 1

1: Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

$ oc apply -f <file_name>.yaml
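Optionally, verify that the objects were created, for example by inspecting the Subscription:

$ oc get subscription hco-operatorhub -n openshift-cnv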
You can configure certificate rotation parameters in the YAML file.
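One hedged sketch of certificate rotation parameters in the HyperConverged custom resource follows; verify the exact field names and duration values against the documentation for your version:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s      # validity period of the CA certificate
      renewBefore: 24h0m0s   # renew this long before expiry
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s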
4.3.5. Deploying the OpenShift Virtualization Operator by using the CLI
You can deploy the OpenShift Virtualization Operator by using the oc
CLI.
Prerequisites
- An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
Deploy the OpenShift Virtualization Operator by running the following command:
$ oc apply -f <file_name>.yaml
Verification
Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

$ watch oc get csv -n openshift-cnv
The following output displays if deployment was successful:
Example output
NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v4.8.7   OpenShift Virtualization   4.8.7                Succeeded
4.3.6. Next steps
You might want to additionally configure the following components:
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
4.4. Installing the virtctl client
The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, macOS, and Windows distributions.
You can install the virtctl client from the OpenShift Virtualization web console or by enabling the OpenShift Virtualization repository and installing the kubevirt-virtctl package.
4.4.1. Installing the virtctl client from the web console
You can download the virtctl client from the Red Hat Customer Portal, which is linked to in your OpenShift Virtualization web console in the Command Line Tools page.
Prerequisites
- You must have an activated OpenShift Container Platform subscription to access the download page on the Customer Portal.
Procedure
- Access the Customer Portal by clicking the help icon in the upper-right corner of the web console and selecting Command Line Tools.
- Ensure you have the appropriate version for your cluster selected from the Version: list.
- Download the virtctl client for your distribution. All downloads are in tar.gz format.
- Extract the tarball. The following CLI command extracts it into the same directory as the tarball and is applicable for all distributions:
$ tar -xvf <virtctl-version-distribution.arch>.tar.gz
For Linux and macOS:
- Navigate the extracted folder hierarchy and make the virtctl binary executable:

$ chmod +x <virtctl-file-name>

- Move the virtctl binary to a directory on your PATH. To check your PATH, run:

$ echo $PATH
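For example, on Linux you might move the binary into /usr/local/bin, one common directory on the PATH; the file name is a placeholder:

$ sudo mv <virtctl-file-name> /usr/local/bin/virtctl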
For Windows users:
- Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
4.4.2. Enabling OpenShift Virtualization repositories
Red Hat offers OpenShift Virtualization repositories for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 7:
- Red Hat Enterprise Linux 8 repository: cnv-4.8-for-rhel-8-x86_64-rpms
- Red Hat Enterprise Linux 7 repository: rhel-7-server-cnv-4.8-rpms
The process for enabling the repository in subscription-manager is the same on both platforms.
Procedure
Enable the appropriate OpenShift Virtualization repository for your system by running the following command:
# subscription-manager repos --enable <repository>
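For example, to enable the Red Hat Enterprise Linux 8 repository listed above:

# subscription-manager repos --enable cnv-4.8-for-rhel-8-x86_64-rpms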
4.4.3. Installing the virtctl client
Install the virtctl client from the kubevirt-virtctl package.
Procedure
Install the kubevirt-virtctl package:

# yum install kubevirt-virtctl
4.4.4. Additional resources
- Using the CLI tools for OpenShift Virtualization.
4.5. Uninstalling OpenShift Virtualization using the web console
You can uninstall OpenShift Virtualization by using the OpenShift Container Platform web console.
4.5.1. Prerequisites
- You must have OpenShift Virtualization 4.8 installed.
You must delete all virtual machines, virtual machine instances, and data volumes.
ImportantAttempting to uninstall OpenShift Virtualization without deleting these objects results in failure.
4.5.2. Deleting the OpenShift Virtualization Operator Deployment custom resource
To uninstall OpenShift Virtualization, you must first delete the OpenShift Virtualization Operator Deployment custom resource.
Prerequisites
- Create the OpenShift Virtualization Operator Deployment custom resource.
Procedure
- From the OpenShift Container Platform web console, select openshift-cnv from the Projects list.
- Navigate to the Operators → Installed Operators page.
- Click OpenShift Virtualization.
- Click the OpenShift Virtualization Operator Deployment tab.
- Click the Options menu in the row containing the kubevirt-hyperconverged custom resource. In the expanded menu, click Delete HyperConverged Cluster.
- Click Delete in the confirmation window.
- Navigate to the Workloads → Pods page to verify that only the Operator pods are running.
- Open a terminal window and clean up the remaining resources by running the following command:
$ oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv
4.5.3. Deleting the OpenShift Virtualization catalog subscription
To finish uninstalling OpenShift Virtualization, delete the OpenShift Virtualization catalog subscription.
Prerequisites
- An active subscription to the OpenShift Virtualization catalog
Procedure
- Navigate to the Operators → OperatorHub page.
- Search for OpenShift Virtualization and then select it.
- Click Uninstall.
You can now delete the openshift-cnv namespace.
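If you prefer the command line, the namespace can also be removed with the standard command:

$ oc delete namespace openshift-cnv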
4.5.4. Deleting a namespace using the web console
You can delete a namespace by using the OpenShift Container Platform web console.
If you do not have permissions to delete the namespace, the Delete Namespace option is not available.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
4.6. Uninstalling OpenShift Virtualization using the CLI
You can uninstall OpenShift Virtualization by using the OpenShift Container Platform CLI.
4.6.1. Prerequisites
- You must have OpenShift Virtualization 4.8 installed.
You must delete all virtual machines, virtual machine instances, and data volumes.
ImportantAttempting to uninstall OpenShift Virtualization without deleting these objects results in failure.
4.6.2. Deleting OpenShift Virtualization
You can delete OpenShift Virtualization by using the CLI.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to an OpenShift Virtualization cluster using an account with cluster-admin permissions.
When you delete the subscription of the OpenShift Virtualization operator in the OLM by using the CLI, the ClusterServiceVersion (CSV) object is not deleted from the cluster. To completely uninstall OpenShift Virtualization, you must explicitly delete the CSV.
Procedure
Delete the HyperConverged custom resource:

$ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
Delete the subscription of the OpenShift Virtualization operator in the Operator Lifecycle Manager (OLM):
$ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
Set the cluster service version (CSV) name for OpenShift Virtualization as an environment variable:
$ CSV_NAME=$(oc get csv -n openshift-cnv -o=jsonpath="{.items[0].metadata.name}")
Delete the CSV from the OpenShift Virtualization cluster by specifying the CSV name from the previous step:
$ oc delete csv ${CSV_NAME} -n openshift-cnv
OpenShift Virtualization is uninstalled when a confirmation message indicates that the CSV was deleted successfully:
Example output
clusterserviceversion.operators.coreos.com "kubevirt-hyperconverged-operator.v4.8.7" deleted