Chapter 5. Postinstallation configuration
5.1. Postinstallation configuration
The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment:
- Node placement rules for OpenShift Virtualization Operators, workloads, and controllers
- Installing the Kubernetes NMState and SR-IOV Operators
- Configuring a Linux bridge network for external access to virtual machines (VMs)
- Configuring a dedicated secondary network for live migration
- Configuring an SR-IOV network
- Enabling the creation of load balancer services by using the OpenShift Container Platform web console
- Defining a default storage class for the Container Storage Interface (CSI)
- Configuring local storage by using the Hostpath Provisioner (HPP)
5.2. Specifying nodes for OpenShift Virtualization components
The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement rules for some components after installing OpenShift Virtualization, but virtual machines must not be present if you want to configure node placement rules for workloads.
5.2.1. About node placement rules for OpenShift Virtualization components
You can use node placement rules for the following tasks:
- Deploy virtual machines only on nodes intended for virtualization workloads.
- Deploy Operators only on infrastructure nodes.
- Maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:
- nodeSelector: Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
- affinity: Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
- tolerations: Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
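The three rule types appear in different fields of a pod or component spec. The following fragment is a rough sketch, with hypothetical label keys and values, showing where each rule type lives:

```yaml
# Sketch of a pod spec fragment; all keys and values are hypothetical.
spec:
  nodeSelector:                   # exact match: every listed pair must be on the node
    example.io/example-key: example-value
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # a preference, not a requirement
      - weight: 1
        preference:
          matchExpressions:
          - key: example.io/example-key
            operator: In
            values:
            - example-value
  tolerations:                    # allows scheduling onto nodes with a matching taint
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
```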
5.2.2. Applying node placement rules
You can apply node placement rules by editing a Subscription, HyperConverged, or HostPathProvisioner object.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in with cluster administrator permissions.
Procedure
Edit the object in your default editor by running the following command:

$ oc edit <resource_type> <resource_name> -n openshift-cnv

Save the file to apply the changes.
5.2.3. Node placement rule examples
You can specify node placement rules for an OpenShift Virtualization component by editing a Subscription, HyperConverged, or HostPathProvisioner object.
5.2.3.1. Subscription object node placement rule examples
To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation.

Currently, you cannot configure node placement rules for the Subscription object by using the web console.

The Subscription object does not support the affinity node placement rule.
Example Subscription object with nodeSelector rule
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: hco-operatorhub
namespace: openshift-cnv
spec:
source: redhat-operators
sourceNamespace: openshift-marketplace
name: kubevirt-hyperconverged
startingCSV: kubevirt-hyperconverged-operator.v4.14.17
channel: "stable"
config:
nodeSelector:
example.io/example-infra-key: example-infra-value
OLM deploys the OpenShift Virtualization Operators on nodes labeled example.io/example-infra-key = example-infra-value.
Example Subscription object with tolerations rule
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: hco-operatorhub
namespace: openshift-cnv
spec:
source: redhat-operators
sourceNamespace: openshift-marketplace
name: kubevirt-hyperconverged
startingCSV: kubevirt-hyperconverged-operator.v4.14.17
channel: "stable"
config:
tolerations:
- key: "key"
operator: "Equal"
value: "virtualization"
effect: "NoSchedule"
OLM deploys the OpenShift Virtualization Operators on nodes that are tainted with key = virtualization:NoSchedule. Only pods with a matching toleration are scheduled on those nodes.
5.2.3.2. HyperConverged object node placement rule example
To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement stanza of the HyperConverged object.
Example HyperConverged object with nodeSelector rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
infra:
nodePlacement:
nodeSelector:
example.io/example-infra-key: example-infra-value
workloads:
nodePlacement:
nodeSelector:
example.io/example-workloads-key: example-workloads-value
- Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
Example HyperConverged object with affinity rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
infra:
nodePlacement:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: example.io/example-infra-key
operator: In
values:
- example-infra-value
workloads:
nodePlacement:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: example.io/example-workloads-key
operator: In
values:
- example-workloads-value
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: example.io/num-cpus
operator: Gt
values:
- 8
- Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
- Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
Example HyperConverged object with tolerations rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
workloads:
nodePlacement:
tolerations:
- key: "key"
operator: "Equal"
value: "virtualization"
effect: "NoSchedule"
Nodes that are reserved for OpenShift Virtualization components are tainted with key = virtualization:NoSchedule. Only pods with a matching toleration are scheduled on the reserved nodes.
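The taint itself lives on the Node object. As a hedged sketch with a hypothetical node name, a reserved node would carry a fragment like the following, which the toleration in the example above matches:

```yaml
# Hypothetical Node fragment; such a taint is typically added with
# "oc adm taint nodes <node_name> key=virtualization:NoSchedule".
apiVersion: v1
kind: Node
metadata:
  name: worker-1   # hypothetical node name
spec:
  taints:
  - key: "key"
    value: "virtualization"
    effect: "NoSchedule"
```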
5.2.3.3. HostPathProvisioner object node placement rule example
You can edit the HostPathProvisioner object to configure node placement rules for the hostpath provisioner (HPP).

You must schedule the hostpath provisioner (HPP) and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run, and you cannot run virtual machines.
After you deploy a virtual machine (VM) with the HPP storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM.
You can configure node placement rules by specifying nodeSelector, affinity, or tolerations in the spec.workload field of the HostPathProvisioner object.
Example HostPathProvisioner object with nodeSelector rule
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
name: hostpath-provisioner
spec:
imagePullPolicy: IfNotPresent
pathConfig:
path: "</path/to/backing/directory>"
useNamingPrefix: false
workload:
nodeSelector:
example.io/example-workloads-key: example-workloads-value
Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
5.3. Postinstallation network configuration
By default, OpenShift Virtualization is installed with a single, internal pod network.
After you install OpenShift Virtualization, you can install networking Operators and configure additional networks.
5.3.1. Installing networking Operators
You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs). For installation instructions, see Installing the Kubernetes NMState Operator by using the web console.
You can install the SR-IOV Operator to manage SR-IOV network devices and network attachments. For installation instructions, see Installing the SR-IOV Network Operator.
You can add the MetalLB Operator to manage the lifecycle for an instance of MetalLB on your cluster. For installation instructions, see Installing the MetalLB Operator from the OperatorHub using the web console.
5.3.2. Configuring a Linux bridge network
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
5.3.2.1. Creating a Linux bridge NNCP
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge
      state: up
      ipv4:
        enabled: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
- metadata.name defines the name of the node network configuration policy.
- spec.desiredState.interfaces.name defines the name of the new Linux bridge.
- spec.desiredState.interfaces.description is an optional field that can be used to define a human-readable description for the bridge.
- spec.desiredState.interfaces.type defines the interface type. In this example, the type is a Linux bridge.
- spec.desiredState.interfaces.state defines the requested state for the interface after creation.
- spec.desiredState.interfaces.ipv4.enabled defines whether the IPv4 protocol is active. Setting this to false disables IPv4 addressing on this bridge.
- spec.desiredState.interfaces.bridge.options.stp.enabled defines whether STP is active. Setting this to false disables STP on this bridge.
- spec.desiredState.interfaces.bridge.port.name defines the node NIC to which the bridge is attached.
5.3.2.2. Creating a Linux bridge NAD by using the web console
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking > NetworkAttachmentDefinitions.
- Click Create Network Attachment Definition.

Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
- Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
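For reference, the console produces a NetworkAttachmentDefinition manifest roughly like the following sketch. The name, namespace, and VLAN tag are hypothetical; the cnv-bridge type and macspoofchk field correspond to the Network Type and MAC Spoof Check selections described above:

```yaml
# All names and the VLAN tag below are hypothetical examples.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network      # the unique Name entered in the console
  namespace: my-namespace   # must match the pod or VM namespace
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "type": "cnv-bridge",
    "bridge": "br1",
    "vlan": 100,
    "macspoofchk": true
  }'
```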
5.3.3. Configuring a network for live migration
After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
5.3.3.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:

Example configuration file

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
- metadata.name specifies the name of the NetworkAttachmentDefinition object.
- config.master specifies the name of the NIC to be used for live migration.
- config.type specifies the name of the CNI plugin that provides the network for the NAD.
- config.range specifies an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

Example HyperConverged manifest

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network>
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...

- spec.liveMigrationConfig.network specifies the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
5.3.3.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the OpenShift Container Platform web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization > Overview in the OpenShift Container Platform web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
5.3.4. Configuring an SR-IOV network
After you install the SR-IOV Operator, you can configure an SR-IOV network.
5.3.4.1. Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

Note: When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the SR-IOV Network Operator.
- You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
- You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name>
  namespace: openshift-sriov-network-operator
spec:
  resourceName: <sriov_resource_name>
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: <priority>
  mtu: <mtu>
  numVfs: <num>
  nicSelector:
    vendor: "<vendor_code>"
    deviceID: "<device_id>"
    pfNames: ["<pf_name>", ...]
    rootDevices: ["<pci_bus_id>", "..."]
  deviceType: vfio-pci
  isRdma: false
- metadata.name specifies a name for the SriovNetworkNodePolicy object.
- metadata.namespace specifies the namespace where the SR-IOV Network Operator is installed.
- spec.resourceName specifies the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
- spec.nodeSelector specifies the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
- spec.priority is an optional field that specifies an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
- spec.mtu is an optional field that specifies a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
- spec.numVfs specifies the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
- spec.nicSelector selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters.

Note: It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.

- spec.nicSelector.vendor is an optional field that specifies the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
- spec.nicSelector.deviceID is an optional field that specifies the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
- spec.nicSelector.pfNames is an optional field that specifies an array of one or more physical function (PF) names for the Ethernet device.
- spec.nicSelector.rootDevices is an optional field that specifies an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
- spec.deviceType specifies the driver type. The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
- spec.isRdma is an optional field that specifies whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.

Note: If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.
Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".

Create the SriovNetworkNodePolicy object. When running the following command, replace <name> with the name for this configuration:

$ oc create -f <name>-sriov-node-network.yaml

After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
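After the device configuration is synced, workloads typically consume the VFs through a SriovNetwork object, which the Operator renders into a network attachment definition. The following is a hedged sketch; the object name and networkNamespace value are hypothetical, and resourceName must match the value from the SriovNetworkNodePolicy:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-sriov-network           # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: <sriov_resource_name>   # matches the node policy resourceName
  networkNamespace: default             # hypothetical; where the NAD is created
  vlan: 0
```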
5.3.5. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You have logged in as a user with the cluster-admin role.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization > Overview.
- On the Settings tab, click Cluster.
- Expand LoadBalancer service and select Enable the creation of LoadBalancer services for SSH connections to VirtualMachines.
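With this setting enabled, exposing a VM over SSH results in a standard LoadBalancer Service. As a hedged sketch with hypothetical names, such a service selects the VM's launcher pod by the vm.kubevirt.io/name label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-vm-ssh             # hypothetical service name
  namespace: my-namespace     # hypothetical; same namespace as the VM
spec:
  type: LoadBalancer
  ports:
  - port: 22                  # SSH
    protocol: TCP
    targetPort: 22
  selector:
    vm.kubevirt.io/name: my-vm   # label set on the VM's virt-launcher pod
```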
5.4. Postinstallation storage configuration
The following storage configuration tasks are mandatory:
- You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates.
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
Optional: You can configure local storage by using the hostpath provisioner (HPP).
See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
5.4.1. Configuring local storage by using the HPP
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner.
The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR).
HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable.
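The storage pool that a CSI storage class later references is defined in the HPP custom resource. The following is a sketch of such a CR, assuming a pool named my-storage-pool backed by a hypothetical path on a dedicated partition:

```yaml
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool     # referenced by parameters.storagePool in the storage class
    path: "/var/myvolumes"    # hypothetical path; must not be on the OS partition
  workload:
    nodeSelector:
      kubernetes.io/os: linux
```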
5.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza
To use the hostpath provisioner (HPP), you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object after you create it.

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created that uses the PVC.
Procedure
Create a storageclass_csi.yaml file to define the storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool
- reclaimPolicy specifies whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
- volumeBindingMode specifies the timing of PV creation. The WaitForFirstConsumer configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
- parameters.storagePool specifies the name of the storage pool defined in the HPP custom resource (CR).
- Save the file and exit.
Create the StorageClass object by running the following command:

$ oc create -f storageclass_csi.yaml
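A persistent volume claim that requests storage from this class might look like the following sketch (the claim name and size are hypothetical). Because of WaitForFirstConsumer, the claim remains Pending until a pod or VM that uses it is scheduled:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpp-pvc                    # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: hostpath-csi   # the storage class created above
  resources:
    requests:
      storage: 30Gi                # hypothetical size
```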