Chapter 4. Post-installation configuration
4.1. Postinstallation configuration
The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment:
- Node placement rules for OpenShift Virtualization Operators, workloads, and controllers
- Enabling the creation of load balancer services by using the Red Hat OpenShift Service on AWS web console
- Defining a default storage class for the Container Storage Interface (CSI)
- Configuring local storage by using the Hostpath Provisioner (HPP)
4.2. Specifying nodes for OpenShift Virtualization components
The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement rules for some components after installing OpenShift Virtualization. However, virtual machines must not be present when you configure node placement rules for workloads.
4.2.1. About node placement rules for OpenShift Virtualization components
You can use node placement rules for the following tasks:
- Deploy virtual machines only on nodes intended for virtualization workloads.
- Deploy Operators only on infrastructure nodes.
- Maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:
nodeSelector
- Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
- Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
tolerations
- Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
4.2.2. Applying node placement rules
You can apply node placement rules by editing a HyperConverged or HostPathProvisioner object using the command line.
Prerequisites
- The oc CLI tool is installed.
- You are logged in with cluster administrator permissions.
Procedure
Edit the object in your default editor by running the following command:
$ oc edit <resource_type> <resource_name> -n openshift-cnv
- Save the file to apply the changes.
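For example, to apply placement rules in the HyperConverged CR that is created during OpenShift Virtualization installation (assuming the default resource name kubevirt-hyperconverged in the openshift-cnv namespace), you would run:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv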
4.2.3. Node placement rule examples
You can specify node placement rules for an OpenShift Virtualization component by editing a HyperConverged or HostPathProvisioner object.
4.2.3.1. HyperConverged object node placement rule example
To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) file that you create during OpenShift Virtualization installation.
Example HyperConverged object with nodeSelector rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value 1
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value 2
Example HyperConverged object with affinity rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value 1
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key 2
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - "8" 3
- 1
- Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
- 2
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
- 3
- Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
Example HyperConverged object with tolerations rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations: 1
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
- 1
- Nodes reserved for OpenShift Virtualization components are tainted with key = virtualization:NoSchedule. Only pods with matching tolerations are scheduled on reserved nodes.
4.2.3.2. HostPathProvisioner object node placement rule example
You can edit the HostPathProvisioner object directly or by using the web console.
You must schedule the hostpath provisioner and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run, and you cannot run virtual machines.
After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM.
You can configure node placement rules by specifying nodeSelector, affinity, or tolerations for the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.
Example HostPathProvisioner object with nodeSelector rule
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value 1
- 1
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
4.2.4. Additional resources
4.3. Postinstallation network configuration
By default, OpenShift Virtualization is installed with a single, internal pod network.
4.3.1. Installing networking Operators
4.3.2. Configuring a Linux bridge network
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
4.3.2.1. Creating a Linux bridge NNCP
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  desiredState:
    interfaces:
      - name: br1 2
        description: Linux bridge with eth1 as a port 3
        type: linux-bridge 4
        state: up 5
        ipv4:
          enabled: false 6
        bridge:
          options:
            stp:
              enabled: false 7
          port:
            - name: eth1 8
- 1
- Name of the policy.
- 2
- Name of the interface.
- 3
- Optional: Human-readable description of the interface.
- 4
- The type of interface. This example creates a bridge.
- 5
- The requested state for the interface after creation.
- 6
- Disables IPv4 in this example.
- 7
- Disables STP in this example.
- 8
- The node NIC to which the bridge is attached.
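After you save the manifest, you can apply it and check the policy status from the CLI. This is a short sketch; the file name is illustrative:
$ oc apply -f br1-eth1-policy.yaml
$ oc get nncp br1-eth1-policy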
4.3.2.2. Creating a Linux bridge NAD by using the web console
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the Red Hat OpenShift Service on AWS web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking > NetworkAttachmentDefinitions. Click Create Network Attachment Definition.
  Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
- Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
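If you prefer the CLI, a NetworkAttachmentDefinition manifest along the following lines produces a similar result to the console form. This is a sketch that assumes the br1 bridge from the NNCP example above; the NAD name, namespace, VLAN tag, and the cnv-bridge CNI type (shown in the console as CNV Linux bridge) are illustrative and may differ in your version:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network      # illustrative name
  namespace: my-namespace   # must match the namespace of the pod or VM
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "type": "cnv-bridge",
    "bridge": "br1",
    "vlan": 100,
    "macspoofchk": true
  }'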
4.3.3. Configuring a network for live migration
After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
4.3.3.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:
Example configuration file
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network 1
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1", 2
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts", 3
      "range": "10.200.5.0/24" 4
    }
  }'
- 1
- Specify the name of the NetworkAttachmentDefinition object.
- 2
- Specify the name of the NIC to be used for live migration.
- 3
- Specify the name of the CNI plugin that provides the network for the NAD.
- 4
- Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:
Example HyperConverged manifest
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network> 1
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...
- 1
- Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
4.3.3.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
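You can confirm the selection from the CLI by reading the liveMigrationConfig stanza of the HyperConverged CR, for example:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.liveMigrationConfig.network}'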
4.3.4. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You are logged in as a user with the cluster-admin role.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization > Overview.
- On the Settings tab, click Cluster.
- Expand General settings and SSH configuration.
- Set SSH over LoadBalancer service to on.
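With the setting on, the web console can create LoadBalancer services that expose SSH on a VM. Conceptually, such a service is an ordinary Kubernetes Service of type LoadBalancer that targets the VM's virt-launcher pod. The following sketch is illustrative only; the service name, namespace, and the assumption that the pod carries a kubevirt.io/domain label matching the VM name are not taken from this document:
apiVersion: v1
kind: Service
metadata:
  name: my-vm-ssh        # illustrative name
  namespace: my-vms      # illustrative namespace
spec:
  type: LoadBalancer
  selector:
    kubevirt.io/domain: my-vm   # assumes the virt-launcher pod is labeled with the VM name
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22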
4.4. Postinstallation storage configuration
The following storage configuration tasks are mandatory:
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
Optional: You can configure local storage by using the hostpath provisioner (HPP).
See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
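For example, if CDI does not recognize your storage provisioner, you can open the storage profile that CDI generates for the storage class and set recommended claim parameters. The values below are an illustrative sketch, not defaults; replace <storage_class> and the access and volume modes with values that your provider supports:
$ oc edit storageprofile <storage_class>
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <storage_class>
# ...
spec:
  claimPropertySets:
    - accessModes:
        - ReadWriteOnce      # access mode supported by your provider (assumption)
      volumeMode: Filesystem # volume mode supported by your provider (assumption)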
4.4.1. Configuring local storage by using the HPP
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner.
The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR).
HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable.
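For reference, an HPP CR that defines a basic storage pool with the storagePools stanza might look like the following sketch. The pool name and backing path are placeholders; the pool name must match the storagePool parameter of the storage class that you create in the next section:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
    - name: my-storage-pool    # placeholder pool name
      path: "/var/myvolumes"   # placeholder path on a partition separate from the OS
  workload:
    nodeSelector:
      kubernetes.io/os: linux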
4.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza
To use the hostpath provisioner (HPP), you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Prerequisites
- Log in as a user with cluster-admin privileges.
Procedure
Create a storageclass_csi.yaml file to define the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete 1
volumeBindingMode: WaitForFirstConsumer 2
parameters:
  storagePool: my-storage-pool 3
- 1
- The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
- 2
- The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
- 3
- Specify the name of the storage pool defined in the HPP CR.
- Save the file and exit.
Create the StorageClass object by running the following command:
$ oc create -f storageclass_csi.yaml
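To see the WaitForFirstConsumer behavior, you can create a claim against the new storage class; the claim stays Pending until a pod that uses it, such as a VM's virt-launcher or a CDI importer pod, is scheduled. A minimal sketch with an illustrative claim name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vm-disk        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath-csi
  resources:
    requests:
      storage: 30Gi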