Chapter 7. Networking
7.1. Networking overview
OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with Red Hat OpenShift Service on AWS networking and its ecosystem.
You cannot run OpenShift Virtualization on a single-stack IPv6 cluster.
The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible.
Figure 7.1. OpenShift Virtualization networking overview
Pods and VMs run on the same network infrastructure, which allows you to easily connect your containerized and virtualized workloads.
You can connect VMs to the default pod network and to any number of secondary networks.
The default pod network provides connectivity between all its members, service abstraction, IP management, microsegmentation, and other functionality.
Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.
The default pod network is overlay-based, tunneled through the underlying machine network.
The machine network can be defined over a selected set of network interface controllers (NICs).
Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation.
Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 7.1, or they can use the machine network.
7.1.1. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
- A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
7.1.2. Using the default pod network
- Connecting a virtual machine to the default pod network
- Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
- Exposing a virtual machine as a service
- You can expose a VM within the cluster or outside the cluster by creating a Service object.
7.1.3. Configuring VM secondary network interfaces
You can connect a virtual machine to a secondary network by using the Linux bridge, SR-IOV, and OVN-Kubernetes CNI plugins. You can list multiple secondary networks and interfaces in the VM specification, as shown in the sketch below. You do not need to specify the primary pod network in the VM specification when connecting to a secondary network interface.
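As a quick illustration only (a sketch, not a definitive manifest: the bridge binding and the NAD names secondary-net1 and secondary-net2 are placeholders), a VM specification that lists two secondary interfaces can look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-two-secondary-networks
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            # Each interface name must match an entry in spec.template.spec.networks
            - name: net1
              bridge: {}
            - name: net2
              bridge: {}
      networks:
        - name: net1
          multus:
            networkName: secondary-net1
        - name: net2
          multus:
            networkName: secondary-net2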
- Connecting a virtual machine to an OVN-Kubernetes secondary network
You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer 2 and localnet topologies for OVN-Kubernetes.
- A layer 2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plug-in uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
- A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
- Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD).
- Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
- Configuring and viewing IP addresses
- You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
7.1.4. Integrating with OpenShift Service Mesh
- Connecting a virtual machine to a service mesh
- OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
7.1.5. Managing MAC address pools
- Managing MAC address pools for network interfaces
- The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
7.1.6. Configuring SSH access
- Configuring SSH access to virtual machines
You can configure SSH access to VMs by using the following methods:
- You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
- You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH, as shown in the sketch after this list.
- You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
- You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
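The .ssh/config entry for the port-forwarding method is sketched below; this is an illustration only, assuming the virtctl client is installed and in your PATH, and a placeholder VM named example-vm in the example-namespace project.

# ~/.ssh/config
# Send any host that starts with "vm/" through virtctl port-forward
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p

You can then connect with OpenSSH, replacing root with a user that exists in the guest:

$ ssh root@vm/example-vm.example-namespace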
7.2. Connecting a virtual machine to the default pod network
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade
binding mode.
Traffic passing through network interfaces to the default pod network is interrupted during live migration.
7.2.1. Configuring masquerade mode from the command line
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses.
Procedure
Edit the interfaces spec of your virtual machine configuration file:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} 1
              ports: 2
                - port: 80
# ...
      networks:
        - name: default
          pod: {}
- 1
- Connect using masquerade mode.
- 2
- Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
Note: Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
7.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The Red Hat OpenShift Service on AWS cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
Procedure
In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} 1
              ports:
                - port: 80 2
# ...
      networks:
        - name: default
          pod: {}
      volumes:
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  dhcp4: true
                  addresses: [ fd10:0:2::2/120 ] 3
                  gateway6: fd10:0:2::1 4
- 1
- Connect using masquerade mode.
- 2
- Allows incoming traffic on port 80 to the virtual machine.
- 3
- The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
- 4
- The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
7.2.3. About jumbo frames support
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
- libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
- DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh
or a similar tool. This is because the Windows DHCP client does not read the MTU value.
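For illustration only, the following commands show one way to check and set the MTU from inside a Windows guest; the interface name Ethernet and the value 8900 are placeholders, and the value you set should match the MTU of your cluster network.

rem List interfaces and their current MTU values
netsh interface ipv4 show subinterfaces

rem Persistently set the MTU on the interface named "Ethernet"
netsh interface ipv4 set subinterface "Ethernet" mtu=8900 store=persistent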
7.3. Exposing a virtual machine by using a service
You can expose a virtual machine within the cluster or outside the cluster by creating a Service
object.
7.3.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort
and LoadBalancer
types, exposure to the outside world.
- ClusterIP
- Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
- NodePort
- Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
7.3.2. Dual-stack support
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy
and the spec.ipFamilies
fields in the Service
object.
The spec.ipFamilyPolicy
field can be set to one of the following values:
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
- This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies
field to one of the following array values:
- [IPv4]
- [IPv6]
- [IPv4, IPv6]
- [IPv6, IPv4]
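For illustration, a Service manifest that prefers dual-stack and lists IPv6 first might look like the following sketch; the service name and the special: key selector are placeholders in the style of the example in the next section.

apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
  selector:
    special: key
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376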
7.3.3. Creating a service by using the command line
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
Procedure
Edit the VirtualMachine manifest to add the label for service creation:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key 1
# ...
- 1
- Add special: key to the spec.template.metadata.labels stanza.
Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
- Create a Service manifest to expose the VM:

apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
# ...
  selector:
    special: key 1
  type: NodePort 2
  ports: 3
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 30000
- Save the Service manifest file.
- Create the service by running the following command:
$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
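For example, assuming the virtctl client is installed and your current project is example-namespace, one way to restart the VM from the manifest above is:

$ virtctl restart example-vm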
Verification
Query the Service object to verify that it is available:

$ oc get service -n example-namespace
7.4. Connecting a virtual machine to an OVN-Kubernetes secondary network
You can connect a virtual machine (VM) to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer 2 and localnet topologies for OVN-Kubernetes.
- A layer 2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plug-in uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
- A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
- Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD).
- Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
7.4.1. Creating an OVN-Kubernetes NAD
You can create an OVN-Kubernetes layer 2 or localnet network attachment definition (NAD) by using the Red Hat OpenShift Service on AWS web console or the CLI.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
7.4.1.1. Creating a NAD for layer 2 topology using the CLI
You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a NetworkAttachmentDefinition object:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: my-namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1", 1
      "name": "my-namespace-l2-network", 2
      "type": "ovn-k8s-cni-overlay", 3
      "topology":"layer2", 4
      "mtu": 1300, 5
      "netAttachDefName": "my-namespace/l2-network" 6
    }
- 1
- The CNI specification version. The required value is 0.3.1.
- 2
- The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces.
- 3
- The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay.
- 4
- The topological configuration for the network. The required value is layer2.
- 5
- Optional: The maximum transmission unit (MTU) value. The default value is automatically set by the kernel.
- 6
- The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.
Note: The above example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address.
Apply the manifest:
$ oc apply -f <filename>.yaml
7.4.1.2. Creating a NAD for localnet topology using the CLI
You can create a network attachment definition (NAD) which describes how to attach a pod to the underlying physical network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
- You have installed the Kubernetes NMState Operator.
- You have created a NodeNetworkConfigurationPolicy object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge (a sketch of such a policy follows this list).
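The NodeNetworkConfigurationPolicy in that last prerequisite is sketched below for orientation only; it assumes worker nodes, an existing br-ex OVS bridge, and a localnet name of localnet-network that matches the network name used in the NAD that follows. Adapt all of these values to your environment.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    ovn:
      # Map the logical localnet network to a physical OVS bridge on each selected node
      bridge-mappings:
        - localnet: localnet-network
          bridge: br-ex
          state: present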
Procedure
Create a NetworkAttachmentDefinition object:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1", 1
      "name": "localnet-network", 2
      "type": "ovn-k8s-cni-overlay", 3
      "topology": "localnet", 4
      "netAttachDefName": "default/localnet-network" 5
    }
- 1
- The CNI specification version. The required value is 0.3.1.
- 2
- The name of the network. This attribute must match the value of the spec.desiredState.ovn.bridge-mappings.localnet field of the NodeNetworkConfigurationPolicy object that defines the OVS bridge mapping.
- 3
- The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay.
- 4
- The topological configuration for the network. The required value is localnet.
- 5
- The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.
Apply the manifest:
$ oc apply -f <filename>.yaml
7.4.1.3. Creating a NAD for layer 2 topology by using the web console
You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
Procedure
- Go to Networking → NetworkAttachmentDefinitions in the web console.
- Click Create Network Attachment Definition. The network attachment definition must be in the same namespace as the pod or virtual machine using it.
- Enter a unique Name and optional Description.
- Select OVN Kubernetes L2 overlay network from the Network Type list.
- Click Create.
7.4.1.4. Creating a NAD for localnet topology using the web console
You can create a network attachment definition (NAD) to connect workloads to a physical network by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- Use nmstate to configure the localnet to OVS bridge mappings.
Procedure
- Navigate to Networking → NetworkAttachmentDefinitions in the web console.
- Click Create Network Attachment Definition. The network attachment definition must be in the same namespace as the pod or virtual machine using it.
- Enter a unique Name and optional Description.
- Select OVN Kubernetes secondary localnet network from the Network Type list.
- Enter the name of your pre-configured localnet identifier in the Bridge mapping field.
- Optional: You can explicitly set MTU to the specified value. The default value is chosen by the kernel.
- Optional: Encapsulate the traffic in a VLAN. The default value is none.
- Click Create.
7.4.2. Attaching a virtual machine to the OVN-Kubernetes secondary network
You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the Red Hat OpenShift Service on AWS web console or the CLI.
7.4.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-server
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: secondary 1
              bridge: {}
        resources:
          requests:
            memory: 1024Mi
      networks:
        - name: secondary 2
          multus:
            networkName: <nad_name> 3
# ...
Apply the VirtualMachine manifest:

$ oc apply -f <filename>.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
7.5. Connecting a virtual machine to a service mesh
OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
7.5.1. Adding a virtual machine to a service mesh
To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject
annotation to true
. Then expose your VM as a service to view your application in the mesh.
To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.
Prerequisites
- You installed the Service Mesh Operators.
- You created the Service Mesh control plane.
- You added the VM project to the Service Mesh member roll.
Procedure
Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation:

Example configuration file

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-istio
  name: vm-istio
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-istio
        app: vm-istio 1
      annotations:
        sidecar.istio.io/inject: "true" 2
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} 3
          disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - containerDisk:
            image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel
          name: containerdisk
Apply the VM configuration:
$ oc apply -f <vm_name>.yaml 1
- 1
- The name of the virtual machine YAML file.
Create a Service object to expose your VM to the service mesh.

apiVersion: v1
kind: Service
metadata:
  name: vm-istio
spec:
  selector:
    app: vm-istio 1
  ports:
    - port: 8080
      name: http
      protocol: TCP
- 1
- The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.template.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
Create the service:
$ oc create -f <service_name>.yaml 1
- 1
- The name of the service YAML file.
7.5.2. Additional resources
7.6. Configuring a dedicated network for live migration
You can configure a dedicated secondary network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
7.6.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition
object to the HyperConverged
custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:

Example configuration file

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network 1
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1", 2
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts", 3
      "range": "10.200.5.0/24" 4
    }
  }'
- 1
- Specify the name of the NetworkAttachmentDefinition object.
- 2
- Specify the name of the NIC to be used for live migration.
- 3
- Specify the name of the CNI plugin that provides the network for the NAD.
- 4
- Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

Example HyperConverged manifest

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network> 1
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...
- 1
- Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
7.6.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured a Multus network for live migration.
Procedure
- Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
7.6.3. Additional resources
7.7. Configuring and viewing IP addresses
You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
7.7.1. Configuring IP addresses for virtual machines
You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line.
You can configure a dynamic IP address when you create a VM by using the command line.
The IP address is provisioned with cloud-init.
7.7.1.1. Configuring an IP address when creating a virtual machine by using the command line
You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
- You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.
Procedure
Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:

To configure a dynamic IP address, specify the interface name and enable DHCP:

kind: VirtualMachine
spec:
# ...
  template:
  # ...
    spec:
      volumes:
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1: 1
                  dhcp4: true
- 1
- Specify the interface name.
To configure a static IP, specify the interface name and the IP address:
kind: VirtualMachine
spec:
# ...
  template:
  # ...
    spec:
      volumes:
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1: 1
                  addresses:
                    - 10.10.10.14/24 2
7.7.2. Viewing IP addresses of virtual machines
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line.
The network information is collected by the QEMU guest agent.
7.7.2.1. Viewing the IP address of a virtual machine by using the web console
You can view the IP address of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Select a VM to open the VirtualMachine details page.
- Click the Details tab to view the IP address.
7.7.2.2. Viewing the IP address of a virtual machine by using the command line
You can view the IP address of a virtual machine (VM) by using the command line.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
Obtain the virtual machine instance configuration by running the following command:
$ oc describe vmi <vmi_name>
Example output
# ...
Interfaces:
  Interface Name:  eth0
  Ip Address:      10.244.0.37/24
  Ip Addresses:
    10.244.0.37/24
    fe80::858:aff:fef4:25/64
  Mac:             0a:58:0a:f4:00:25
  Name:            default
  Interface Name:  v2
  Ip Address:      1.1.1.7/24
  Ip Addresses:
    1.1.1.7/24
    fe80::f4d9:70ff:fe13:9089/64
  Mac:             f6:d9:70:13:90:89
  Interface Name:  v1
  Ip Address:      1.1.1.1/24
  Ip Addresses:
    1.1.1.1/24
    1.1.1.2/24
    1.1.1.4/24
    2001:de7:0:f101::1/64
    2001:db8:0:f101::1/64
    fe80::1420:84ff:fe10:17aa/64
  Mac:             16:20:84:10:17:aa
7.7.3. Additional resources
7.8. Managing MAC address pools for network interfaces
The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address.
A virtual machine instance created from that VM retains the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
7.8.1. Managing KubeMacPool by using the command line
You can disable and re-enable KubeMacPool by using the command line.
KubeMacPool is enabled by default.
Procedure
To disable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore
To re-enable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
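As an optional check (a generic label query, not a KubeMacPool-specific command), you can list a namespace's labels to confirm whether the mutatevirtualmachines.kubemacpool.io label is currently set:

$ oc get namespace <namespace1> --show-labels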