Chapter 1. About the OVN-Kubernetes network plugin
The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for a MicroShift node. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN).
- Default network configuration and connections are applied automatically in MicroShift with the microshift-networking RPM during installation.
- A node that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node.
- OVN-Kubernetes configures OVS on the node to implement the declared network configuration.
- Host physical interfaces are not bound by default to the OVN-Kubernetes gateway bridge, br-ex. You can use standard tools on the host for managing the default gateway, such as the NetworkManager CLI (nmcli).
- Changing the CNI is not supported on MicroShift.
Using configuration files or custom scripts, you can configure the following networking settings:
- You can use subnet CIDR ranges to allocate IP addresses to pods.
- You can change the maximum transmission unit (MTU) value.
- You can configure firewall ingress and egress.
- You can define network policies in MicroShift, including ingress and egress rules, as shown in the example after this list.
- You can use the MicroShift Multus plugin to chain other CNI plugins.
- You can configure or remove the ingress router.
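As an illustration of the network policy item above, policies use the standard Kubernetes NetworkPolicy API. The following minimal sketch restricts ingress so that only pods in the same namespace can connect; the policy and namespace names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # hypothetical policy name
  namespace: my-app            # hypothetical namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # permit traffic only from pods in this namespace
```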
1.1. MicroShift networking configuration matrix
The following table summarizes the status of networking features and capabilities that are either present as defaults, supported for configuration, or not available with the MicroShift service:
| Network capability | Availability | Configuration supported | 
|---|---|---|
| Advertise address | Yes | Yes [1] | 
| Kubernetes network policy | Yes | Yes | 
| Kubernetes network policy logs | Not available | N/A | 
| Load balancing | Yes | Yes | 
| Multicast DNS | Yes | Yes [2] | 
| Network proxies | Yes [3] | CRI-O | 
| Network performance | Yes | MTU configuration | 
| Egress IPs | Not available | N/A | 
| Egress firewall | Not available | N/A | 
| Egress router | Not available | N/A | 
| Firewall | No [4] | Yes | 
| Hardware offloading | Not available | N/A | 
| Hybrid networking | Not available | N/A | 
| IPsec encryption for intra-cluster communication | Not available | N/A | 
| IPv6 | Supported [5] | N/A | 
| Ingress router | Yes | Yes [6] | 
| Multiple networks plug-in | Yes | Yes | 
1. If unset, the default value is set to the next immediate subnet after the service network. For example, when the service network is 10.43.0.0/16, the advertiseAddress is set to 10.44.0.0/32.
2. You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
3. There is no built-in transparent proxying of egress traffic in MicroShift. Egress must be manually configured.
4. Setting up the firewalld service is supported by RHEL for Edge.
5. IPv6 is supported in both single-stack and dual-stack networks with the OVN-Kubernetes network plugin. IPv6 can also be used by connecting to other networks with the MicroShift Multus CNI plugin.
6. Configure by using the MicroShift config.yaml file.
1.1.1. Default settings
The Generic Device Plugin for MicroShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
					If you do not create a config.yaml file or use a configuration snippet YAML file, default values are used. The following example shows the default configuration settings.
				
To see the default values, run the following command:

```console
$ microshift show-config
```

Default values example output in YAML form:
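The full output is long; the following partial sketch shows only values that are cited elsewhere in this chapter, and the exact field layout and remaining defaults vary by release, so treat it as illustrative rather than authoritative:

```yaml
apiServer:
  advertiseAddress: 10.44.0.0/32   # next immediate subnet after the service network (footnote 1 above)
network:
  clusterNetwork:
    - 10.42.0.0/16                 # pod subnet, matching the node network in the topology section
  serviceNetwork:
    - 10.43.0.0/16                 # service subnet
```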
1.2. Network features
Networking features available with MicroShift 4.20 include:
- Kubernetes network policy
- Dynamic node IP
- Custom gateway interface
- Second gateway interface
- Node network on specified host interface
- Blocking external access to NodePort service on specific host interfaces
Networking features not available with MicroShift 4.20:
- Egress IP/firewall/QoS: disabled
- Hybrid networking: not supported
- IPsec: not supported
- Hardware offload: not supported
1.3. IP forward
				The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a node fails if ip_forward is disabled.
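For example, you can confirm the setting on a running host with a standard sysctl query; a value of 1 means forwarding is enabled:

```console
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
```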
			
1.4. Network performance optimizations
By default, three performance optimizations are applied to OVS services to minimize resource consumption:
- CPU affinity to ovs-vswitchd.service and ovsdb-server.service
- no-mlockall to openvswitch.service
- Limit handler and revalidator threads to ovs-vswitchd.service
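These optimizations are delivered as systemd drop-in files, which are listed in the Packaging section below. To review what is in effect on the host, you can print a unit together with its drop-ins using standard systemd tooling; the drop-in contents vary by release:

```console
$ systemctl cat ovs-vswitchd.service
```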
1.5. MicroShift networking components and services
				This brief overview describes networking components and their operation in MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service.
			
- NetworkManager
- NetworkManager is required to set up the initial gateway bridge on the MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies of the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package.
- microshift-ovs-init
- The microshift-ovs-init.service is installed by the microshift-networking RPM package as a dependent systemd service to microshift.service. It is responsible for setting up the OVS gateway bridge.
- OVN containers
- Two OVN-Kubernetes daemon sets are rendered and applied by MicroShift:
  - ovnkube-master: includes the northd, nbdb, sbdb, and ovnkube-master containers.
  - ovnkube-node: includes the OVN-Controller container.
- After MicroShift starts, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace, as shown in the verification example after this list.
- Packaging
- OVN-Kubernetes manifests and startup logic are built into MicroShift. The systemd services and configurations included in the microshift-networking RPM are:
  - /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service
  - /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service
  - /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovsdb-server.service
  - /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service
  - /usr/bin/configure-ovs.sh for microshift-ovs-init.service
  - /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service
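You can verify both the installed files and the deployed containers from the host. To list the files that the microshift-networking RPM installs, run a standard RPM query:

```console
$ rpm -ql microshift-networking
```

To confirm that the OVN-Kubernetes daemon set pods are running after MicroShift starts, list the pods in the namespace named above:

```console
$ oc get pods -n openshift-ovn-kubernetes
```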
 
1.6. Bridge mappings
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through virtual Ethernet (veth) pairs: one end of the pair is attached to the pod namespace, and the other end is attached to the br-int bridge.
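You can inspect br-ex, br-int, and the patch ports that connect them with standard OVS tooling on the host; the exact ports shown depend on the host configuration:

```console
$ ovs-vsctl show
```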
			
1.7. Network topology
OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to the default route MTU if it is not configured.
To configure the MTU, set a value equal to or less than the MTU of the physical interface on the host. A smaller value leaves room for the required information that is added to the tunnel header before transmission.
					The MTU value of the OVN overlay networking in MicroShift must be 100 bytes smaller than the MTU value of the base network. If no MTU value is configured, MicroShift autoconfigures the value using the MTU value of the default gateway (Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6)) of the host. If the auto-configuration does not work correctly, the MTU value can be configured manually. For example, if the MTU value of the network is 9000, the OVN MTU size must be set to 8900.
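Continuing the 9000-byte example, the value can be set in the MicroShift config.yaml file. The network.mtu field name reflects recent MicroShift releases and is an assumption here, so check the configuration reference for your version:

```yaml
network:
  mtu: 8900   # 100 bytes less than the 9000-byte MTU of the base network
```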
				
				OVS runs as a systemd service on the MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.
			
Figure: Red Hat build of MicroShift network topology
1.7.1. Description of the OVN logical components of the virtualized network
- OVN node switch
- A virtual switch named <node-name>. The OVN node switch is named according to the hostname of the node. In this example, the node-name is microshift-dev.
- OVN cluster router
- A virtual router named ovn_cluster_router, also known as the distributed router. In this example, the node network is 10.42.0.0/16.
- OVN join switch
- A virtual switch named join.
- OVN gateway router
- A virtual router named GR_<node-name>, also known as the external gateway router.
- OVN external switch
- A virtual switch named ext_<node-name>.
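One way to see these logical switches and routers is to query the OVN northbound database with ovn-nbctl from inside the ovnkube-master pod. This is a sketch: the pod name suffix is generated, and the container that carries the client can vary by release:

```console
$ oc get pods -n openshift-ovn-kubernetes        # find the ovnkube-master pod name
$ oc exec -n openshift-ovn-kubernetes <ovnkube-master-pod> -c nbdb -- ovn-nbctl show
```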
1.7.2. Description of the connections in the network topology figure
- The north-south traffic between the network service and the OVN external switch ext_microshift-dev is provided through the host kernel by the gateway bridge br-ex.
- The OVN gateway router GR_microshift-dev is connected to the external network switch ext_microshift-dev through the logical router port 4. Port 4 is attached with the node IP address 192.168.122.14.
- The join switch join connects the OVN gateway router GR_microshift-dev to the OVN cluster router ovn_cluster_router. The IP address range is 100.64.0.0/16.
  - The OVN gateway router GR_microshift-dev connects to the OVN join switch join through the logical router port 3. Port 3 attaches with the internal IP address 100.64.0.2.
  - The OVN cluster router ovn_cluster_router connects to the join switch join through the logical router port 2. Port 2 attaches with the internal IP address 100.64.0.1.
- The OVN cluster router ovn_cluster_router connects to the node switch microshift-dev through the logical router port 1. Port 1 is attached with the OVN cluster network IP address 10.42.0.1.
- The east-west traffic between the pods and the network service is provided by the OVN cluster router ovn_cluster_router and the node switch microshift-dev. The IP address range is 10.42.0.0/24.
- The east-west traffic between pods is provided by the node switch microshift-dev without network address translation (NAT).
- The north-south traffic between the pods and the external network is provided by the OVN cluster router ovn_cluster_router and the host network. This router is connected through the ovn-kubernetes management port ovn-k8s-mp0, which has the IP address 10.42.0.2.
- All the pods are connected to the OVN node switch through their interfaces. In this example, Pod 1 and Pod 2 are connected to the node switch through Interface 1 and Interface 2.