
Chapter 1. About the OVN-Kubernetes network plugin


The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for MicroShift clusters. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN).

  • Default network configuration and connections are applied automatically in MicroShift with the microshift-networking RPM during installation.
  • A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node.
  • OVN-Kubernetes configures OVS on the node to implement the declared network configuration.
  • Host physical interfaces are not bound by default to the OVN-Kubernetes gateway bridge, br-ex. You can use standard tools on the host, such as the NetworkManager CLI (nmcli), to manage the default gateway, as shown in the example after this list.
  • Changing the CNI is not supported on MicroShift.
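
For example, you can check which interface and NetworkManager connection currently provide the default gateway with standard host tools. This is a minimal sketch; the interface and connection names in the output depend on your host:

    $ ip route show default
    $ nmcli connection show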

Using configuration files or custom scripts, you can configure the following networking settings:

  • You can use subnet CIDR ranges to allocate IP addresses to pods.
  • You can change the maximum transmission unit (MTU) value.
  • You can configure firewall ingress and egress.
  • You can define network policies in the MicroShift cluster, including ingress and egress rules.
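
As an illustration of the subnet customization listed above, a minimal sketch of the MicroShift configuration file (typically /etc/microshift/config.yaml) that overrides the pod and service subnets might look like the following. The CIDR values are arbitrary examples; choose ranges that do not overlap with your host networks:

    network:
      clusterNetwork:
        - 10.45.0.0/16
      serviceNetwork:
        - 10.46.0.0/16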

1.1. MicroShift networking customization matrix

The following table summarizes the status of networking features and capabilities that are either present as defaults, supported for configuration, or not available with the MicroShift service:

Table 1.1. MicroShift networking capabilities and customization status
Network feature                                      Availability         Customization supported

Advertise address                                    Yes                  Yes [1]
Kubernetes network policy                            Yes                  Yes
Kubernetes network policy logs                       Not available        N/A
Load balancing                                       Yes                  Yes
Multicast DNS                                        Yes                  Yes [2]
Network proxies                                      Yes [3]              CRI-O
Network performance                                  Yes                  MTU configuration
Egress IPs                                           Not available        N/A
Egress firewall                                      Not available        N/A
Egress router                                        Not available        N/A
Firewall                                             No [4]               Yes
Hardware offloading                                  Not available        N/A
Hybrid networking                                    Not available        N/A
IPsec encryption for intra-cluster communication     Not available        N/A
IPv6                                                 Not available [5]    N/A

  1. If unset, the default value is set to the next immediate subnet after the service network. For example, when the service network is 10.43.0.0/16, the advertiseAddress is set to 10.44.0.0/32.
  2. You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
  3. There is no built-in transparent proxying of egress traffic in MicroShift. Egress must be manually configured.
  4. Setting up the firewalld service is supported by RHEL for Edge.
  5. IPv6 is not available in any configuration.

1.1.1. Default settings

If you do not create a config.yaml file, default values are used. The following example shows the default configuration settings.

  • To see the default values, run the following command:

    $ microshift show-config

    Default values example output in YAML form

    dns:
      baseDomain: microshift.example.com 1
    network:
      clusterNetwork:
        - 10.42.0.0/16 2
      serviceNetwork:
        - 10.43.0.0/16 3
      serviceNodePortRange: 30000-32767 4
    node:
      hostnameOverride: "" 5
      nodeIP: "" 6
    apiServer:
      advertiseAddress: 10.44.0.0/32 7
      subjectAltNames: [] 8
    debugging:
      logLevel: "Normal" 9

    1. Base domain of the cluster. All managed DNS records are subdomains of this base.
    2. A block of IP addresses from which pod IP addresses are allocated.
    3. A block of virtual IP addresses for Kubernetes services.
    4. The port range allowed for Kubernetes services of type NodePort.
    5. The name of the node. The default value is the hostname.
    6. The IP address of the node. The default value is the IP address of the default route.
    7. A string that specifies the IP address from which the API server is advertised to members of the cluster. The default value is calculated based on the address of the service network.
    8. Subject Alternative Names for the API server certificates.
    9. Log verbosity. Valid values for this field are Normal, Debug, Trace, or TraceAll.

1.2. Network features

Networking features available with MicroShift 4.15 include:

  • Kubernetes network policy
  • Dynamic node IP
  • Custom gateway interface
  • Second gateway interface
  • Cluster network on specified host interface
  • Blocking external access to NodePort service on specific host interfaces

Networking features not available with MicroShift 4.15:

  • Egress IP/firewall/QoS: disabled
  • Hybrid networking: not supported
  • IPsec: not supported
  • Hardware offload: not supported

1.3. IP forward

The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a cluster fails if ip_forward is disabled.
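
You can confirm the setting on the host with sysctl; a value of 1 means forwarding is enabled:

    $ sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1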

1.4. Network performance optimizations

By default, three performance optimizations are applied to OVS services to minimize resource consumption; a sketch of the CPU affinity drop-in follows this list:

  • CPU affinity for ovs-vswitchd.service and ovsdb-server.service
  • no-mlockall for openvswitch.service
  • Limited handler and revalidator threads for ovs-vswitchd.service
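
The CPU affinity is applied through systemd drop-in files. The following is a minimal sketch of what such a drop-in can look like; the actual CPU list in microshift-cpuaffinity.conf depends on the release and deployment, so treat the value below as illustrative only:

    [Service]
    CPUAffinity=0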

1.5. MicroShift networking components and services

This brief overview describes networking components and their operation in MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service.

NetworkManager
NetworkManager is required to set up the initial gateway bridge on the MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies of the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package.
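
For reference, enabling the keyfile plugin in a NetworkManager configuration file generally looks like the following snippet. This is a generic illustration of the plugin setting, not the literal content of microshift-nm.conf:

    [main]
    plugins=keyfile
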
microshift-ovs-init
The microshift-ovs-init.service is installed by the microshift-networking RPM package as a dependent systemd service to microshift.service. It is responsible for setting up the OVS gateway bridge.
OVN containers

Two OVN-Kubernetes daemon sets are rendered and applied by MicroShift.

  • ovnkube-master: includes the northd, nbdb, sbdb, and ovnkube-master containers.
  • ovnkube-node: includes the OVN-Controller container.

    After MicroShift starts, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace.
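
    After startup, you can verify that both daemon sets are running. The pod name suffixes are generated and differ on each node:

        $ oc get pods -n openshift-ovn-kubernetes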

Packaging

OVN-Kubernetes manifests and startup logic are built into MicroShift. The systemd services and configurations included in the microshift-networking RPM are:

  • /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service
  • /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service
  • /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovsdb-server.service
  • /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service
  • /usr/bin/configure-ovs.sh for microshift-ovs-init.service
  • /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service
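
You can list the files that the RPM installs to confirm these paths on your system:

    $ rpm -ql microshift-networking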

1.6. Bridge mappings

Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through a virtual ethernet (veth) pair: one end of the pair is attached to the pod namespace, and the other end is attached to the br-int bridge.
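
You can inspect the bridges and the patch ports that connect them directly on the host. This is a quick check rather than a configuration step; the exact port names include the node name and differ from system to system:

    $ sudo ovs-vsctl show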

1.7. Network topology

OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to the default route MTU if it is not configured.

To configure the MTU, set a value that is equal to or less than the MTU of the physical interface on the host. Using a smaller value leaves room for the required information that is added to the tunnel header before a packet is transmitted.
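
Before choosing a pod MTU, check the MTU of the physical interface on the host; eno1 is a placeholder interface name in this sketch:

    $ ip link show eno1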

OVS runs as a systemd service on the MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.

Red Hat build of MicroShift network topology


1.7.1. Description of the OVN logical components of the virtualized network

OVN node switch

A virtual switch named <node-name>. The OVN node switch is named according to the hostname of the node.

  • In this example, the node-name is microshift-dev.
OVN cluster router

A virtual router named ovn_cluster_router, also known as the distributed router.

  • In this example, the cluster network is 10.42.0.0/16.
OVN join switch
A virtual switch named join.
OVN gateway router
A virtual router named GR_<node-name>, also known as the external gateway router.
OVN external switch
A virtual switch named ext_<node-name>.

1.7.2. Description of the connections in the network topology figure

  • The north-south traffic between the network service and the OVN external switch ext_microshift-dev is provided through the host kernel by the gateway bridge br-ex.
  • The OVN gateway router GR_microshift-dev is connected to the external network switch ext_microshift-dev through logical router port 4. Port 4 is assigned the node IP address 192.168.122.14.
  • The join switch join connects the OVN gateway router GR_microshift-dev to the OVN cluster router ovn_cluster_router. The IP address range is 100.64.0.0/16.

    • The OVN gateway router GR_microshift-dev connects to the join switch join through logical router port 3. Port 3 is assigned the internal IP address 100.64.0.2.
    • The OVN cluster router ovn_cluster_router connects to the join switch join through logical router port 2. Port 2 is assigned the internal IP address 100.64.0.1.
  • The OVN cluster router ovn_cluster_router connects to the node switch microshift-dev through logical router port 1. Port 1 is assigned the OVN cluster network IP address 10.42.0.1.
  • The east-west traffic between the pods and the network service is provided by the OVN cluster router ovn_cluster_router and the node switch microshift-dev. The IP address range is 10.42.0.0/24.
  • The east-west traffic between pods is provided by the node switch microshift-dev without network address translation (NAT).
  • The north-south traffic between the pods and the external network is provided by the OVN cluster router ovn_cluster_router and the host network. This router is connected through the ovn-kubernetes management port ovn-k8s-mp0, with the IP address 10.42.0.2.
  • All the pods are connected to the OVN node switch through their interfaces.

    • In this example, Pod 1 and Pod 2 are connected to the node switch through Interface 1 and Interface 2.
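
If you want to inspect the logical components described above, you can list them from the OVN northbound database. This is a sketch only: the ovnkube-master pod name is a placeholder, and the nbdb container name is taken from the daemon set description earlier in this chapter:

    $ oc get pods -n openshift-ovn-kubernetes
    $ oc exec -n openshift-ovn-kubernetes <ovnkube-master-pod> -c nbdb -- ovn-nbctl show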