MicroShift is Developer Preview software only.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.

Chapter 1. Understanding networking settings
Learn how to apply networking customization and default settings to Red Hat build of MicroShift deployments. Each node is contained to a single machine running a single Red Hat build of MicroShift instance, so each deployment requires individual configuration, pods, and settings.
Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:

- A service, such as NodePort
- API resources, such as Ingress and Route

By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can communicate with each other over this network, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
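For example, a pod can be exposed on a port of every node with a minimal NodePort service such as the following sketch; the name, selector, and port values are hypothetical placeholders and must match your workload and your cluster's node-port range.

Example NodePort service (illustrative sketch)

apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: hello                # must match your pod labels
  ports:
  - port: 8080                # cluster-internal service port
    targetPort: 8080          # container port on the pod
    nodePort: 30080           # port opened on every node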
1.1. About the OVN-Kubernetes network plugin
OVN-Kubernetes is the default networking solution for Red Hat build of MicroShift deployments. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN). The OVN-Kubernetes Container Network Interface (CNI) plugin is the network plugin for the cluster. A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node. OVN configures OVS on the node to implement the declared network configuration.
1.1.1. Network topology
OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. Because the overlay network uses a Geneve tunnel, the pod maximum transmission unit (MTU) is set smaller than the MTU of the physical interface on the host to leave room for the tunnel header.
OVS runs as a systemd service on the Red Hat build of MicroShift node. The OVS RPM package is installed as a dependency of the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.
1.1.1.1. IP forwarding
The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when it starts. This is required to forward incoming traffic to the CNI. For example, accessing a NodePort service from outside the cluster fails if ip_forward is disabled.
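You can confirm the setting on the host; a value of 1 means forwarding is enabled:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1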
1.1.2. Network performance optimizations
By default, three performance optimizations are applied to OVS services to minimize resource consumption:

- CPU affinity applied to ovs-vswitchd.service and ovsdb-server.service
- no-mlockall applied to openvswitch.service
- Handler and revalidator threads limited for ovs-vswitchd.service
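These optimizations are delivered as systemd drop-in files shipped with the microshift-networking RPM (listed in the Packaging section below). As an illustrative sketch only, a CPU-affinity drop-in that pins a service to CPU 0 looks like the following; the exact mask shipped by the package can differ:

[Service]
CPUAffinity=0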
1.1.3. Network features
Networking features available with Red Hat build of MicroShift 4.12 include:
- Kubernetes network policy
- Dynamic node IP
- Cluster network on specified host interface
- Secondary gateway interface
- Dual stack
Networking features not available with Red Hat build of MicroShift 4.12:
- Egress IP/firewall/QoS: disabled
- Hybrid networking: not supported
- IPsec: not supported
- Hardware offload: not supported
1.1.4. Red Hat build of MicroShift networking components and services overview
This brief overview describes networking components and their operation in Red Hat build of MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service.
- NetworkManager
NetworkManager is required to set up the initial gateway bridge on the Red Hat build of MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies of the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in Red Hat build of MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package.
- microshift-ovs-init
The microshift-ovs-init.service is installed by the microshift-networking RPM package as a systemd service dependent on microshift.service, and is responsible for setting up the OVS gateway bridge.
- OVN containers
Two OVN-Kubernetes daemon sets are rendered and applied by Red Hat build of MicroShift:

- ovnkube-master: includes the northd, nbdb, sbdb, and ovnkube-master containers.
- ovnkube-node: includes the OVN-Controller container.

After Red Hat build of MicroShift boots, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace.
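You can confirm that both daemon sets have pods running after boot by listing the namespace; the exact pod names vary per node:

$ oc get pods -n openshift-ovn-kubernetes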
- Packaging
OVN-Kubernetes manifests and startup logic are built into Red Hat build of MicroShift. The systemd services and configurations included in the microshift-networking RPM are:

- /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service
- /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service
- /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovsdb-server.service
- /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service
- /usr/bin/configure-ovs.sh for microshift-ovs-init.service
- /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service
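To verify exactly which of these files the package delivers on your host, you can query the installed RPM; the list can vary between MicroShift versions:

$ rpm -ql microshift-networking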
1.1.5. Bridge mappings
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through a virtual Ethernet (veth) pair: one end of the pair is attached to the pod namespace, and the other end is attached to the br-int bridge.
1.1.5.1. Primary gateway interface
You can specify the desired host interface name as gatewayInterface in the ovn.yaml config file. The specified interface is added to the OVS bridge br-ex, which acts as the gateway bridge for the CNI network.
1.1.5.2. Secondary gateway interface
You can set up one additional host interface for cluster ingress and egress in the ovn.yaml config file. The additional interface is added to a second OVS bridge, br-ex1. Cluster pod traffic directed to the additional host subnet is routed automatically through br-ex1 based on the destination IP.
Either two or three OVS bridges are created, based on the CNI configuration:

- Default deployment
The externalGatewayInterface is not specified in the ovn.yaml config file. Two OVS bridges, br-ex and br-int, are created.
- Customized deployment
The externalGatewayInterface is user-specified in the ovn.yaml config file. Three OVS bridges are created: br-ex, br-ex1, and br-int.
The br-ex bridge is created by microshift-ovs-init.service or manually. The br-ex bridge contains statically programmed OpenFlow rules that distinguish traffic to and from the host network (underlay) and the OVN network (overlay).

The br-int bridge is created by the ovnkube-master container. The br-int bridge contains dynamically programmed OpenFlow rules that handle cluster network traffic.
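To inspect these rules on a running node, you can dump the OpenFlow tables of either bridge with ovs-ofctl; the output is verbose and changes with cluster state:

$ sudo ovs-ofctl dump-flows br-ex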
1.2. Creating an OVN-Kubernetes configuration file
Red Hat build of MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
To create your ovn.yaml file, copy the provided example file by running the following command:

$ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml

To list the contents of the configuration file you created, run the following command:

$ cat /etc/microshift/ovn.yaml
Example ovn.yaml configuration file with default values

ovsInit:
  disableOVSInit: false
  gatewayInterface: ""
  externalGatewayInterface: ""
mtu: 1400
To customize your configuration, use the following table that lists the valid values you can use:

Table 1.1. Supported optional OVN-Kubernetes configurations for Red Hat build of MicroShift

Field | Type | Default | Description | Example
ovsInit.disableOVSInit | bool | false | Skip configuring the OVS bridge br-ex in microshift-ovs-init.service | true
ovsInit.gatewayInterface | Alpha | eth0 | Ingress interface that acts as the API gateway | eth0
ovsInit.externalGatewayInterface | Alpha | eth1 | Ingress interface routing external traffic to your services and pods inside the node | eth1
mtu | uint32 | 1400 | MTU value used for the pods | 1300
The OVS bridge is required. When disableOVSInit is true, the OVS bridge br-ex must be configured manually.

Important: If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.
Example custom ovn.yaml configuration file

ovsInit:
  disableOVSInit: true
  gatewayInterface: eth0
  externalGatewayInterface: eth1
mtu: 1300
When disableOVSInit is set to true in the ovn.yaml config file, the br-ex OVS bridge must be configured manually.
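A minimal starting point for that manual setup is creating the bridge with ovs-vsctl, as sketched below; a complete manual configuration also requires attaching the uplink interface and moving its IP configuration onto the bridge, which depends on your host networking:

$ sudo ovs-vsctl --may-exist add-br br-ex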
1.3. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Use the following steps to restart the ovnkube-master pod.

Access the remote cluster by running the following command:

$ export KUBECONFIG=$PWD/kubeconfig

Find the name of the ovnkube-master pod that you want to restart by running the following command:

$ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')

Delete the ovnkube-master pod by running the following command:

$ oc -n openshift-ovn-kubernetes delete pod $pod

Confirm that a new ovnkube-master pod is running by using the following command:

$ oc get pods -n openshift-ovn-kubernetes
The listing of the running pods shows a new ovnkube-master pod name and age.
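Alternatively, to block until the replacement pod reports ready instead of reading the listing, you can use oc wait; this sketch assumes the pods carry an app=ovnkube-master label, which you can verify with oc get pods --show-labels:

$ oc -n openshift-ovn-kubernetes wait --for=condition=Ready pod -l app=ovnkube-master --timeout=60s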
1.4. Deploying Red Hat build of MicroShift behind an HTTP(S) proxy
Deploy a Red Hat build of MicroShift cluster behind an HTTP(S) proxy when you want to add basic anonymity and security measures to your pods.
When deploying Red Hat build of MicroShift behind a proxy, you must configure the host operating system so that all components initiating HTTP(S) requests use the proxy service.
All user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in Red Hat build of MicroShift.
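One common way to apply the proxy host-wide on RHEL-based systems is a profile script sourced by login shells; the following is an illustrative sketch, and the proxy address and exclusion list are placeholders to replace with your own values:

Example /etc/profile.d/proxy.sh (illustrative sketch)

# Placeholder proxy endpoint; replace with your proxy server and port
export HTTP_PROXY="http://proxy.example.com:3128/"
export HTTPS_PROXY="http://proxy.example.com:3128/"
export NO_PROXY="localhost,127.0.0.1"

Note that profile scripts affect only login sessions; systemd services such as CRI-O need their own drop-in configuration, as described in the next section.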
1.5. Using a proxy in the CRI-O container runtime
To use an HTTP(S) proxy in CRI-O, you must set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file, placing the Environment directives in the [Service] section of the drop-in:

[Service]
Environment=NO_PROXY="localhost,127.0.0.1"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Reload the configuration settings:
$ sudo systemctl daemon-reload
Restart the CRI-O service to apply the settings:
$ sudo systemctl restart crio
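To confirm that the drop-in was picked up after the restart, you can display the environment that systemd passes to the service:

$ sudo systemctl show crio --property=Environment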
1.6. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
- To see a snapshot of OVS interfaces from a running Red Hat build of MicroShift cluster, use the following command:
$ sudo ovs-vsctl show
Example OVS interfaces in a running cluster
9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93
    Bridge br-ex
        Port enp1s0
            Interface enp1s0
                type: system
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-br-ex_localhost.localdomain-to-br-int 1
            Interface patch-br-ex_localhost.localdomain-to-br-int
                type: patch
                options: {peer=patch-br-int-to-br-ex_localhost.localdomain} 2
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port patch-br-int-to-br-ex_localhost.localdomain
            Interface patch-br-int-to-br-ex_localhost.localdomain
                type: patch
                options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
        Port eebee1ce5568761
            Interface eebee1ce5568761 3
        Port b47b1995ada84f4
            Interface b47b1995ada84f4 4
        Port "3031f43d67c167f"
            Interface "3031f43d67c167f" 5
        Port br-int
            Interface br-int
                type: internal
        Port ovn-k8s-mp0 6
            Interface ovn-k8s-mp0
                type: internal
    ovs_version: "2.17.3"
1 2 The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain ports are OVS patch ports that connect br-ex and br-int.
3 4 5 The pod interfaces eebee1ce5568761, b47b1995ada84f4, and 3031f43d67c167f are named with the first 15 characters of the pod sandbox ID and are plugged into the br-int bridge.
6 The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
1.7. The multicast DNS protocol
The multicast DNS protocol (mDNS) allows name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.

Red Hat build of MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on Red Hat build of MicroShift. The embedded DNS server allows .local domains exposed by Red Hat build of MicroShift to be discovered by other elements on the LAN.
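To check from another machine on the LAN that a name published over mDNS resolves, you can send a one-shot query to the mDNS multicast address; this sketch assumes dig is available and uses myapp.local as a hypothetical name. Responder behavior for such legacy unicast queries can vary:

$ dig +short @224.0.0.251 -p 5353 myapp.local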