Chapter 1. Understanding networking settings
Learn how to apply networking customization and default settings to Red Hat build of MicroShift deployments. Each node is contained to a single machine and a single Red Hat build of MicroShift instance, so each deployment requires individual configuration, pods, and settings.
Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:
- A service such as NodePort
- API resources, such as Ingress and Route
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can communicate with each other over the network, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
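For reference, a minimal NodePort service manifest can look like the following sketch. The hello-microshift name, labels, and port numbers are hypothetical placeholders; match them to your own workload:
$ oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-microshift
spec:
  type: NodePort
  selector:
    app: hello-microshift   # must match the labels on your pods
  ports:
  - port: 8080              # cluster-internal service port
    targetPort: 8080        # container port in the pod
    nodePort: 30080         # host port opened on the node (30000-32767 range)
EOF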
1.1. About the OVN-Kubernetes network plugin
OVN-Kubernetes is the default networking solution for Red Hat build of MicroShift deployments. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN). The OVN-Kubernetes Container Network Interface (CNI) plugin is the network plugin for the cluster. A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node. OVN configures OVS on the node to implement the declared network configuration.
1.1.1. Network topology
OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to a smaller value than the MTU of the physical interface on the host. This smaller MTU makes room for the required information that is added to the tunnel header before it is transmitted.
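For example, with a common physical MTU of 1500, a pod MTU of 1400 leaves 100 bytes of headroom for the Geneve encapsulation. As a sketch, you can compare the two values on a running node with the following commands; enp1s0 is an example interface name, and ovn-k8s-mp0 is the OVN-Kubernetes management port whose MTU reflects the pod MTU:
$ ip -o link show enp1s0 | awk '{print $5}'
$ ip -o link show ovn-k8s-mp0 | awk '{print $5}'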
OVS runs as a systemd service on the Red Hat build of MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.
Red Hat build of MicroShift network topology
1.1.1.1. Description of the OVN logical components of the virtualized network
- OVN node switch: A virtual switch named <node-name>. The OVN node switch is named according to the hostname of the node. In this example, the node-name is microshift-dev.
- OVN cluster router: A virtual router named ovn_cluster_router, also known as the distributed router. In this example, the cluster network is 10.42.0.0/16.
- OVN join switch: A virtual switch named join.
- OVN gateway router: A virtual router named GR_<node-name>, also known as the external gateway router.
- OVN external switch: A virtual switch named ext_<node-name>.
1.1.1.2. Description of the connections in the network topology figure
- The north-south traffic between the network service device enp1s0 and the OVN external switch ext_microshift-dev is provided through the OVS patch port by the gateway bridge br-ex.
- The OVN gateway router GR_microshift-dev is connected to the external network switch ext_microshift-dev through logical router port 4. Port 4 is attached with the node IP address 192.168.122.14.
- The join switch join connects the OVN gateway router GR_microshift-dev to the OVN cluster router ovn_cluster_router. The IP address range is 100.64.0.0/16.
  - The OVN gateway router GR_microshift-dev connects to the OVN join switch join through logical router port 3. Port 3 is attached with the internal IP address 100.64.0.2.
  - The OVN cluster router ovn_cluster_router connects to the join switch join through logical router port 2. Port 2 is attached with the internal IP address 100.64.0.1.
- The OVN cluster router ovn_cluster_router connects to the node switch microshift-dev through logical router port 1. Port 1 is attached with the OVN cluster network IP address 10.42.0.1.
- The east-west traffic between the pods and the network service is provided by the OVN cluster router ovn_cluster_router and the node switch microshift-dev. The IP address range is 10.42.0.0/24.
- The east-west traffic between pods is provided by the node switch microshift-dev without network address translation (NAT).
- The north-south traffic between the pods and the external network is provided by the OVN cluster router ovn_cluster_router and the host network. This router is connected through the ovn-kubernetes management port ovn-k8s-mp0, with the IP address 10.42.0.2.
- All the pods are connected to the OVN node switch through their interfaces. In this example, Pod 1 and Pod 2 are connected to the node switch through Interface 1 and Interface 2.
1.1.2. IP forward
The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a cluster fails if ip_forward is disabled.
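You can confirm the parameter on a running node; a value of 1 means forwarding is enabled:
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1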
1.1.3. Network performance optimizations
By default, three performance optimizations are applied to OVS services to minimize resource consumption (see the inspection sketch after this list):
- CPU affinity to ovs-vswitchd.service and ovsdb-server.service
- no-mlockall to openvswitch.service
- Limit handler and revalidator threads to ovs-vswitchd.service
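To inspect how these optimizations are applied on your host, you can display the unit definitions, including the microshift-cpuaffinity.conf drop-ins, and check the CPU affinity of the running OVS daemon. A sketch using standard tooling:
$ systemctl cat ovs-vswitchd.service
$ systemctl cat ovsdb-server.service
$ taskset -cp $(pidof ovs-vswitchd)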
1.1.4. Network features
Networking features available with Red Hat build of MicroShift 4.13 include:
- Kubernetes network policy
- Dynamic node IP
- Cluster network on specified host interface
Networking features not available with Red Hat build of MicroShift 4.13:
- Egress IP/firewall/QoS: disabled
- Hybrid networking: not supported
- IPsec: not supported
- Hardware offload: not supported
1.1.5. Red Hat build of MicroShift networking components and services
This brief overview describes networking components and their operation in Red Hat build of MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service.
- NetworkManager: NetworkManager is required to set up the initial gateway bridge on the Red Hat build of MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies to the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in Red Hat build of MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package.
- microshift-ovs-init: The microshift-ovs-init.service is installed by the microshift-networking RPM package as a dependent systemd service to microshift.service. It is responsible for setting up the OVS gateway bridge.
- OVN containers: Two OVN-Kubernetes daemon sets are rendered and applied by Red Hat build of MicroShift.
  - ovnkube-master: Includes the northd, nbdb, sbdb and ovnkube-master containers.
  - ovnkube-node: Includes the OVN-Controller container.
  After Red Hat build of MicroShift boots, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace.
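You can confirm that both daemon sets were applied by listing them; for example:
$ oc get daemonset -n openshift-ovn-kubernetes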
- Packaging: OVN-Kubernetes manifests and startup logic are built into Red Hat build of MicroShift. The systemd services and configurations included in the microshift-networking RPM are:
  - /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service
  - /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service
  - /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovsdb-server.service
  - /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service
  - /usr/bin/configure-ovs.sh for microshift-ovs-init.service
  - /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service
1.1.6. Bridge mappings
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through a virtual ethernet pair: one end of the virtual ethernet pair is attached to the pod namespace, and the other end is attached to the br-int bridge.
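You can observe this wiring on a running node. For example, the following commands list the OVS bridges and the patch ports that connect them; output varies by host:
$ sudo ovs-vsctl list-br
$ sudo ovs-vsctl show | grep -A1 'type: patch'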
1.1.6.1. Primary gateway interface
You can specify the desired host interface name in the ovn.yaml config file as gatewayInterface. The specified interface is added to the OVS bridge br-ex, which acts as the gateway bridge for the CNI network.
1.2. Creating an OVN-Kubernetes configuration file
Red Hat build of MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
To create your ovn.yaml file, run the following command:
$ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml
To list the contents of the default configuration file that you copied, run the following command:
$ cat /etc/microshift/ovn.yaml.default
Example yaml configuration file with default values
ovsInit:
  disableOVSInit: false
  gatewayInterface: "" 1
mtu: 1400
1 The default value is an empty string, which means "not specified". The CNI network plugin auto-detects the interface with the default route.
To customize your configuration, use the following table that lists the valid values you can use:

Table 1.1. Supported optional OVN-Kubernetes configurations for Red Hat build of MicroShift

Field                    | Type   | Default | Description                                                           | Example
ovsInit.disableOVSInit   | bool   | false   | Skip configuring the OVS bridge br-ex in microshift-ovs-init.service | true [1]
ovsInit.gatewayInterface | Alpha  | eth0    | Ingress that is the API gateway                                       | eth0
mtu                      | uint32 | auto    | MTU value used for the pods                                           | 1300

[1] The OVS bridge is required. When disableOVSInit is true, the OVS bridge br-ex must be configured manually.

Important: If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.
Example custom ovn.yaml configuration file
ovsInit:
  disableOVSInit: true
  gatewayInterface: eth0
mtu: 1300
When disableOVSInit is set to true in the ovn.yaml config file, the br-ex OVS bridge must be manually configured.
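The exact steps for configuring br-ex manually depend on your host. One possible approach, shown here only as an unverified sketch, uses the NetworkManager OVS plugin (installed as the NetworkManager-ovs dependency); enp1s0 is a placeholder for your uplink interface:
$ sudo nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex
$ sudo nmcli conn add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
$ sudo nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex master ovs-port-br-ex con-name ovs-if-br-ex
$ sudo nmcli conn add type ovs-port conn.interface enp1s0 master br-ex con-name ovs-port-enp1s0
$ sudo nmcli conn add type ethernet conn.interface enp1s0 master ovs-port-enp1s0 con-name ovs-if-enp1s0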
1.3. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Use the following steps to restart the ovnkube-master pod.
Access the remote cluster by running the following command:
$ export KUBECONFIG=$PWD/kubeconfig
Find the name of the ovnkube-master pod that you want to restart by running the following command:
$ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')
Delete the ovnkube-master pod by running the following command:
$ oc -n openshift-ovn-kubernetes delete pod $pod
Confirm that a new ovnkube-master pod is running by using the following command:
$ oc get pods -n openshift-ovn-kubernetes
The listing of the running pods shows a new ovnkube-master pod name and age.
1.4. Deploying Red Hat build of MicroShift behind an HTTP(S) proxy
Deploy a Red Hat build of MicroShift cluster behind an HTTP(S) proxy when you want to add basic anonymity and security measures to your pods.
When deploying Red Hat build of MicroShift behind a proxy, you must configure the host operating system so that all components initiating HTTP(S) requests use the proxy service.
All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in Red Hat build of MicroShift.
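One common pattern is to inject the proxy settings into a workload as environment variables. The following is a sketch only; the deployment name myapp and the proxy address are placeholders:
$ oc set env deployment/myapp \
    HTTP_PROXY=http://proxy.example.com:3128 \
    HTTPS_PROXY=http://proxy.example.com:3128 \
    NO_PROXY=localhost,127.0.0.1,.cluster.local,.svc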
1.5. Using the RPM-OStree HTTP(S) proxy
To use the HTTP(S) proxy in RPM-OStree, set the http_proxy environment variable for the rpm-ostreed service.
Procedure
Add the following setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:
Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Next, reload the configuration settings and restart the service to apply your changes.
Reload the configuration settings by running the following command:
$ sudo systemctl daemon-reload
Restart the rpm-ostree service by running the following command:
$ sudo systemctl restart rpm-ostreed.service
1.6. Using a proxy in the CRI-O container runtime
To use an HTTP(S) proxy in CRI-O, you need to set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:
Environment=NO_PROXY="localhost,127.0.0.1"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Reload the configuration settings:
$ sudo systemctl daemon-reload
Restart the CRI-O service to apply the settings:
$ sudo systemctl restart crio
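You can confirm that the drop-in took effect by querying the environment that systemd passes to the service, for example:
$ sudo systemctl show crio --property=Environment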
1.7. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
- To see a snapshot of OVS interfaces from a running Red Hat build of MicroShift cluster, use the following command:
$ sudo ovs-vsctl show
Example OVS interfaces in a running cluster
9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93
    Bridge br-ex
        Port enp1s0
            Interface enp1s0
                type: system
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-br-ex_localhost.localdomain-to-br-int 1
            Interface patch-br-ex_localhost.localdomain-to-br-int
                type: patch
                options: {peer=patch-br-int-to-br-ex_localhost.localdomain} 2
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port patch-br-int-to-br-ex_localhost.localdomain
            Interface patch-br-int-to-br-ex_localhost.localdomain
                type: patch
                options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
        Port eebee1ce5568761
            Interface eebee1ce5568761 3
        Port b47b1995ada84f4
            Interface b47b1995ada84f4 4
        Port "3031f43d67c167f"
            Interface "3031f43d67c167f" 5
        Port br-int
            Interface br-int
                type: internal
        Port ovn-k8s-mp0 6
            Interface ovn-k8s-mp0
                type: internal
    ovs_version: "2.17.3"
1 2 The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain are OVS patch ports that connect br-ex and br-int.
3 4 5 The pod interfaces eebee1ce5568761, b47b1995ada84f4 and 3031f43d67c167f are named with the first 15 characters of the pod sandbox ID and are plugged into the br-int bridge.
6 The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
1.8. Deploying a load balancer for a workload
Red Hat build of MicroShift offers a built-in implementation of network load balancers. The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster administration role.
- You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Verify that your pods are running by running the following command:
$ oc get pods -A
Create the example namespace by running the following commands:
$ NAMESPACE=nginx-lb-test
$ oc create ns $NAMESPACE
The following example deploys three replicas of the test nginx application in your namespace:
$ oc apply -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  headers.conf: |
    add_header X-Server-IP \$server_addr always;
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: quay.io/packit/nginx-unprivileged
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-configs
          subPath: headers.conf
          mountPath: /etc/nginx/conf.d/headers.conf
        securityContext:
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
      volumes:
      - name: nginx-configs
        configMap:
          name: nginx
          items:
          - key: headers.conf
            path: headers.conf
EOF
You can verify that the three sample replicas started successfully by running the following command:
$ oc get pods -n $NAMESPACE
Create a LoadBalancer service for the nginx test application with the following sample command:
$ oc create -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 81
    targetPort: 8080
  selector:
    app: nginx
  type: LoadBalancer
EOF
Note: You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or Red Hat build of MicroShift components.
Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:
$ oc get svc -n $NAMESPACE
Example output
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
Verification
The following commands form five connections to the example nginx application using the external IP address of the LoadBalancer service configuration. The result of the commands is a list of those server IP addresses. Verify that the load balancer sends requests to all the running applications with the following commands:
EXTERNAL_IP=192.168.1.241
seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
The output of the previous command contains different IP addresses if the load balancer is successfully distributing the traffic to the applications, for example:
Example output
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
1.9. Blocking external access to the NodePort service on a specific host interface
OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.
Prerequisites
- You must have an account with root privileges.
Procedure
Set the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:
# export NODEPORT=30700
Set the INTERFACE_IP value to the IP address of the host interface that you want to block. For example:
# export INTERFACE_IP=192.168.150.33
Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:
$ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
List the new rule by running the following command:
$ sudo nft -a list chain ip nat PREROUTING
Example output
table ip nat {
    chain PREROUTING { # handle 1
        type nat hook prerouting priority dstnat; policy accept;
        tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134
        counter packets 108 bytes 18074 jump OVN-KUBE-ETP # handle 116
        counter packets 108 bytes 18074 jump OVN-KUBE-EXTERNALIP # handle 114
        counter packets 108 bytes 18074 jump OVN-KUBE-NODEPORT # handle 112
    }
}
Note: Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.
Remove the custom rule with the following sample command:
$ sudo nft -a delete rule ip nat PREROUTING handle 134
1.10. The multicast DNS protocol
You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
Red Hat build of MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on Red Hat build of MicroShift. The embedded DNS server allows .local domains exposed by Red Hat build of MicroShift to be discovered by other elements on the LAN.
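To check that a .local name published on the LAN resolves, you can query it from another machine with an mDNS-aware tool. A sketch, assuming the avahi-utils package is installed and using a hypothetical myapp.local hostname:
$ avahi-resolve-host-name -4 myapp.local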