Chapter 2. Understanding networking settings
Learn how to apply networking customization and default settings to MicroShift deployments. Each node is confined to a single machine and a single MicroShift instance, so each deployment requires its own configuration, pods, and settings.
Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and for securing network connections:

- A service such as NodePort
- API resources, such as Ingress and Route

By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can exchange traffic with each other, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
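As a sketch of the first of these options, a NodePort service manifest might look like the following. The service name, selector, and port numbers here are illustrative assumptions, not values taken from this chapter; the manifest is written to a temporary file so the sketch can run without a cluster:

```shell
# Write a hypothetical NodePort service manifest to a temporary file.
# The selector and ports are illustrative assumptions.
svc_manifest=$(mktemp)
cat > "$svc_manifest" <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080        # service port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # host port opened on the node (30000-32767 range)
EOF
# On a live cluster, apply it with: oc apply -f "$svc_manifest"
```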
2.1. Creating an OVN-Kubernetes configuration file
MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
1. To create your ovn.yaml file, run the following command:

   $ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml

2. To list the contents of the configuration file you created, run the following command:

   $ cat /etc/microshift/ovn.yaml

   Example YAML file with the default maximum transmission unit (MTU) value:

   mtu: 1400
To customize your configuration, you can change the MTU value. The table that follows provides details:

Table 2.1. Supported optional OVN-Kubernetes configurations for MicroShift

| Field | Type   | Default | Description                 | Example |
|-------|--------|---------|-----------------------------|---------|
| mtu   | uint32 | auto    | MTU value used for the pods | 1300    |
Important: If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.

Example custom ovn.yaml configuration file:

mtu: 1300
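The MTU change can also be scripted. The following sketch edits a temporary copy instead of /etc/microshift/ovn.yaml so it can run without root privileges; on a real host you would target the actual file and then reboot, as the Important note above requires:

```shell
# Sketch: set a custom MTU in a copy of ovn.yaml.
# Real target: /etc/microshift/ovn.yaml (root required); the host must be
# restarted afterwards for the updated setting to apply.
ovn_yaml=$(mktemp)
echo "mtu: 1400" > "$ovn_yaml"          # stand-in for the default file

sed -i 's/^mtu: .*/mtu: 1300/' "$ovn_yaml"
cat "$ovn_yaml"
```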
2.2. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites

- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster-admin role.
- A cluster is installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure

1. Access the remote cluster by running the following command:

   $ export KUBECONFIG=$PWD/kubeconfig

2. Find the name of the ovnkube-master pod that you want to restart by running the following command:

   $ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')

3. Delete the ovnkube-master pod by running the following command:

   $ oc -n openshift-ovn-kubernetes delete pod $pod

4. Confirm that a new ovnkube-master pod is running by using the following command:

   $ oc get pods -n openshift-ovn-kubernetes

   The listing of the running pods shows a new ovnkube-master pod name and age.
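The awk filter used to find the pod name can be checked offline against canned `oc get pods` output; the pod names and columns below are sample values:

```shell
# Simulate `oc get pods -n openshift-ovn-kubernetes` output and extract
# the ovnkube-master pod name the same way the procedure does.
sample_output='NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-n2shv   4/4     Running   0          5d
ovnkube-node-x7k2p     1/1     Running   0          5d'

pod=$(printf '%s\n' "$sample_output" | awk -F " " '/ovnkube-master/{print $1}')
echo "$pod"   # ovnkube-master-n2shv
```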
2.3. Deploying MicroShift behind an HTTP or HTTPS proxy
Deploy a MicroShift cluster behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.
When deploying MicroShift behind a proxy, you must configure the host operating system so that all components initiating HTTP or HTTPS requests use the proxy service.

All user-specific workloads and pods with egress traffic, such as access to cloud services, must be configured to use the proxy. MicroShift has no built-in transparent proxying of egress traffic.
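For example, a workload pod can be pointed at the proxy through environment variables in its spec. The image name, proxy address, and exclusion list below are assumptions for illustration only:

```shell
# Hypothetical pod spec that routes its egress HTTP(S) traffic through a proxy.
pod_manifest=$(mktemp)
cat > "$pod_manifest" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: proxied-workload
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # assumed image
    env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.cluster.local"
EOF
# On a live cluster, apply with: oc apply -f "$pod_manifest"
```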
2.4. Using the RPM-OStree HTTP or HTTPS proxy
To use the HTTP or HTTPS proxy in RPM-OStree, you must add a Service section to the configuration file and set the http_proxy environment variable for the rpm-ostreed service.
Procedure
1. Add this setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:

   [Service]
   Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"

   Next, reload the configuration settings and restart the service to apply your changes.

2. Reload the configuration settings by running the following command:

   $ sudo systemctl daemon-reload

3. Restart the rpm-ostreed service by running the following command:

   $ sudo systemctl restart rpm-ostreed.service
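The drop-in file can be rendered from variables with a small script. This sketch writes to a temporary directory rather than /etc/systemd/system, and the proxy credentials are placeholders:

```shell
# Sketch: render the rpm-ostreed proxy drop-in from variables.
# Real target: /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf (root).
PROXY_USER=user PROXY_PASSWORD=secret PROXY_SERVER=proxy.example.com PROXY_PORT=3128

dropin_dir=$(mktemp -d)
cat > "$dropin_dir/00-proxy.conf" <<EOF
[Service]
Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
EOF
# Then: sudo systemctl daemon-reload && sudo systemctl restart rpm-ostreed.service
```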
2.5. Using a proxy in the CRI-O container runtime
To use an HTTP or HTTPS proxy in CRI-O, you must add a Service section to the configuration file and set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
1. Create the directory for the configuration file if it does not exist:

   $ sudo mkdir /etc/systemd/system/crio.service.d/

2. Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:

   [Service]
   Environment=NO_PROXY="localhost,127.0.0.1"
   Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
   Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"

   Important: You must define the Service section of the configuration file for the environment variables, or the proxy settings fail to apply.

3. Reload the configuration settings:

   $ sudo systemctl daemon-reload

4. Restart the CRI-O service:

   $ sudo systemctl restart crio

5. Restart the MicroShift service to apply the settings:

   $ sudo systemctl restart microshift
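The CRI-O drop-in can be rendered the same way. In this sketch the NO_PROXY value is assembled from a host list; the temporary directory and the placeholder proxy URL are assumptions:

```shell
# Sketch: build NO_PROXY from a host list and render the CRI-O proxy drop-in.
# Real target: /etc/systemd/system/crio.service.d/00-proxy.conf (root).
no_proxy_hosts="localhost 127.0.0.1"
NO_PROXY=$(echo $no_proxy_hosts | tr ' ' ',')

crio_dropin=$(mktemp -d)
cat > "$crio_dropin/00-proxy.conf" <<EOF
[Service]
Environment=NO_PROXY="$NO_PROXY"
Environment=HTTP_PROXY="http://proxy.example.com:3128/"
Environment=HTTPS_PROXY="http://proxy.example.com:3128/"
EOF
# Then reload and restart: sudo systemctl daemon-reload
#                          sudo systemctl restart crio microshift
```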
Verification
1. Verify that pods are started by running the following command and examining the output:

   $ oc get all -A

2. Verify that MicroShift is able to pull container images by running the following command and examining the output:

   $ sudo crictl images
2.6. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
To see a snapshot of OVS interfaces from a running MicroShift cluster, use the following command:

$ sudo ovs-vsctl show

Example OVS interfaces in a running cluster:
1. The patch-br-ex_localhost.localdomain-to-br-int port is an OVS patch port that connects br-ex and br-int.
2. The patch-br-int-to-br-ex_localhost.localdomain port is an OVS patch port that connects br-ex and br-int.
3. The pod interface eebee1ce5568761 is named with the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
4. The pod interface b47b1995ada84f4 is named with the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
5. The pod interface 3031f43d67c167f is named with the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
6. The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
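The pod interface naming rule, where the name is the leading 15 characters of the pod sandbox ID, can be reproduced with standard text tools. The sandbox ID below is a made-up example chosen to match the first interface name above:

```shell
# Derive the OVS pod interface name from a (hypothetical) pod sandbox ID:
# the interface name is the first 15 characters of the ID.
sandbox_id="eebee1ce55687613ab0f9eb993a7a45406207cdd17b0a29b4c9c1df32b31b1c8"
iface=$(printf '%s' "$sandbox_id" | cut -c1-15)
echo "$iface"   # eebee1ce5568761
```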
2.7. The MicroShift LoadBalancer service for workloads
MicroShift has a built-in implementation of network load balancers that you can use for your workloads and applications within the cluster. You can create a LoadBalancer service by configuring a pod to interpret ingress rules and serve as an ingress controller. The following procedure gives an example of a deployment-based LoadBalancer service.
2.8. Deploying a load balancer for an application
The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file. Use this example as guidance for how to deploy load balancers.
Prerequisites

- The OpenShift CLI (oc) is installed.
- You installed a cluster on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
1. Verify that your pods are running by entering the following command:

   $ oc get pods -A

2. Create a namespace by running the following commands:

   $ NAMESPACE=<nginx-lb-test>

   Replace <nginx-lb-test> with the name of the application namespace that you want to create.

   $ oc create ns $NAMESPACE

3. Deploy the test application. The following example deploys three replicas of the test nginx application in the created namespace.

4. Verify that the three sample replicas started successfully by running the following command:

   $ oc get pods -n $NAMESPACE
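The three-replica test deployment referenced above might look like the following manifest. This is a sketch: the image name, labels, and container port are assumptions, not values given in this chapter:

```shell
# Hypothetical nginx test deployment with three replicas.
deploy_manifest=$(mktemp)
cat > "$deploy_manifest" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx   # assumed test image
        ports:
        - containerPort: 8080
EOF
# On a live cluster, apply with: oc apply -n "$NAMESPACE" -f "$deploy_manifest"
```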
5. Create a LoadBalancer service for the nginx test application.

   Note: You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or MicroShift components.

6. Verify that the service exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:

   $ oc get svc -n $NAMESPACE

   Example output:

   NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
   nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
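The LoadBalancer service for the nginx test application might be defined with a manifest along these lines. The port 81 value matches the example output above; the selector and targetPort are assumptions:

```shell
# Hypothetical LoadBalancer service for the nginx test application.
lb_manifest=$(mktemp)
cat > "$lb_manifest" <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 81         # host port; must not be occupied by other
                     # LoadBalancer services or MicroShift components
    targetPort: 8080
EOF
# On a live cluster, apply with: oc apply -n "$NAMESPACE" -f "$lb_manifest"
```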
Verification
The following commands form five connections to the example nginx application by using the external IP address of the LoadBalancer service configuration. The result is a list of the server IP addresses that answered.

Verify that the load balancer sends requests to all the running applications by running the following commands:

$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output of the previous command contains different IP addresses if the
LoadBalancer
service is successfully distributing the traffic to the applications, for example:Example output
X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43 X-Server-IP: 10.42.0.41 X-Server-IP: 10.42.0.43
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.9. Blocking external access to the NodePort service on a specific host interface

OVN-Kubernetes does not restrict the host interfaces from which a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface to restrict external access.
Prerequisites
- You must have an account with root privileges.
Procedure
1. Set the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:

   # export NODEPORT=30700

2. Set the INTERFACE_IP value to the IP address of the host interface that you want to block. For example:

   # export INTERFACE_IP=192.168.150.33

3. Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:

   $ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop

4. List the new rule by running the following command:

   $ sudo nft -a list table ip nat

   Note: Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.

5. Remove the custom rule by running the following sample command:

   $ sudo nft -a delete rule ip nat PREROUTING handle 134
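The handle number needed for the delete step can be extracted from `nft -a` output, which annotates each rule with a trailing `# handle N` comment. This sketch runs against a canned sample line rather than live nft output:

```shell
# Pull the handle number out of an `nft -a` rule line with sed.
sample_rule='tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134'
handle=$(printf '%s\n' "$sample_rule" | sed -n 's/.*# handle \([0-9]*\)$/\1/p')
echo "$handle"   # 134
# Then: sudo nft -a delete rule ip nat PREROUTING handle $handle
```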
2.10. The multicast DNS protocol

You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.

MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift. The embedded DNS server allows .local domains exposed by MicroShift to be discovered by other elements on the LAN.
2.11. Auditing exposed network ports

On MicroShift, a host port can be opened by a workload in the following cases. You can check logs to view the network services.
2.11.1. hostNetwork
When a pod is configured with the hostNetwork:true setting, the pod runs in the host network namespace. This configuration can independently open host ports. MicroShift component logs cannot be used to track this case; the ports are subject to firewalld rules. If the port opens in firewalld, you can view the port opening in the firewalld debug log.
Prerequisites
- You have root user access to your build host.
Procedure
1. Optional: Check that the hostNetwork:true parameter is set in your ovnkube-node pod by using the following example command:

   $ sudo oc get pod -n openshift-ovn-kubernetes <ovnkube-node-pod-name> -o json | jq -r '.spec.hostNetwork'

   Example output:

   true

2. Enable debug logging in firewalld by adding the following line to the /etc/sysconfig/firewalld file, for example by editing it with sudo vi:

   FIREWALLD_ARGS=--debug=10

3. Restart the firewalld service:

   $ sudo systemctl restart firewalld.service

4. To verify that the debug option was added properly, run the following command:

   $ sudo systemd-cgls -u firewalld.service

   The firewalld debug log is stored in the /var/log/firewalld path.

   Example logs for when the port open rule is added:

   2023-06-28 10:46:37 DEBUG1: config.getZoneByName('public')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.addPort('8080', 'tcp')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.getSettings()
   2023-06-28 10:46:37 DEBUG1: config.zone.7.update('...')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.Updated('public')

   Example logs for when the port open rule is removed:
2.11.2. hostPort

You can access the hostPort setting logs in MicroShift. The following logs are examples for the hostPort setting.
Procedure
Access the logs by running the following command:

$ journalctl -u crio | grep "local port"

Example CRI-O log when the host port is opened:

Jun 25 16:27:37 rhel92 crio[77216]: time="2023-06-25 16:27:37.033003098+08:00" level=info msg="Opened local port tcp:443"

Example CRI-O log when the host port is closed:

Jun 25 16:24:11 rhel92 crio[77216]: time="2023-06-25 16:24:11.342088450+08:00" level=info msg="Closing host port tcp:443"
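The opened protocol and port can be pulled out of such a log line with standard text tools. This sketch runs against a sample line like the one above rather than the live journal:

```shell
# Extract the protocol:port from a CRI-O "Opened local port" log message.
log_line='Jun 25 16:27:37 rhel92 crio[77216]: time="2023-06-25 16:27:37.033003098+08:00" level=info msg="Opened local port tcp:443"'
port=$(printf '%s\n' "$log_line" | sed -n 's/.*Opened local port \([a-z]*:[0-9]*\)".*/\1/p')
echo "$port"   # tcp:443
```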
2.11.3. NodePort and LoadBalancer services
OVN-Kubernetes opens host ports for the NodePort and LoadBalancer service types. These services add iptables rules that take the ingress traffic from the host port and forward it to the clusterIP. Logs for the NodePort and LoadBalancer services are presented in the following examples.
Procedure
1. To get the name of your ovnkube-master pod, run the following command:

   $ oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}'

   Example ovnkube-master pod name:

   ovnkube-master-n2shv

2. Access the NodePort and LoadBalancer services logs by using your ovnkube-master pod name in the following example command:

   $ oc logs -n openshift-ovn-kubernetes <ovnkube-master-pod-name> ovnkube-master | grep -E "OVN-KUBE-NODEPORT|OVN-KUBE-EXTERNALIP"

   NodePort service:

   Example log in the ovnkube-master container of the ovnkube-master pod when a host port is opened:

   I0625 09:07:00.992980 2118395 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0

   Example log in the ovnkube-master container of the ovnkube-master pod when a host port is closed:

   Deleting rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0

   LoadBalancer service:

   Example log in the ovnkube-master container of the ovnkube-master pod when a host port is opened:

   I0625 09:34:10.406067 128902 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0

   Example log in the ovnkube-master container of the ovnkube-master pod when a host port is closed:

   I0625 09:37:00.976953 128902 iptables.go:63] Deleting rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
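Fields such as the node port and the DNAT destination can be extracted from these log lines with standard text tools. This sketch runs on the sample NodePort line above rather than live pod logs:

```shell
# Pull the --dport and DNAT destination out of an OVN-KUBE-NODEPORT log line.
log='Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0'
dport=$(printf '%s\n' "$log" | sed -n 's/.*--dport \([0-9]*\).*/\1/p')
dest=$(printf '%s\n' "$log" | sed -n 's/.*--to-destination \([0-9.]*:[0-9]*\).*/\1/p')
echo "$dport $dest"   # 32718 10.96.178.142:8081
```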