Chapter 2. Understanding networking settings
Learn how to apply networking customization and default settings to MicroShift deployments. Each node is contained to a single machine and runs a single MicroShift instance, so each deployment requires individual configuration, pods, and settings.
Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:
- A service such as NodePort
- API resources, such as Ingress and Route
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
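For example, a minimal NodePort service that exposes such a pod might look like the following sketch. The service name, selector, and port values are examples only and are not part of any default MicroShift configuration:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
EOF
Clients outside the cluster can then reach the application at the node IP address on port 30080.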
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
2.1. Creating an OVN-Kubernetes configuration file
MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
To create your ovn.yaml file, run the following command:
$ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml
To list the contents of the configuration file you created, run the following command:
$ cat /etc/microshift/ovn.yaml
Example YAML file with default maximum transmission unit (MTU) value
mtu: 1400
To customize your configuration, you can change the MTU value. The table that follows provides details:
Table 2.1. Supported optional OVN-Kubernetes configurations for MicroShift

Field   Type     Default   Description                   Example
mtu     uint32   auto      MTU value used for the pods   1300
Important
If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.
Example custom ovn.yaml configuration file
mtu: 1300
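For example, a sketch of changing the MTU and applying it, assuming the configuration file already exists at /etc/microshift/ovn.yaml; the sed invocation is only one way to edit the value:
$ sudo sed -i 's/^mtu:.*/mtu: 1300/' /etc/microshift/ovn.yaml
$ sudo reboot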
2.2. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Use the following steps to restart the ovnkube-master pod.
Access the remote cluster by running the following command:
$ export KUBECONFIG=$PWD/kubeconfig
Find the name of the ovnkube-master pod that you want to restart by running the following command:
$ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')
Delete the ovnkube-master pod by running the following command:
$ oc -n openshift-ovn-kubernetes delete pod $pod
Confirm that a new ovnkube-master pod is running by using the following command:
$ oc get pods -n openshift-ovn-kubernetes
The listing of the running pods shows a new ovnkube-master pod name and age.
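If you prefer, the lookup and deletion steps can be combined into a single command; this sketch reuses the same awk filter shown above:
$ oc -n openshift-ovn-kubernetes delete pod "$(oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}')"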
2.3. Deploying MicroShift behind an HTTP or HTTPS proxy
Deploy a MicroShift cluster behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.
When deploying MicroShift behind a proxy, you must configure the host operating system so that all components initiating HTTP or HTTPS requests use the proxy service.
All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in MicroShift.
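For example, a workload pod that needs egress access through the proxy can set the proxy environment variables in its container spec. The following is a minimal sketch; the pod name, image, proxy address, and NO_PROXY list are placeholders that you must adapt to your environment:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: egress-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local"
EOF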
2.4. Using the RPM-OStree HTTP or HTTPS proxy
To use the HTTP or HTTPS proxy in RPM-OStree, you must add a Service section to the configuration file and set the http_proxy environment variable for the rpm-ostreed service.
Procedure
Add this setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:
[Service]
Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Next, reload the configuration settings and restart the service to apply your changes.
Reload the configuration settings by running the following command:
$ sudo systemctl daemon-reload
Restart the rpm-ostreed service by running the following command:
$ sudo systemctl restart rpm-ostreed.service
2.5. Using a proxy in the CRI-O container runtime
To use an HTTP or HTTPS proxy in CRI-O, you must add a Service section to the configuration file and set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
Create the directory for the configuration file if it does not exist:
$ sudo mkdir /etc/systemd/system/crio.service.d/
Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:
[Service]
Environment=NO_PROXY="localhost,127.0.0.1"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Important
You must define the Service section of the configuration file for the environment variables, or the proxy settings fail to apply.
Reload the configuration settings:
$ sudo systemctl daemon-reload
Restart the CRI-O service:
$ sudo systemctl restart crio
Restart the MicroShift service to apply the settings:
$ sudo systemctl restart microshift
Verification
Verify that pods are started by running the following command and examining the output:
$ oc get all -A
Verify that MicroShift is able to pull container images by running the following command and examining the output:
$ sudo crictl images
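As an additional check, you can pull a public image through the proxy; the image name below is only an example:
$ sudo crictl pull registry.access.redhat.com/ubi9/ubi-minimal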
2.6. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
To see a snapshot of OVS interfaces from a running MicroShift cluster, use the following command:
$ sudo ovs-vsctl show
Example OVS interfaces in a running cluster
9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-br-ex_localhost.localdomain-to-br-int 1
            Interface patch-br-ex_localhost.localdomain-to-br-int
                type: patch
                options: {peer=patch-br-int-to-br-ex_localhost.localdomain} 2
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port patch-br-int-to-br-ex_localhost.localdomain
            Interface patch-br-int-to-br-ex_localhost.localdomain
                type: patch
                options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
        Port eebee1ce5568761
            Interface eebee1ce5568761 3
        Port b47b1995ada84f4
            Interface b47b1995ada84f4 4
        Port "3031f43d67c167f"
            Interface "3031f43d67c167f" 5
        Port br-int
            Interface br-int
                type: internal
        Port ovn-k8s-mp0 6
            Interface ovn-k8s-mp0
                type: internal
    ovs_version: "2.17.3"
1. The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain are OVS patch ports that connect br-ex and br-int.
2. The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain are OVS patch ports that connect br-ex and br-int.
3. The pod interface eebee1ce5568761 is named with the first 15 bits of the pod sandbox ID and is plugged into the br-int bridge.
4. The pod interface b47b1995ada84f4 is named with the first 15 bits of the pod sandbox ID and is plugged into the br-int bridge.
5. The pod interface 3031f43d67c167f is named with the first 15 bits of the pod sandbox ID and is plugged into the br-int bridge.
6. The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
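If you only need the list of ports attached to a bridge rather than the full snapshot, you can query a single bridge; for example:
$ sudo ovs-vsctl list-ports br-int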
2.7. Deploying a load balancer for a workload
MicroShift has a built-in implementation of network load balancers. The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file. You can use this example as guidance for how to deploy load balancers for your workloads.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster administration role.
- You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Verify that your pods are running by running the following command:
$ oc get pods -A
Create the example namespace by running the following commands:
$ NAMESPACE=nginx-lb-test
$ oc create ns $NAMESPACE
The following example deploys three replicas of the test nginx application in your namespace:
$ oc apply -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  headers.conf: |
    add_header X-Server-IP \$server_addr always;
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: quay.io/packit/nginx-unprivileged
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-configs
          subPath: headers.conf
          mountPath: /etc/nginx/conf.d/headers.conf
        securityContext:
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
      volumes:
      - name: nginx-configs
        configMap:
          name: nginx
          items:
          - key: headers.conf
            path: headers.conf
EOF
You can verify that the three sample replicas started successfully by running the following command:
$ oc get pods -n $NAMESPACE
Create a LoadBalancer service for the nginx test application with the following sample commands:
$ oc create -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 81
    targetPort: 8080
  selector:
    app: nginx
  type: LoadBalancer
EOF
Note
You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or Red Hat build of MicroShift components.
Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:
$ oc get svc -n $NAMESPACE
Example output
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
Verification
The following command forms five connections to the example nginx application using the external IP address of the LoadBalancer service configuration. The result of the command is a list of those server IP addresses. Verify that the load balancer sends requests to all the running applications with the following command:
$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
The output of the previous command contains different IP addresses if the load balancer is successfully distributing the traffic to the applications, for example:
Example output
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
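When you are finished with the example, you can remove the test resources by deleting the namespace; for example:
$ oc delete namespace $NAMESPACE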
2.8. Blocking external access to the NodePort service on a specific host interface
OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.
Prerequisites
- You must have an account with root privileges.
Procedure
Change the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:
# export NODEPORT=30700
Change the INTERFACE_IP value to the IP address from the host interface that you want to block. For example:
# export INTERFACE_IP=192.168.150.33
Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:
$ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
List the new rule by running the following command:
$ sudo nft -a list chain ip nat PREROUTING
table ip nat {
    chain PREROUTING { # handle 1
        type nat hook prerouting priority dstnat; policy accept;
        tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134
        counter packets 108 bytes 18074 jump OVN-KUBE-ETP # handle 116
        counter packets 108 bytes 18074 jump OVN-KUBE-EXTERNALIP # handle 114
        counter packets 108 bytes 18074 jump OVN-KUBE-NODEPORT # handle 112
    }
}
Note
Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.
Remove the custom rule with the following sample command:
$ sudo nft -a delete rule ip nat PREROUTING handle 134
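While the drop rule is in place, you can optionally confirm from a client outside the node that connections to the blocked interface time out. The following sketch uses the example values set earlier; substitute your own interface IP address and node port:
$ curl --connect-timeout 5 http://192.168.150.33:30700
The command is expected to time out because the packets are dropped before they reach the service.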
2.9. The multicast DNS protocol
You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift. The embedded DNS server allows .local domains exposed by MicroShift to be discovered by other elements on the LAN.
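For example, from another machine on the same LAN you can test mDNS resolution of a name exposed by MicroShift. This sketch assumes the avahi-tools package is installed on the client and that microshift.local is only an example hostname:
$ avahi-resolve-host-name microshift.local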
2.10. Auditing exposed network ports
On MicroShift, a workload can open host ports in the following cases. You can check the logs to view the network services.
2.10.1. hostNetwork
When a pod is configured with the hostNetwork:true setting, the pod runs in the host network namespace. This configuration can independently open host ports. MicroShift component logs cannot be used to track this case; the ports are subject to firewalld rules. If the port opens in firewalld, you can view the port opening in the firewalld debug log.
Prerequisites
- You have root user access to your build host.
Procedure
Optional: You can check that the hostNetwork:true parameter is set in your ovnkube-node pod by using the following example command:
$ sudo oc get pod -n openshift-ovn-kubernetes <ovnkube-node-pod-name> -o json | jq -r '.spec.hostNetwork'
Example output
true
Enable debug logging in firewalld by adding the following argument to the /etc/sysconfig/firewalld file:
$ sudo vi /etc/sysconfig/firewalld
FIREWALLD_ARGS=--debug=10
Restart the firewalld service:
$ sudo systemctl restart firewalld.service
To verify that the debug option was added properly, run the following command:
$ sudo systemd-cgls -u firewalld.service
The firewalld debug log is stored in the /var/log/firewalld path.
Example logs for when the port open rule is added:
2023-06-28 10:46:37 DEBUG1: config.getZoneByName('public')
2023-06-28 10:46:37 DEBUG1: config.zone.7.addPort('8080', 'tcp')
2023-06-28 10:46:37 DEBUG1: config.zone.7.getSettings()
2023-06-28 10:46:37 DEBUG1: config.zone.7.update('...')
2023-06-28 10:46:37 DEBUG1: config.zone.7.Updated('public')
Example logs for when the port open rule is removed:
2023-06-28 10:47:57 DEBUG1: config.getZoneByName('public')
2023-06-28 10:47:57 DEBUG2: config.zone.7.Introspect()
2023-06-28 10:47:57 DEBUG1: config.zone.7.removePort('8080', 'tcp')
2023-06-28 10:47:57 DEBUG1: config.zone.7.getSettings()
2023-06-28 10:47:57 DEBUG1: config.zone.7.update('...')
2023-06-28 10:47:57 DEBUG1: config.zone.7.Updated('public')
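Besides reading the debug log, you can also list the ports that are currently open in firewalld; for example:
$ sudo firewall-cmd --list-ports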
2.10.2. hostPort
You can access the hostPort setting logs in MicroShift. The following logs are examples for the hostPort setting:
Procedure
You can access the logs by running the following command:
$ journalctl -u crio | grep "local port"
Example CRI-O logs when the host port is opened:
Jun 25 16:27:37 rhel92 crio[77216]: time="2023-06-25 16:27:37.033003098+08:00" level=info msg="Opened local port tcp:443"
Example CRI-O logs when the host port is closed:
Jun 25 16:24:11 rhel92 crio[77216]: time="2023-06-25 16:24:11.342088450+08:00" level=info msg="Closing host port tcp:443"
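For reference, a pod definition that causes CRI-O to open a host port might look like the following sketch; the pod name, image, and port numbers are illustrative only:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example
spec:
  containers:
  - name: web
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    ports:
    - containerPort: 8080
      hostPort: 443
EOF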
2.10.3. NodePort and LoadBalancer service
OVN-Kubernetes opens host ports for the NodePort and LoadBalancer service types. These services add iptables rules that take the ingress traffic from the host port and forward it to the clusterIP. Logs for the NodePort and LoadBalancer services are presented in the following examples:
Procedure
To access the name of your ovnkube-master pods, run the following command:
$ oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}'
Example ovnkube-master pod name
ovnkube-master-n2shv
You can access the NodePort and LoadBalancer service logs by using your ovnkube-master pod and running the following example command:
$ oc logs -n openshift-ovn-kubernetes <ovnkube-master-pod-name> ovnkube-master | grep -E "OVN-KUBE-NODEPORT|OVN-KUBE-EXTERNALIP"
NodePort service:
Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is open:
I0625 09:07:00.992980 2118395 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0
Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is closed:
Deleting rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0
LoadBalancer service:
Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is open:
I0625 09:34:10.406067 128902 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is closed:
I0625 09:37:00.976953 128902 iptables.go:63] Deleting rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
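You can also inspect the resulting iptables rules directly on the host; the chain names below are taken from the log examples above:
$ sudo iptables -t nat -L OVN-KUBE-NODEPORT -n
$ sudo iptables -t nat -L OVN-KUBE-EXTERNALIP -n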