MicroShift is Technology Preview software only.
For more information about the support scope of Red Hat Technology Preview software, see Technology Preview Support Scope.
Networking
Configuring and managing cluster networking
Abstract
Chapter 1. Understanding networking settings
Learn how to apply networking customization and default settings to Red Hat build of MicroShift deployments. Each node is contained to a single machine and a single Red Hat build of MicroShift instance, so each deployment requires individual configuration, pods, and settings.
Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:
- A service such as NodePort
- API resources, such as Ingress and Route
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
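As an illustration, a minimal NodePort service manifest could look like the following sketch; the service name, label selector, and port numbers here are hypothetical and must be adapted to your workload:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport    # hypothetical service name
spec:
  type: NodePort
  selector:
    app: example            # assumed label on the target pods
  ports:
  - port: 8080              # cluster-internal service port
    targetPort: 8080        # container port receiving the traffic
    nodePort: 30080         # host port from the default 30000-32767 range
```

Clients outside the cluster can then reach the selected pods at `<node-IP>:30080`.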
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
1.1. About the OVN-Kubernetes network plugin
OVN-Kubernetes is the default networking solution for Red Hat build of MicroShift deployments. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN). The OVN-Kubernetes Container Network Interface (CNI) plugin is the network plugin for the cluster. A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node. OVN configures OVS on the node to implement the declared network configuration.
1.1.1. Network topology
OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to a smaller value than the MTU of the physical interface on the host. This smaller MTU makes room for the required information that is added to the tunnel header before it is transmitted.
OVS runs as a systemd service on the Red Hat build of MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.
Red Hat build of MicroShift network topology
1.1.1.1. Description of the OVN logical components of the virtualized network
- OVN node switch
  A virtual switch named <node-name>. The OVN node switch is named according to the hostname of the node. In this example, the node-name is microshift-dev.
- OVN cluster router
  A virtual router named ovn_cluster_router, also known as the distributed router. In this example, the cluster network is 10.42.0.0/16.
- OVN join switch
  A virtual switch named join.
- OVN gateway router
  A virtual router named GR_<node-name>, also known as the external gateway router.
- OVN external switch
  A virtual switch named ext_<node-name>.
1.1.1.2. Description of the connections in the network topology figure
- The north-south traffic between the network service device enp1s0 and the OVN external switch ext_microshift-dev is provided through the OVS patch port by the gateway bridge br-ex.
- The OVN gateway router GR_microshift-dev is connected to the external network switch ext_microshift-dev through the logical router port 4. Port 4 is attached with the node IP address 192.168.122.14.
- The join switch join connects the OVN gateway router GR_microshift-dev to the OVN cluster router ovn_cluster_router. The IP address range is 100.64.0.0/16.
  - The OVN gateway router GR_microshift-dev connects to the OVN join switch join through the logical router port 3. Port 3 is attached with the internal IP address 100.64.0.2.
  - The OVN cluster router ovn_cluster_router connects to the join switch join through the logical router port 2. Port 2 is attached with the internal IP address 100.64.0.1.
- The OVN cluster router ovn_cluster_router connects to the node switch microshift-dev through the logical router port 1. Port 1 is attached with the OVN cluster network IP address 10.42.0.1.
- The east-west traffic between the pods and the network service is provided by the OVN cluster router ovn_cluster_router and the node switch microshift-dev. The IP address range is 10.42.0.0/24.
- The east-west traffic between pods is provided by the node switch microshift-dev without network address translation (NAT).
- The north-south traffic between the pods and the external network is provided by the OVN cluster router ovn_cluster_router and the host network. This router is connected through the ovn-kubernetes management port ovn-k8s-mp0, with the IP address 10.42.0.2.
- All the pods are connected to the OVN node switch through their interfaces. In this example, Pod 1 and Pod 2 are connected to the node switch through Interface 1 and Interface 2.
1.1.2. IP forward
The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a cluster fails if ip_forward is disabled.
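You can confirm the current state of this parameter on the host by reading the flag directly from procfs; a value of 1 means forwarding is enabled:

```shell
# Read the kernel IPv4 forwarding flag; prints 1 when enabled, 0 when disabled.
cat /proc/sys/net/ipv4/ip_forward
```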
1.1.3. Network performance optimizations
By default, three performance optimizations are applied to OVS services to minimize resource consumption:
- CPU affinity to ovs-vswitchd.service and ovsdb-server.service
- no-mlockall to openvswitch.service
- Limit handler and revalidator threads to ovs-vswitchd.service
1.1.4. Network features
Networking features available with Red Hat build of MicroShift 4.13 include:
- Kubernetes network policy
- Dynamic node IP
- Cluster network on specified host interface
Networking features not available with Red Hat build of MicroShift 4.13:
- Egress IP/firewall/QoS: disabled
- Hybrid networking: not supported
- IPsec: not supported
- Hardware offload: not supported
1.1.5. Red Hat build of MicroShift networking components and services
This brief overview describes networking components and their operation in Red Hat build of MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service.
- NetworkManager
  NetworkManager is required to set up the initial gateway bridge on the Red Hat build of MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies to the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in Red Hat build of MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package.
- microshift-ovs-init
  The microshift-ovs-init.service is installed by the microshift-networking RPM package as a dependent systemd service to microshift.service. It is responsible for setting up the OVS gateway bridge.
- OVN containers
  Two OVN-Kubernetes daemon sets are rendered and applied by Red Hat build of MicroShift.
  - ovnkube-master: includes the northd, nbdb, sbdb, and ovnkube-master containers.
  - ovnkube-node: includes the OVN-Controller container.
  After Red Hat build of MicroShift boots, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace.
- Packaging
  OVN-Kubernetes manifests and startup logic are built into Red Hat build of MicroShift. The systemd services and configurations included in the microshift-networking RPM are:
  - /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service
  - /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service
  - /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovsdb-server.service
  - /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service
  - /usr/bin/configure-ovs.sh for microshift-ovs-init.service
  - /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service
1.1.6. Bridge mappings
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through a virtual Ethernet (veth) pair: one end of the pair is attached to the pod network namespace, and the other end is attached to the br-int bridge.
1.1.6.1. Primary gateway interface
You can specify the desired host interface name as gatewayInterface in the ovn.yaml configuration file. The specified interface is added to the OVS bridge br-ex, which acts as the gateway bridge for the CNI network.
1.2. Creating an OVN-Kubernetes configuration file
Red Hat build of MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
To create your ovn.yaml file, run the following command:
$ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml
To list the contents of the configuration file you created, run the following command:
$ cat /etc/microshift/ovn.yaml.default
Example yaml configuration file with default values
ovsInit:
  disableOVSInit: false
  gatewayInterface: "" 1
mtu: 1400
1 The default value is an empty string, which means "not specified". The CNI network plugin auto-detects the interface with the default route.
To customize your configuration, use the following table that lists the valid values you can use:
Table 1.1. Supported optional OVN-Kubernetes configurations for Red Hat build of MicroShift
| Field | Type | Default | Description | Example |
|---|---|---|---|---|
| ovsInit.disableOVSInit | bool | false | Skip configuring the OVS bridge br-ex in microshift-ovs-init.service | true [1] |
| ovsInit.gatewayInterface | Alpha | eth0 | Ingress that is the API gateway | eth0 |
| mtu | uint32 | auto | MTU value used for the pods | 1300 |
[1] The OVS bridge is required. When disableOVSInit is true, the OVS bridge br-ex must be configured manually.
Important
If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.
Example custom ovn.yaml configuration file
ovsInit:
  disableOVSInit: true
  gatewayInterface: eth0
mtu: 1300
When disableOVSInit is set to true in the ovn.yaml config file, the br-ex OVS bridge must be configured manually.
1.3. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Access the remote cluster by running the following command:
$ export KUBECONFIG=$PWD/kubeconfig
Find the name of the ovnkube-master pod that you want to restart by running the following command:
$ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')
Delete the ovnkube-master pod by running the following command:
$ oc -n openshift-ovn-kubernetes delete pod $pod
Confirm that a new ovnkube-master pod is running by using the following command:
$ oc get pods -n openshift-ovn-kubernetes
The listing of the running pods shows a new ovnkube-master pod name and age.
1.4. Deploying Red Hat build of MicroShift behind an HTTP(S) proxy
Deploy a Red Hat build of MicroShift cluster behind an HTTP(S) proxy when you want to add basic anonymity and security measures to your pods.
You must configure the host operating system to use the proxy service with all components initiating HTTP(S) requests when deploying Red Hat build of MicroShift behind a proxy.
All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in Red Hat build of MicroShift.
1.5. Using the RPM-OStree HTTP(S) proxy
To use the HTTP(S) proxy in RPM-OStree, set the http_proxy environment variable for the rpm-ostreed service.
Procedure
Add this setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:
Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Next, reload the configuration settings and restart the service to apply your changes.
Reload the configuration settings by running the following command:
$ sudo systemctl daemon-reload
Restart the rpm-ostreed service by running the following command:
$ sudo systemctl restart rpm-ostreed.service
1.6. Using a proxy in the CRI-O container runtime
To use an HTTP(S) proxy in CRI-O, you need to set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:
Environment=NO_PROXY="localhost,127.0.0.1"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Reload the configuration settings by running the following command:
$ sudo systemctl daemon-reload
Restart the CRI-O service to apply the settings by running the following command:
$ sudo systemctl restart crio
1.7. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
- To see a snapshot of OVS interfaces from a running Red Hat build of MicroShift cluster, use the following command:
$ sudo ovs-vsctl show
Example OVS interfaces in a running cluster
1 2 The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain ports are OVS patch ports that connect br-ex and br-int.
3 4 5 The pod interfaces eebee1ce5568761, b47b1995ada84f4, and 3031f43d67c167f are named with the first 15 characters of the pod sandbox ID and are plugged into the br-int bridge.
6 The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
1.8. Deploying a load balancer for a workload
Red Hat build of MicroShift offers a built-in implementation of network load balancers. The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster administration role.
- You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Verify that your pods are running by running the following command:
$ oc get pods -A
Create the example namespace by running the following commands:
$ NAMESPACE=nginx-lb-test
$ oc create ns $NAMESPACE
Deploy three replicas of the test nginx application in your namespace.
Verify that the three sample replicas started successfully by running the following command:
$ oc get pods -n $NAMESPACE
Create a LoadBalancer service for the nginx test application.
Note
You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or Red Hat build of MicroShift components.
Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:
$ oc get svc -n $NAMESPACE
Example output
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
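The deployment and service manifests for this procedure are not included in this excerpt. A minimal sketch might look like the following; the container image, labels, and ports are assumptions, and note that the X-Server-IP response header used in the verification step requires a custom nginx configuration that is not shown here:

```yaml
# Hypothetical test deployment: three nginx replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx        # assumed image name
        ports:
        - containerPort: 80
---
# LoadBalancer service exposing the deployment on host port 81.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 81                # host port; matches the example output above
    targetPort: 80
```

A manifest like this could be applied with `oc apply -n $NAMESPACE -f <file>`.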
Verification
The following command forms five connections to the example nginx application using the external IP address of the LoadBalancer service configuration. The result of the command is a list of those server IP addresses. Verify that the load balancer sends requests to all the running applications with the following command:
$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
The output of the previous command contains different IP addresses if the load balancer is successfully distributing the traffic to the applications, for example:
Example output
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
1.9. Blocking external access to the NodePort service on a specific host interface
OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.
Prerequisites
- You must have an account with root privileges.
Procedure
Change the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:
# export NODEPORT=30700
Change the INTERFACE_IP value to the IP address from the host interface that you want to block. For example:
# export INTERFACE_IP=192.168.150.33
Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:
$ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
List the new rule by running the following command:
$ sudo nft -a list chain ip nat PREROUTING
Note
Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.
Remove the custom rule with the following sample command:
$ sudo nft -a delete rule ip nat PREROUTING handle 134
1.10. The multicast DNS protocol
You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
Red Hat build of MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on Red Hat build of MicroShift. The embedded DNS server allows .local domains exposed by Red Hat build of MicroShift to be discovered by other elements on the LAN.
Chapter 2. Using a firewall
Firewalls are not required in Red Hat build of MicroShift, but using a firewall can prevent undesired access to the Red Hat build of MicroShift API.
2.1. About network traffic through the firewall
Firewalld is a networking service that runs in the background and responds to connection requests, creating a dynamic customizable host-based firewall. If you are using Red Hat Enterprise Linux (RHEL) for Edge with Red Hat build of MicroShift, firewalld should already be installed and you just need to configure it. Details are provided in procedures that follow. Overall, you must explicitly allow the following OVN-Kubernetes traffic when the firewalld service is running:
- CNI pod to CNI pod
- CNI pod to Host-Network pod
- Host-Network pod to Host-Network pod

CNI pod
  The Kubernetes pod that uses the CNI network
Host-Network pod
  The Kubernetes pod that uses the host network

You can configure the firewalld service by using the following procedures. In most cases, firewalld is part of Red Hat Enterprise Linux (RHEL) installations. If you do not have firewalld, you can install it with the simple procedure in this section.
Red Hat build of MicroShift pods must have access to the internal CoreDNS component and API servers.
2.2. Installing the firewalld service
If you are using RHEL for Edge, firewalld should be installed. To use the service, you can simply configure it. The following procedure can be used if you do not have firewalld, but want to use it.
Install and run the firewalld service for Red Hat build of MicroShift by using the following steps.
Procedure
Optional: Check for firewalld on your system by running the following command:
$ rpm -q firewalld
If the firewalld service is not installed, run the following command:
$ sudo dnf install -y firewalld
To start the firewall, run the following command:
$ sudo systemctl enable firewalld --now
2.3. Required firewall settings
An IP address range for the cluster network must be enabled during firewall configuration. You can use the default values or customize the IP address range. If you choose to customize the cluster network IP address range from the default 10.42.0.0/16 setting, you must also use the same custom range in the firewall configuration.
| IP Range | Firewall rule required | Description |
|---|---|---|
| 10.42.0.0/16 | No | Host network pod access to other pods |
| 169.254.169.1 | Yes | Host network pod access to Red Hat build of MicroShift API server |
The following are examples of commands for settings that are mandatory for firewall configuration:
Example commands
Configure host network pod access to other pods:
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
Configure host network pod access to services backed by host endpoints, such as the Red Hat build of MicroShift API:
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
2.4. Using optional port settings
The Red Hat build of MicroShift firewall service allows optional port settings.
Procedure
To add customized ports to your firewall configuration, use the following command syntax:
$ sudo firewall-cmd --permanent --zone=public --add-port=<port number>/<port protocol>
Table 2.2. Optional ports
| Port(s) | Protocol(s) | Description |
|---|---|---|
| 80 | TCP | HTTP port used to serve applications through the OpenShift Container Platform router. |
| 443 | TCP | HTTPS port used to serve applications through the OpenShift Container Platform router. |
| 5353 | UDP | mDNS service to respond for OpenShift Container Platform route mDNS hosts. |
| 30000-32767 | TCP | Port range reserved for NodePort services; can be used to expose applications on the LAN. |
| 30000-32767 | UDP | Port range reserved for NodePort services; can be used to expose applications on the LAN. |
| 6443 | TCP | HTTPS API port for the Red Hat build of MicroShift API. |
The following are examples of commands used when external access through the firewall is required for services running on Red Hat build of MicroShift, such as port 6443 for the API server, or ports 80 and 443 for applications exposed through the router.
Example commands
Configuring a port for the Red Hat build of MicroShift API server:
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
Configuring ports for applications exposed through the router:
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
2.5. Allowing network traffic through the firewall
You can allow network traffic through the firewall by configuring the IP address range and inserting the DNS server to allow internal traffic from pods through the network gateway.
Procedure
Use one of the following commands to set the IP address range:
Configure the IP address range with default values by running the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
Configure the IP address range with custom values by running the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP range>
To allow internal traffic from pods through the network gateway, run the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
2.5.1. Applying firewall settings
To apply firewall settings, use the following one-step procedure:
Procedure
- After you have finished configuring network access through the firewall, run the following command to restart the firewall and apply the settings:
$ sudo firewall-cmd --reload
2.6. Verifying firewall settings
After you have restarted the firewall, you can verify your settings by listing them.
Procedure
To verify rules added in the default public zone, such as ports-related rules, run the following command:
$ sudo firewall-cmd --list-all
To verify rules added in the trusted zone, such as IP-range related rules, run the following command:
$ sudo firewall-cmd --zone=trusted --list-all
2.7. Known firewall issue
- To avoid breaking traffic flows with a firewall reload or restart, execute firewall commands before starting Red Hat build of MicroShift. The CNI driver in Red Hat build of MicroShift makes use of iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptables rules breaks traffic flows. If firewall commands have to be executed after Red Hat build of MicroShift is running, manually restart the ovnkube-master pod in the openshift-ovn-kubernetes namespace to reset the rules controlled by the CNI driver.