Chapter 2. Understanding networking settings
Learn how to apply networking customization and default settings to MicroShift deployments. Because each node is contained to a single machine running a single MicroShift instance, each deployment requires its own configuration, pods, and settings.
Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and for securing network connections:

- A service, such as NodePort
- API resources, such as Ingress and Route
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can communicate with each other over the network, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
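For reference, a NodePort service is declared with a minimal manifest like the following sketch; the service name, labels, and port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example        # illustrative label; must match your pod labels
  ports:
  - port: 8080          # service port inside the cluster
    targetPort: 8080    # container port on the pod
    nodePort: 30700     # host port in the default 30000-32767 range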
2.1. Creating an OVN-Kubernetes configuration file
MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.
Procedure
To create your ovn.yaml file, run the following command:

$ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml

To list the default values in the configuration file, run the following command:

$ cat /etc/microshift/ovn.yaml.default

Example YAML file with the default maximum transmission unit (MTU) value
mtu: 1400

To customize your configuration, you can change the MTU value. The table that follows provides details:
Table 2.1. Supported optional OVN-Kubernetes configurations for MicroShift

Field | Type   | Default | Description                  | Example
mtu   | uint32 | auto    | MTU value used for the pods  | 1300
Important: If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.

Example custom ovn.yaml configuration file

mtu: 1300
2.2. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Use the following steps to restart the ovnkube-master pod.
Access the remote cluster by running the following command:
$ export KUBECONFIG=$PWD/kubeconfig

Find the name of the ovnkube-master pod that you want to restart by running the following command:

$ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')

Delete the ovnkube-master pod by running the following command:

$ oc -n openshift-ovn-kubernetes delete pod $pod

Confirm that a new ovnkube-master pod is running by using the following command:

$ oc get pods -n openshift-ovn-kubernetes

The listing of the running pods shows a new ovnkube-master pod name and age.
2.3. Deploying MicroShift behind an HTTP(S) proxy
Deploy a MicroShift cluster behind an HTTP(S) proxy when you want to add basic anonymity and security measures to your pods.
When deploying MicroShift behind a proxy, you must configure the host operating system so that all components that initiate HTTP(S) requests use the proxy service.
All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in MicroShift.
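For example, a workload can be pointed at the proxy by setting the standard proxy environment variables in its container specification. In the following sketch, the image and proxy address are placeholders; 10.42.0.0/16 and 10.43.0.0/16 are the default MicroShift pod and service networks, excluded here so that in-cluster traffic bypasses the proxy:

apiVersion: v1
kind: Pod
metadata:
  name: proxy-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    env:
    - name: HTTP_PROXY
      value: http://proxy.example.com:3128/  # placeholder proxy address
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128/
    - name: NO_PROXY
      value: localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16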
2.4. Using the RPM-OStree HTTP(S) proxy
To use the HTTP(S) proxy in RPM-OStree, you must add a Service section to the configuration file and set the http_proxy environment variable for the rpm-ostreed service.
Procedure
Add this setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:

[Service]
Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"

Next, reload the configuration settings and restart the service to apply your changes.
Reload the configuration settings by running the following command:
$ sudo systemctl daemon-reload

Restart the rpm-ostreed service by running the following command:

$ sudo systemctl restart rpm-ostreed.service
2.5. Using a proxy in the CRI-O container runtime
To use an HTTP(S) proxy in CRI-O, you must add a Service section to the configuration file and set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.
Procedure
Create the directory for the configuration file if it does not exist:
$ sudo mkdir -p /etc/systemd/system/crio.service.d/

Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:

[Service]
Environment=NO_PROXY="localhost,127.0.0.1"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"

Important: You must define the Service section of the configuration file for the environment variables, or the proxy settings fail to apply.

Reload the configuration settings:
$ sudo systemctl daemon-reload

Restart the CRI-O service:

$ sudo systemctl restart crio

Restart the MicroShift service to apply the settings:

$ sudo systemctl restart microshift
Verification
Verify that pods are started by running the following command and examining the output:
$ oc get all -A

Verify that MicroShift is able to pull container images by running the following command and examining the output:

$ sudo crictl images
2.6. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure
To see a snapshot of OVS interfaces from a running MicroShift cluster, use the following command:
$ sudo ovs-vsctl show

Example OVS interfaces in a running cluster
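The exact output is specific to each deployment. A representative snapshot, constructed here to use the interface names described in the list that follows, resembles this; UUIDs, host names, and pod interface IDs vary:

<ovs-uuid>
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-br-ex_localhost.localdomain-to-br-int
            Interface patch-br-ex_localhost.localdomain-to-br-int
                type: patch
                options: {peer=patch-br-int-to-br-ex_localhost.localdomain}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-br-int-to-br-ex_localhost.localdomain
            Interface patch-br-int-to-br-ex_localhost.localdomain
                type: patch
                options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
        Port ovn-k8s-mp0
            Interface ovn-k8s-mp0
                type: internal
        Port eebee1ce5568761
            Interface eebee1ce5568761
        Port b47b1995ada84f4
            Interface b47b1995ada84f4
        Port "3031f43d67c167f"
            Interface "3031f43d67c167f"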
In the example output:

- The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain interfaces are OVS patch ports that connect br-ex and br-int.
- The pod interfaces eebee1ce5568761, b47b1995ada84f4, and 3031f43d67c167f are each named with the first 15 characters of the pod sandbox ID and are plugged into the br-int bridge.
- The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
2.7. Deploying a load balancer for a workload
MicroShift has a built-in implementation of network load balancers. The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file. You can use this example as guidance for how to deploy load balancers for your workloads.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster administration role.
- You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
- The KUBECONFIG environment variable is set.
Procedure
Verify that your pods are running by running the following command:
$ oc get pods -A

Create the example namespace by running the following commands:
$ NAMESPACE=nginx-lb-test

$ oc create ns $NAMESPACE

The following example deploys three replicas of the test nginx application in your namespace:
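The manifest below is a minimal sketch for this step: the image is a placeholder, and the verification at the end of this procedure assumes the test application answers requests with an X-Server-IP response header:

$ oc apply -n $NAMESPACE -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.example.com/nginx-test:latest  # placeholder test image
        ports:
        - containerPort: 8080                          # assumed application port
EOF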
You can verify that the three sample replicas started successfully by running the following command:
$ oc get pods -n $NAMESPACE

Create a LoadBalancer service for the nginx test application with the following sample commands:
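This manifest is a sketch consistent with the example output that follows: port 81 matches the output below, and the target port is assumed to match the container port of the test deployment:

$ oc create -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 81          # host port exposed by the load balancer
    targetPort: 8080  # assumed container port of the test application
  selector:
    app: nginx
EOF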
Note: You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or Red Hat build of MicroShift components.

Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:
$ oc get svc -n $NAMESPACE

Example output

NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
Verification
The following commands form five connections to the example nginx application using the external IP address of the LoadBalancer service configuration. The result of the commands is a list of those server IP addresses. Verify that the load balancer sends requests to all the running applications with the following commands:

$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP

The output of the previous command contains different IP addresses if the load balancer is successfully distributing the traffic to the applications, for example:
Example output
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
2.8. Blocking external access to the NodePort service on a specific host interface
OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.
Prerequisites
- You must have an account with root privileges.
Procedure
Change the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:

# export NODEPORT=30700

Change the INTERFACE_IP value to the IP address from the host interface that you want to block. For example:

# export INTERFACE_IP=192.168.150.33

Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:

$ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop

List the new rule by running the following command:
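For example, the following command lists the nat table PREROUTING chain, including rule handles:

$ sudo nft -a list chain ip nat PREROUTING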
Note: Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.

Remove the custom rule with the following sample command:

$ sudo nft -a delete rule ip nat PREROUTING handle 134
2.9. The multicast DNS protocol
You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift. The embedded DNS server allows .local domains exposed by MicroShift to be discovered by other elements on the LAN.
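For example, if a route is exposed with the host name myapp.local, another host on the same LAN that runs an mDNS resolver such as Avahi can resolve it without any change to the authoritative DNS server. The host name and the address in the output are illustrative:

$ avahi-resolve-host-name myapp.local
myapp.local	192.168.1.241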