Chapter 2. Understanding networking settings

Learn how to apply networking customization and default settings to MicroShift deployments. Each node is contained to a single machine running a single MicroShift instance, so each deployment requires its own configuration, pods, and settings.

Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:

  • A service such as NodePort
  • API resources, such as Ingress and Route

By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
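
For example, a minimal NodePort Service sketch that exposes pods labeled app: hello outside the cluster might look like the following; the name, selector, and port values are illustrative only:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-nodeport
    spec:
      type: NodePort
      selector:
        app: hello
      ports:
      - port: 8080
        targetPort: 8080
        nodePort: 30080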

2.1. Creating an OVN-Kubernetes configuration file

MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to /etc/microshift/ovn.yaml. An example file is provided for your configuration.

Procedure

  1. To create your ovn.yaml file, run the following command:

    $ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml
  2. To list the contents of the configuration file you created, run the following command:

    $ cat /etc/microshift/ovn.yaml

    Example YAML file with default maximum transmission unit (MTU) value

    mtu: 1400

  3. To customize your configuration, you can change the MTU value. The table that follows provides details:

    Table 2.1. Supported optional OVN-Kubernetes configurations for MicroShift

    Field   Type     Default   Description                   Example
    mtu     uint32   auto      MTU value used for the pods   1300

    Important

    If you change the mtu configuration value in the ovn.yaml file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.

    Example custom ovn.yaml configuration file

    mtu: 1300
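
    For example, after changing the mtu value you can apply the setting by restarting the host:

    $ sudo reboot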

2.2. Restarting the ovnkube-master pod

The following procedure restarts the ovnkube-master pod.

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • Access to the cluster as a user with the cluster-admin role.
  • A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
  • The KUBECONFIG environment variable is set.

Procedure

Use the following steps to restart the ovnkube-master pod.

  1. Access the remote cluster by running the following command:

    $ export KUBECONFIG=$PWD/kubeconfig
  2. Find the name of the ovnkube-master pod that you want to restart by running the following command:

    $ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')
  3. Delete the ovnkube-master pod by running the following command:

    $ oc -n openshift-ovn-kubernetes delete pod $pod
  4. Confirm that a new ovnkube-master pod is running by using the following command:

    $ oc get pods -n openshift-ovn-kubernetes

    The listing of the running pods shows a new ovnkube-master pod name and age.
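
The lookup and delete steps can also be combined into a single command; this is a convenience sketch that reuses the same filter from the procedure:

    $ oc -n openshift-ovn-kubernetes delete pod "$(oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}')"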

2.3. Deploying MicroShift behind an HTTP or HTTPS proxy

Deploy a MicroShift cluster behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.

When deploying MicroShift behind a proxy, you must configure the host operating system so that all components initiating HTTP or HTTPS requests use the proxy service.

All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in MicroShift.
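
For example, one common pattern is to pass the proxy settings to a workload as environment variables in its pod spec. The following Deployment sketch uses the same placeholder proxy values as the examples later in this chapter; the name and image are illustrative only:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-egress-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-egress-app
      template:
        metadata:
          labels:
            app: example-egress-app
        spec:
          containers:
          - name: app
            image: registry.example.com/example/app:latest
            env:
            - name: HTTP_PROXY
              value: "http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
            - name: HTTPS_PROXY
              value: "http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
            - name: NO_PROXY
              value: "localhost,127.0.0.1"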

2.4. Using the RPM-OStree HTTP or HTTPS proxy

To use the HTTP or HTTPS proxy in RPM-OStree, you must add a Service section to the configuration file and set the http_proxy environment variable for the rpm-ostreed service.

Procedure

  1. Add this setting to the /etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf file:

    [Service]
    Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
  2. Next, reload the configuration settings and restart the service to apply your changes.

    1. Reload the configuration settings by running the following command:

      $ sudo systemctl daemon-reload
    2. Restart the rpm-ostreed service by running the following command:

      $ sudo systemctl restart rpm-ostreed.service
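
Optionally, confirm that the proxy variable is now part of the service environment by querying the Environment property of the unit:

    $ sudo systemctl show rpm-ostreed.service -p Environment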

2.5. Using a proxy in the CRI-O container runtime

To use an HTTP or HTTPS proxy in CRI-O, you must add a Service section to the configuration file and set the HTTP_PROXY and HTTPS_PROXY environment variables. You can also set the NO_PROXY variable to exclude a list of hosts from being proxied.

Procedure

  1. Create the directory for the configuration file if it does not already exist:

    $ sudo mkdir -p /etc/systemd/system/crio.service.d/
  2. Add the following settings to the /etc/systemd/system/crio.service.d/00-proxy.conf file:

    [Service]
    Environment=NO_PROXY="localhost,127.0.0.1"
    Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
    Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
    Important

    You must define the environment variables in the Service section of the configuration file, or the proxy settings fail to apply.

  3. Reload the configuration settings:

    $ sudo systemctl daemon-reload
  4. Restart the CRI-O service:

    $ sudo systemctl restart crio
  5. Restart the MicroShift service to apply the settings:

    $ sudo systemctl restart microshift

Verification

  1. Verify that pods are started by running the following command and examining the output:

    $ oc get all -A
  2. Verify that MicroShift is able to pull container images by running the following command and examining the output:

    $ sudo crictl images

2.6. Getting a snapshot of OVS interfaces from a running cluster

A snapshot represents the state and data of OVS interfaces at a specific point in time.

Procedure

  • To see a snapshot of OVS interfaces from a running MicroShift cluster, use the following command:

    $ sudo ovs-vsctl show

    Example OVS interfaces in a running cluster

    9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
            Port patch-br-ex_localhost.localdomain-to-br-int 1
                Interface patch-br-ex_localhost.localdomain-to-br-int
                    type: patch
                    options: {peer=patch-br-int-to-br-ex_localhost.localdomain} 2
        Bridge br-int
            fail_mode: secure
            datapath_type: system
            Port patch-br-int-to-br-ex_localhost.localdomain
                Interface patch-br-int-to-br-ex_localhost.localdomain
                    type: patch
                    options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
            Port eebee1ce5568761
                Interface eebee1ce5568761 3
            Port b47b1995ada84f4
                Interface b47b1995ada84f4 4
            Port "3031f43d67c167f"
                Interface "3031f43d67c167f" 5
            Port br-int
                Interface br-int
                    type: internal
            Port ovn-k8s-mp0 6
                Interface ovn-k8s-mp0
                    type: internal
        ovs_version: "2.17.3"

    1
    The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain are OVS patch ports that connect br-ex and br-int.
    2
    The patch-br-ex_localhost.localdomain-to-br-int and patch-br-int-to-br-ex_localhost.localdomain are OVS patch ports that connect br-ex and br-int.
    3
    The pod interface eebee1ce5568761 is named by using the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
    4
    The pod interface b47b1995ada84f4 is named by using the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
    5
    The pod interface 3031f43d67c167f is named by using the first 15 characters of the pod sandbox ID and is plugged into the br-int bridge.
    6
    The OVS internal port for hairpin traffic, ovn-k8s-mp0, is created by the ovnkube-master container.
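
If you only need the list of ports that are attached to a particular bridge rather than the full snapshot, you can query a single bridge, for example:

    $ sudo ovs-vsctl list-ports br-int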

2.7. The MicroShift LoadBalancer service for workloads

MicroShift has a built-in implementation of network load balancers that you can use for your workloads and applications within the cluster. You can create a LoadBalancer service by configuring a pod to interpret ingress rules and serve as an ingress controller. The following procedure gives an example of a deployment-based LoadBalancer service.

2.8. Deploying a load balancer for an application

The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file. Use this example as guidance for how to deploy load balancers.

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
  • The KUBECONFIG environment variable is set.

Procedure

  1. Verify that your pods are running by entering the following command:

    $ oc get pods -A

    Example output

    NAMESPACE                            NAME                                                     READY   STATUS   RESTARTS  AGE
    default                              i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr  1/1     Running	   0		   46m
    kube-system                          csi-snapshot-controller-5c6586d546-lprv4                 1/1     Running	   0		   51m
    kube-system                          csi-snapshot-webhook-6bf8ddc7f5-kz6k9                    1/1     Running	   0		   51m
    openshift-dns                        dns-default-45jl7                                        2/2     Running	   0		   50m
    openshift-dns                        node-resolver-7wmzf                                      1/1     Running	   0		   51m
    openshift-ingress                    router-default-78b86fbf9d-qvj9s                          1/1     Running 	 0		   51m
    openshift-multus                     dhcp-daemon-j7qnf                                        1/1     Running    0		   51m
    openshift-multus                     multus-r758z                                             1/1     Running    0		   51m
    openshift-operator-lifecycle-manager catalog-operator-85fb86fcb9-t6zm7                        1/1     Running    0		   51m
    openshift-operator-lifecycle-manager olm-operator-87656d995-fvz84                             1/1     Running    0		   51m
    openshift-ovn-kubernetes             ovnkube-master-5rfhh                                     4/4     Running    0		   51m
    openshift-ovn-kubernetes             ovnkube-node-gcnt6                                       1/1     Running    0		   51m
    openshift-service-ca                 service-ca-bf5b7c9f8-pn6rk                               1/1     Running    0		   51m
    openshift-storage                    topolvm-controller-549f7fbdd5-7vrmv                      5/5     Running    0		   51m
    openshift-storage                    topolvm-node-rht2m                                       3/3     Running    0		   50m

  2. Create a namespace by running the following commands:

    $ NAMESPACE=<nginx-lb-test> 1
    1
    Replace <nginx-lb-test> with the application namespace that you want to create.
    $ oc create ns $NAMESPACE

    Example namespace

    The following example deploys three replicas of the test nginx application in the created namespace:

    oc apply -n $NAMESPACE -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx
    data:
      headers.conf: |
        add_header X-Server-IP  \$server_addr always;
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: quay.io/packit/nginx-unprivileged
            imagePullPolicy: Always
            name: nginx
            ports:
            - containerPort: 8080
            volumeMounts:
            - name: nginx-configs
              subPath: headers.conf
              mountPath: /etc/nginx/conf.d/headers.conf
            securityContext:
              allowPrivilegeEscalation: false
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop: ["ALL"]
              runAsNonRoot: true
          volumes:
            - name: nginx-configs
              configMap:
                name: nginx
                items:
                  - key: headers.conf
                    path: headers.conf
    EOF
  3. Verify that the three sample replicas started successfully by running the following command:

    $ oc get pods -n $NAMESPACE
  4. Create a LoadBalancer service for the nginx test application by running the following command:

    oc create -n $NAMESPACE -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      ports:
      - port: 81
        targetPort: 8080
      selector:
        app: nginx
      type: LoadBalancer
    EOF
    Note

    You must ensure that the port parameter is a host port that is not occupied by other LoadBalancer services or MicroShift components.

  5. Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:

    $ oc get svc -n $NAMESPACE

    Example output

    NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m

Verification

The following command forms five connections to the example nginx application using the external IP address of the LoadBalancer service configuration. The result of the command is a list of those server IP addresses.

  • Verify that the load balancer sends requests to all the running applications by running the following command:

    EXTERNAL_IP=192.168.1.241
    seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP

    The output of the previous command contains different IP addresses if the LoadBalancer service is successfully distributing the traffic to the applications, for example:

    Example output

    X-Server-IP: 10.42.0.41
    X-Server-IP: 10.42.0.41
    X-Server-IP: 10.42.0.43
    X-Server-IP: 10.42.0.41
    X-Server-IP: 10.42.0.43

2.9. Blocking external access to the NodePort service on a specific host interface

OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.

Prerequisites

  • You must have an account with root privileges.

Procedure

  1. Set the NODEPORT variable to the host port number assigned to your Kubernetes NodePort service by running the following command:

    # export NODEPORT=30700
  2. Set the INTERFACE_IP value to the IP address of the host interface that you want to block. For example:

    # export INTERFACE_IP=192.168.150.33
  3. Insert a new rule in the nat table PREROUTING chain to drop all packets that match the destination port and IP address. For example:

    $ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
  4. List the new rule by running the following command:

    $ sudo nft -a list chain ip nat PREROUTING
    table ip nat {
    	chain PREROUTING { # handle 1
    		type nat hook prerouting priority dstnat; policy accept;
    		tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134
    		counter packets 108 bytes 18074 jump OVN-KUBE-ETP # handle 116
    		counter packets 108 bytes 18074 jump OVN-KUBE-EXTERNALIP # handle 114
    		counter packets 108 bytes 18074 jump OVN-KUBE-NODEPORT # handle 112
    	}
    }
    Note

    Note the handle number of the newly added rule. You need the handle number to remove the rule in the following step.

  5. Remove the custom rule with the following sample command:

    $ sudo nft -a delete rule ip nat PREROUTING handle 134
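
    Optionally, list the chain again to confirm that the custom drop rule was removed:

    $ sudo nft -a list chain ip nat PREROUTING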

2.10. The multicast DNS protocol

You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.

MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift. The embedded DNS server allows .local domains exposed by MicroShift to be discovered by other elements on the LAN.
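
For example, a Linux client on the same LAN that has the Avahi tools installed can resolve a .local hostname exposed by MicroShift; the hostname below is a placeholder:

    $ avahi-resolve-host-name -4 microshift-host.local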

2.11. Auditing exposed network ports

On MicroShift, a workload can open host ports in the following cases. You can check the logs to audit which network services are exposed.

2.11.1. hostNetwork

When a pod is configured with the hostNetwork: true setting, the pod runs in the host network namespace and can open host ports directly. MicroShift component logs cannot be used to track this case; instead, the ports are subject to firewalld rules. If a port is opened in firewalld, you can view the port opening in the firewalld debug log.
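
A minimal pod sketch that runs in the host network namespace is shown below; the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostnetwork-example
    spec:
      hostNetwork: true
      containers:
      - name: app
        image: registry.example.com/example/app:latest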

Prerequisites

  • You have root user access to your build host.

Procedure

  1. Optional: You can check that the hostNetwork:true parameter is set in your ovnkube-node pod by using the following example command:

    $ sudo oc get pod -n openshift-ovn-kubernetes <ovnkube-node-pod-name> -o json | jq -r '.spec.hostNetwork'

    Example output

    true
  2. Enable debug logging for firewalld by editing the following configuration file and adding the debug option:

    $ sudo vi /etc/sysconfig/firewalld
    FIREWALLD_ARGS=--debug=10
  3. Restart the firewalld service:

    $ sudo systemctl restart firewalld.service
  4. To verify that the debug option was added properly, run the following command:

    $ sudo systemd-cgls -u firewalld.service

    The firewalld debug log is stored in the /var/log/firewalld path.

    Example logs for when the port open rule is added:

    2023-06-28 10:46:37 DEBUG1: config.getZoneByName('public')
    2023-06-28 10:46:37 DEBUG1: config.zone.7.addPort('8080', 'tcp')
    2023-06-28 10:46:37 DEBUG1: config.zone.7.getSettings()
    2023-06-28 10:46:37 DEBUG1: config.zone.7.update('...')
    2023-06-28 10:46:37 DEBUG1: config.zone.7.Updated('public')

    Example logs for when the port open rule is removed:

    2023-06-28 10:47:57 DEBUG1: config.getZoneByName('public')
    2023-06-28 10:47:57 DEBUG2: config.zone.7.Introspect()
    2023-06-28 10:47:57 DEBUG1: config.zone.7.removePort('8080', 'tcp')
    2023-06-28 10:47:57 DEBUG1: config.zone.7.getSettings()
    2023-06-28 10:47:57 DEBUG1: config.zone.7.update('...')
    2023-06-28 10:47:57 DEBUG1: config.zone.7.Updated('public')

2.11.2. hostPort

When a workload maps a container port to a host port with the hostPort setting, CRI-O opens that host port, and you can track the port being opened and closed in the CRI-O logs in MicroShift.
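
A minimal pod sketch that opens a host port with the hostPort field is shown below; the names and image are placeholders, and the host port number matches the log examples that follow:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-example
    spec:
      containers:
      - name: web
        image: registry.example.com/example/web:latest
        ports:
        - containerPort: 8443
          hostPort: 443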

Procedure

  • You can access the logs by running the following command:

    $ journalctl -u crio | grep "local port"

    Example CRI-O logs when the host port is opened:

    Jun 25 16:27:37 rhel92 crio[77216]: time="2023-06-25 16:27:37.033003098+08:00" level=info msg="Opened local port tcp:443"

    Example CRI-O logs when the host port is closed:

    Jun 25 16:24:11 rhel92 crio[77216]: time="2023-06-25 16:24:11.342088450+08:00" level=info msg="Closing host port tcp:443"

2.11.3. NodePort and LoadBalancer services

OVN-Kubernetes opens host ports for the NodePort and LoadBalancer service types. These services add iptables rules that take the ingress traffic from the host port and forward it to the cluster IP address. Logs for the NodePort and LoadBalancer services are presented in the following examples:

Procedure

  1. To find the name of your ovnkube-master pod, run the following command:

    $ oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}'

    Example ovnkube-master pod name

    ovnkube-master-n2shv

  2. Access the NodePort and LoadBalancer service logs from the ovnkube-master container of your ovnkube-master pod by running the following example command:

    $ oc logs -n openshift-ovn-kubernetes <ovnkube-master-pod-name> ovnkube-master | grep -E "OVN-KUBE-NODEPORT|OVN-KUBE-EXTERNALIP"

    NodePort service:

    Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is open:

    I0625 09:07:00.992980 2118395 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0

    Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is closed:

    Deleting rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0

    LoadBalancer service:

    Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is open:

    I0625 09:34:10.406067  128902 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0

    Example logs in the ovnkube-master container of the ovnkube-master pod when a host port is closed:

    I0625 09:37:00.976953  128902 iptables.go:63] Deleting rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
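
You can also inspect the resulting rules directly on the host. The following sketch assumes the iptables frontend is available on the host and uses the OVN-KUBE-NODEPORT chain name from the log examples above:

    $ sudo iptables -t nat -S OVN-KUBE-NODEPORT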
