Chapter 15. Configuring the SDN


15.1. Overview

The OpenShift SDN enables communication between pods across the OpenShift Container Platform cluster, establishing a pod network. Two SDN plug-ins are currently available (ovs-subnet and ovs-multitenant), which provide different methods for configuring the pod network. A third (ovs-networkpolicy) is currently in Tech Preview.

15.2. Available SDN Providers

The upstream Kubernetes project does not come with a default network solution. Instead, Kubernetes has developed the Container Network Interface (CNI) to allow network providers to integrate their own SDN solutions.

Several OpenShift SDN plug-ins are available out of the box from Red Hat, as well as third-party plug-ins.

Red Hat has worked with a number of SDN providers to certify their SDN network solution on OpenShift Container Platform via the Kubernetes CNI interface, including a support process for their SDN plug-in through their product’s entitlement process. Should you open a support case with OpenShift, Red Hat can facilitate an exchange process so that both companies are involved in meeting your needs.

The following SDN solutions are validated and supported on OpenShift Container Platform directly by the third-party vendor:

  • Cisco Contiv (™)
  • Juniper Contrail (™)
  • Nokia Nuage (™)
  • Tigera Calico (™)
  • VMware NSX-T (™)

Installing VMware NSX-T (™) on OpenShift Container Platform

VMware NSX-T (™) provides an SDN and security infrastructure to build cloud-native application environments. In addition to vSphere hypervisors (ESX), these environments include KVM and native public clouds.

The current integration requires a new installation of both NSX-T and OpenShift Container Platform. NSX-T version 2.0 is currently supported, and only the ESX and KVM hypervisors are supported at this time.

See the NSX-T Container Plug-in for OpenShift - Installation and Administration Guide for more information.

15.3. Configuring the Pod Network with Ansible

For initial advanced installations, the ovs-subnet plug-in is installed and configured by default, though it can be overridden during installation using the os_sdn_network_plugin_name parameter, which is configurable in the Ansible inventory file.

Example 15.1. Example SDN Configuration with Ansible

# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Configure the NetworkPolicy SDN plugin (Tech Preview)
# os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# default subdomain to use for exposed routes
#openshift_master_default_subdomain=apps.test.example.com

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Can not be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# Configure number of bits to allocate to each host’s subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# This variable specifies the service proxy implementation to use:
# either iptables for the pure-iptables version (the default),
# or userspace for the userspace proxy.
#openshift_node_proxy_mode=iptables
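
These inventory settings take effect when the advanced installation playbook is run against the inventory file. For reference, a typical invocation looks like the following; the playbook path assumes a standard RPM installation of the openshift-ansible package and may differ in your environment:

$ ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml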

For initial quick installations, the ovs-subnet plug-in is installed and configured by default as well, and can be reconfigured post-installation using the networkConfig stanza of the master-config.yaml file.

15.4. Configuring the Pod Network on Masters

Cluster administrators can control pod network settings on masters by modifying parameters in the networkConfig section of the master configuration file (located at /etc/origin/master/master-config.yaml by default):

networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14 1
  hostSubnetLength: 9 2
  networkPluginName: "redhat/openshift-ovs-subnet" 3
  serviceNetworkCIDR: 172.30.0.0/16 4
1 Cluster network for node IP allocation.
2 Number of bits for pod IP allocation within a node.
3 Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in, redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in, or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in.
4 Service IP allocation for the cluster.
Important

The serviceNetworkCIDR and hostSubnetLength values cannot be changed after the cluster is first created, and clusterNetworkCIDR can only be changed to be a larger network that still contains the original network. For example, given the default value of 10.128.0.0/14, you could change clusterNetworkCIDR to 10.128.0.0/9 (i.e., the entire upper half of net 10) but not to 10.64.0.0/16, because that does not contain the original network.
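
To make the containment rule concrete, the address ranges in the example work out as follows:

10.128.0.0/14  ->  10.128.0.0 to 10.131.255.255  (original cluster network)
10.128.0.0/9   ->  10.128.0.0 to 10.255.255.255  (contains the original range, so the change is allowed)
10.64.0.0/16   ->  10.64.0.0 to 10.64.255.255    (does not contain the original range, so the change is rejected)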

15.5. Configuring the Pod Network on Nodes

Cluster administrators can control pod network settings on nodes by modifying parameters in the networkConfig section of the node configuration file (located at /etc/origin/node/node-config.yaml by default):

networkConfig:
  mtu: 1450 1
  networkPluginName: "redhat/openshift-ovs-subnet" 2
1 Maximum transmission unit (MTU) for the pod overlay network.
2 Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in, redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in, or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in.
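
After changing node-config.yaml, restart the node service for the new settings to take effect. Assuming an RPM-based installation (using the same service name referenced in Section 15.6):

$ systemctl restart atomic-openshift-node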

15.6. Migrating Between SDN Plug-ins

If you are already using one SDN plug-in and want to switch to another:

  1. Change the networkPluginName parameter on all masters and nodes in their configuration files.
  2. Restart the atomic-openshift-master service on masters and the atomic-openshift-node service on nodes (see the example after these steps).
  3. If you are switching from an OpenShift SDN plug-in to a third-party plug-in, then clean up OpenShift SDN-specific artifacts:
$ oc delete clusternetwork --all
$ oc delete hostsubnets --all
$ oc delete netnamespaces --all
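
The service restarts in step 2 can be performed with systemctl. For example, assuming a single-master, RPM-based installation:

# On each master
$ systemctl restart atomic-openshift-master

# On each node
$ systemctl restart atomic-openshift-node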

When switching from the ovs-subnet to the ovs-multitenant OpenShift SDN plug-in, all the existing projects in the cluster will be fully isolated (assigned unique VNIDs). Cluster administrators can choose to modify the project networks using the administrator CLI.

Check VNIDs by running:

$ oc get netnamespace
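
For example, with the ovs-multitenant plug-in, project networks can be joined or made global using the oc adm pod-network commands. The project names below are placeholders:

# Merge project-b into the network of project-a
$ oc adm pod-network join-projects --to=project-a project-b

# Allow shared-project to reach, and be reached by, all other projects
$ oc adm pod-network make-projects-global shared-project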

15.7. External Access to the Cluster Network

If a host that is external to OpenShift Container Platform requires access to the cluster network, you have two options:

  1. Configure the host as an OpenShift Container Platform node but mark it unschedulable so that the master does not schedule containers on it (see the example below).
  2. Create a tunnel between your host and a host that is on the cluster network.

Both options are presented as part of a practical use-case in the documentation for configuring routing from an edge load-balancer to containers within OpenShift SDN.
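
For the first option, the administrator CLI can be used to mark the node unschedulable; the node name below is a placeholder:

$ oc adm manage-node node1.example.com --schedulable=false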

15.8. Using Flannel

As an alternative to the default SDN, OpenShift Container Platform also provides Ansible playbooks for installing flannel-based networking. This is useful if you are running OpenShift Container Platform within a cloud provider platform that also relies on SDN, such as Red Hat OpenStack Platform, and want to avoid encapsulating packets twice through both platforms.

Flannel uses a single IP network space for all of the containers, allocating a contiguous subset of the space to each instance. Consequently, nothing prevents a container from attempting to contact any IP address in the same network space. This hinders multi-tenancy because the network cannot be used to isolate containers in one application from those in another.

When choosing between OpenShift SDN and flannel for the internal network, decide based on whether you need multi-tenant isolation (OpenShift SDN) or prefer performance (flannel).

Important

Flannel is only supported for OpenShift Container Platform on Red Hat OpenStack Platform.

Important

The current version of Neutron enforces port security on ports by default. This prevents the port from sending or receiving packets with a MAC address different from that on the port itself. Flannel creates virtual MACs and IP addresses and must send and receive packets on the port, so port security must be disabled on the ports that carry flannel traffic.

To enable flannel within your OpenShift Container Platform cluster:

  1. Neutron port security controls must be configured to be compatible with Flannel. The default configuration of Red Hat OpenStack Platform disables user control of port_security. Configure Neutron to allow users to control the port_security setting on individual ports.

    1. On the Neutron servers, add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini file:

      [ml2]
      ...
      extension_drivers = port_security
    2. Then, restart the Neutron services:

      service neutron-dhcp-agent restart
      service neutron-ovs-cleanup restart
      service neutron-metadata-agent restart
      service neutron-l3-agent restart
      service neutron-plugin-openvswitch-agent restart
      service neutron-vpn-agent restart
      service neutron-server restart
  2. When creating the OpenShift Container Platform instances on Red Hat OpenStack Platform, disable both port security and security groups on the ports where the container network flannel interface will reside:

    neutron port-update $port --no-security-groups --port-security-enabled=False
    Note

    Flannel gathers information from etcd to configure and assign the subnets to the nodes. Therefore, the security group attached to the etcd hosts must allow access from the nodes to port 2379/tcp, and the nodes' security group must allow egress communication to that port on the etcd hosts.

    1. Set the following variables in your Ansible inventory file before running the installation:

      openshift_use_openshift_sdn=false 1
      openshift_use_flannel=true 2
      flannel_interface=eth0
      1 Set openshift_use_openshift_sdn to false to disable the default SDN.
      2 Set openshift_use_flannel to true to enable flannel in its place.
    2. Optionally, you can specify the interface to use for inter-host communication using the flannel_interface variable. Without this variable, the OpenShift Container Platform installation uses the default interface.

      Note

      Custom networking CIDR for pods and services using flannel will be supported in a future release. BZ#1473858

  3. After the OpenShift Container Platform installation, add a set of iptables rules on every OpenShift Container Platform node:

    iptables -A DOCKER -p all -j ACCEPT
    iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    To persist those changes in /etc/sysconfig/iptables, run the following commands on every node:

    cp /etc/sysconfig/iptables{,.orig}
    sh -c "tac /etc/sysconfig/iptables.orig | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' | awk '"\!"p && /POSTROUTING/{print \"-A POSTROUTING -o eth1 -j MASQUERADE\"; p=1} 1' | tac > /etc/sysconfig/iptables"
    Note

    The iptables-save command saves all of the current in-memory iptables rules. However, because Docker, Kubernetes, and OpenShift Container Platform create a large number of iptables rules (for services and so on) that are not designed to be persisted, saving all of these rules can become problematic.

To isolate container traffic from the rest of the OpenShift Container Platform traffic, Red Hat recommends creating an isolated tenant network and attaching all of the nodes to it. If you are using a different network interface (eth1 in this example), remember to configure the interface to start at boot time through the /etc/sysconfig/network-scripts/ifcfg-eth1 file:

DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes
DEFROUTE=no
PEERDNS=no
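
After creating the file, the interface can typically be brought up without a reboot:

$ ifup eth1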