Chapter 15. Configuring the SDN
15.1. Overview
The OpenShift Container Platform SDN enables communication between pods across the OpenShift Container Platform cluster, establishing a pod network. Two SDN plug-ins are currently available (ovs-subnet and ovs-multitenant), which provide different methods for configuring the pod network.
15.2. Configuring the Pod Network with Ansible
For initial advanced installations, the ovs-subnet plug-in is installed and configured by default. It can be overridden during installation using the os_sdn_network_plugin_name parameter, which is configurable in the Ansible inventory file.
Example 15.1. Example SDN Configuration with Ansible
# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# default subdomain to use for exposed routes
#openshift_master_default_subdomain=apps.test.example.com

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Can not be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# Configure number of bits to allocate to each host's subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# This variable specifies the service proxy implementation to use:
# either iptables for the pure-iptables version (the default),
# or userspace for the userspace proxy.
#openshift_node_proxy_mode=iptables
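As the inventory comments note, the pod network block and the service network block must be private and must not conflict with each other or with existing network blocks in your infrastructure. A minimal sketch of how such a conflict can be detected, using Python's standard ipaddress module with the CIDR values from the example above (the infrastructure range is a hypothetical value for illustration):

```python
import ipaddress

# CIDR blocks from the inventory example above.
cluster_network = ipaddress.ip_network("10.1.0.0/16")   # osm_cluster_network_cidr
portal_net = ipaddress.ip_network("172.30.0.0/16")      # openshift_portal_net
existing_block = ipaddress.ip_network("10.1.128.0/20")  # hypothetical infrastructure range

print(cluster_network.overlaps(portal_net))     # False: the example values do not conflict
print(cluster_network.overlaps(existing_block)) # True: this range would conflict with the pod network
```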
For initial quick installations, the ovs-subnet plug-in is installed and configured by default as well, and can be reconfigured post-installation using the networkConfig stanza of the master-config.yaml file.
15.3. Configuring the Pod Network on Masters
Cluster administrators can control pod network settings on masters by modifying parameters in the networkConfig section of the master configuration file (located at /etc/origin/master/master-config.yaml by default):
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  hostSubnetLength: 9
  networkPluginName: "redhat/openshift-ovs-subnet"
  serviceNetworkCIDR: 172.30.0.0/16
The serviceNetworkCIDR and hostSubnetLength values cannot be changed after the cluster is first created, and clusterNetworkCIDR can only be changed to a larger network that still contains the original network. For example, given the default value of 10.128.0.0/14, you could change clusterNetworkCIDR to 10.128.0.0/9 (i.e., the entire upper half of net 10) but not to 10.64.0.0/16, because that does not contain the original value.
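The containment rule above can be checked mechanically. A minimal sketch using Python's standard ipaddress module, with the CIDR values from this section:

```python
import ipaddress

# The default clusterNetworkCIDR.
original = ipaddress.ip_network("10.128.0.0/14")

# A new clusterNetworkCIDR is valid only if it fully contains the original network.
def is_valid_expansion(new_cidr: str) -> bool:
    return original.subnet_of(ipaddress.ip_network(new_cidr))

print(is_valid_expansion("10.128.0.0/9"))  # True: the upper half of net 10 contains 10.128.0.0/14
print(is_valid_expansion("10.64.0.0/16"))  # False: does not contain the original network
```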
15.4. Configuring the Pod Network on Nodes
Cluster administrators can control pod network settings on nodes by modifying parameters in the networkConfig section of the node configuration file (located at /etc/origin/node/node-config.yaml by default).
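As a sketch, a node networkConfig stanza typically looks like the following; the mtu value shown is illustrative and depends on your environment:

```yaml
networkConfig:
  mtu: 1450
  networkPluginName: "redhat/openshift-ovs-subnet"
```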
15.5. Migrating Between SDN Plug-ins
If you are already using one SDN plug-in and want to switch to another:
- Change the networkPluginName parameter on all masters and nodes in their configuration files.
- Restart the atomic-openshift-master service on masters and the atomic-openshift-node service on nodes.
- If you are switching from an OpenShift Container Platform SDN plug-in to a third-party plug-in, then clean up OpenShift Container Platform SDN-specific artifacts:
$ oc delete clusternetwork --all
$ oc delete hostsubnets --all
$ oc delete netnamespaces --all
When switching from the ovs-subnet to the ovs-multitenant OpenShift Container Platform SDN plug-in, all the existing projects in the cluster will be fully isolated (assigned unique VNIDs). Cluster administrators can choose to modify the project networks using the administrator CLI.
Check VNIDs by running:
$ oc get netnamespace
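The project-network changes mentioned above are made with the oc adm pod-network subcommands. A sketch of typical invocations (the project names are placeholders):

```shell
# Join two projects so their pods can communicate (they share one VNID):
$ oc adm pod-network join-projects --to=project-a project-b

# Isolate a project again:
$ oc adm pod-network isolate-projects project-b

# Make a project global (VNID 0, reachable from and able to reach all projects):
$ oc adm pod-network make-projects-global project-c
```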
15.6. External Access to the Cluster Network
If a host that is external to OpenShift Container Platform requires access to the cluster network, you have two options:
- Configure the host as an OpenShift Container Platform node but mark it unschedulable so that the master does not schedule containers on it.
- Create a tunnel between your host and a host that is on the cluster network.
Both options are presented as part of a practical use-case in the documentation for configuring routing from an edge load-balancer to containers within OpenShift Container Platform SDN.
15.7. Using Flannel
As an alternative to the default SDN, OpenShift Container Platform also provides Ansible playbooks for installing flannel-based networking. This is useful if running OpenShift Container Platform within a cloud provider platform that also relies on SDN, such as Red Hat OpenStack Platform, and you want to avoid encapsulating packets twice through both platforms.
This is only supported for OpenShift Container Platform on Red Hat OpenStack Platform.
Neutron port security must be configured, even when security groups are not being used.
To enable flannel within your OpenShift Container Platform cluster:
Neutron port security controls must be configured to be compatible with Flannel. The default configuration of Red Hat OpenStack Platform disables user control of port_security, so you must configure Neutron to allow users to control the port_security setting on individual ports. On the Neutron servers, add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini file:
[ml2]
...
extension_drivers = port_security
Then, restart the Neutron services:
service neutron-dhcp-agent restart
service neutron-ovs-cleanup restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
service neutron-plugin-openvswitch-agent restart
service neutron-vpn-agent restart
service neutron-server restart
Set the following variables in your Ansible inventory file before running the installation:
openshift_use_openshift_sdn=false
openshift_use_flannel=true
flannel_interface=eth0
Optionally, you can specify the interface to use for inter-host communication using the flannel_interface variable. Without this variable, the OpenShift Container Platform installation uses the default interface.