Chapter 18. OpenShift SDN default CNI network provider
18.1. About the OpenShift SDN default CNI network provider
OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS).
18.1.1. OpenShift SDN network isolation modes
OpenShift SDN provides three SDN modes for configuring the pod network:
- Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.10.
- Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services.
- Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode.
18.1.2. Supported default CNI network provider feature matrix
OpenShift Container Platform offers two supported choices, OpenShift SDN and OVN-Kubernetes, for the default Container Network Interface (CNI) network provider. The following table summarizes the current feature support for both network providers:
Feature | OpenShift SDN | OVN-Kubernetes |
---|---|---|
Egress IPs | Supported | Supported |
Egress firewall [1] | Supported | Supported |
Egress router | Supported | Supported [2] |
IPsec encryption | Not supported | Supported |
IPv6 | Not supported | Supported [3] [4] |
Kubernetes network policy | Supported | Supported |
Kubernetes network policy logs | Not supported | Supported |
Multicast | Supported | Supported |
Hardware offloading | Not supported | Supported |
1. Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
2. Egress router for OVN-Kubernetes supports only redirect mode.
3. IPv6 is supported only on bare metal clusters.
4. IPv6 single stack does not support Kubernetes NMState.
18.2. Configuring egress IPs for a project
As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) cluster network provider to assign one or more egress IP addresses to a project.
18.2.1. Egress IP address architectural design and implementation
The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.
For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.
In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.
An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster.
Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0.
18.2.1.1. Platform support
Support for the egress IP address functionality on various platforms is summarized in the following table:
Platform | Supported |
---|---|
Bare metal | Yes |
VMware vSphere | Yes |
Red Hat OpenStack Platform (RHOSP) | No |
Amazon Web Services (AWS) | Yes |
Google Cloud Platform (GCP) | Yes |
Microsoft Azure | Yes |
The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). (BZ#2039656)
18.2.1.2. Public cloud platform considerations
For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity, can be described in the following formula:
IP capacity = public cloud default capacity - sum(current IP assignments)
While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes.
To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node.
The annotation value is an array with a single object with fields that provide the following information for the primary network interface:
- interface: Specifies the interface ID on AWS and Azure and the interface name on GCP.
- ifaddr: Specifies the subnet mask for one or both IP address families.
- capacity: Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses.
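If you only need this one annotation, a jsonpath query is a convenient shortcut. This is a minimal sketch; <node_name> is a placeholder for one of your nodes, and the backslashes escape the dots in the annotation key:
$ oc get node <node_name> \
    -o jsonpath='{.metadata.annotations.cloud\.network\.openshift\.io/egress-ipconfig}'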
The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability.
Example cloud.network.openshift.io/egress-ipconfig annotation on AWS
cloud.network.openshift.io/egress-ipconfig: [
  {
    "interface":"eni-078d267045138e436",
    "ifaddr":{"ipv4":"10.0.128.0/18"},
    "capacity":{"ipv4":14,"ipv6":15}
  }
]
Example cloud.network.openshift.io/egress-ipconfig annotation on GCP
cloud.network.openshift.io/egress-ipconfig: [
  {
    "interface":"nic0",
    "ifaddr":{"ipv4":"10.0.128.0/18"},
    "capacity":{"ip":14}
  }
]
The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation.
18.2.1.2.1. Amazon Web Services (AWS) IP address capacity limits
On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type.
18.2.1.2.2. Google Cloud Platform (GCP) IP address capacity limits
On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity.
The following capacity limits exist for IP aliasing assignment:
- Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 10.
- Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000.
For more information, see Per instance quotas and Alias IP ranges overview.
18.2.1.2.3. Microsoft Azure IP address capacity limits
On Azure, the following capacity limits exist for IP address assignment:
- Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256.
- Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536.
For more information, see Networking limits.
18.2.1.3. Limitations
The following limitations apply when using egress IP addresses with the OpenShift SDN cluster network provider:
- You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes.
- If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment.
- You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation.
If you need to share IP addresses across namespaces, the OVN-Kubernetes cluster network provider egress IP address implementation allows you to span IP addresses across multiple namespaces.
If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577.
18.2.1.4. IP address assignment approaches
You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways:
- In the automatically assigned approach, an egress IP address range is assigned to a node.
- In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node.
Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace is dropped.
High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes.
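For reference, a NetNamespace object with an egress IP address assigned looks roughly like the following sketch; the project name, network ID, and address are illustrative values:
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: project1
netname: project1
netid: 16478056
egressIPs:
- 192.168.1.100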
18.2.1.4.1. Considerations when using automatically assigned egress IP addresses
When using the automatic assignment approach for egress IP addresses the following considerations apply:
- You set the egressCIDRs parameter of each node’s HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify.
If the node hosting the namespace’s egress IP address is unreachable, OpenShift Container Platform reassigns the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes.
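As an illustration, a HostSubnet resource after automatic assignment might look similar to the following sketch; the host name, IP addresses, and subnets are placeholder values:
apiVersion: network.openshift.io/v1
kind: HostSubnet
metadata:
  name: node1
host: node1
hostIP: 192.168.1.10
subnet: 10.128.0.0/23
egressCIDRs:
- 192.168.1.0/24
egressIPs:
- 192.168.1.100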
18.2.1.4.2. Considerations when using manually assigned egress IP addresses
This approach allows you to control which nodes can host an egress IP address.
If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see "Platform considerations" in a previous section.
When using the manual assignment approach for egress IP addresses the following considerations apply:
- You set the egressIPs parameter of each node’s HostSubnet resource to indicate the IP addresses that can be hosted by a node.
- Multiple egress IP addresses per namespace are supported.
If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply:
- If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node.
- If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random.
18.2.2. Configuring automatically assigned egress IP addresses for a namespace
In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Update the NetNamespace object with the egress IP address using the following JSON:
$ oc patch netnamespace <project_name> --type=merge -p \
  '{
    "egressIPs": [
      "<ip_address>"
    ]
  }'
where:
- <project_name>: Specifies the name of the project.
- <ip_address>: Specifies one or more egress IP addresses for the egressIPs array.
For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101:
$ oc patch netnamespace project1 --type=merge -p \
  '{"egressIPs": ["192.168.1.100"]}'
$ oc patch netnamespace project2 --type=merge -p \
  '{"egressIPs": ["192.168.1.101"]}'
Note: Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object.
Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON:
$ oc patch hostsubnet <node_name> --type=merge -p \
  '{
    "egressCIDRs": [
      "<ip_address_range>",
      "<ip_address_range>"
    ]
  }'
where:
- <node_name>: Specifies a node name.
- <ip_address_range>: Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array.
For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255:
$ oc patch hostsubnet node1 --type=merge -p \
  '{"egressCIDRs": ["192.168.1.0/24"]}'
$ oc patch hostsubnet node2 --type=merge -p \
  '{"egressCIDRs": ["192.168.1.0/24"]}'
OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2, or vice versa.
18.2.3. Configuring manually assigned egress IP addresses for a namespace
In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Update the NetNamespace object by specifying the following JSON object with the desired IP addresses:
$ oc patch netnamespace <project_name> --type=merge -p \
  '{
    "egressIPs": [
      "<ip_address>"
    ]
  }'
where:
- <project_name>: Specifies the name of the project.
- <ip_address>: Specifies one or more egress IP addresses for the egressIPs array.
For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101:
$ oc patch netnamespace project1 --type=merge \
  -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}'
To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally.
Note: Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object.
Manually assign the egress IP address to the node hosts.
If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity.
Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host:
$ oc patch hostsubnet <node_name> --type=merge -p \
  '{
    "egressIPs": [
      "<ip_address>",
      "<ip_address>"
    ]
  }'
where:
- <node_name>: Specifies a node name.
- <ip_address>: Specifies an IP address. You can specify more than one IP address for the egressIPs array.
For example, to specify that node1 should have the egress IPs 192.168.1.100, 192.168.1.101, and 192.168.1.102:
$ oc patch hostsubnet node1 --type=merge -p \
  '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}'
In the previous example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address.
18.2.4. Additional resources
- If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning.
18.3. Configuring an egress firewall for a project
As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster.
18.3.1. How an egress firewall works in a project
As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:
- A pod can only connect to internal hosts and cannot initiate connections to the public internet.
- A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster.
- A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster.
- A pod can connect to only specific external hosts.
For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources.
Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:
- An IP address range in CIDR format
- A DNS name that resolves to an IP address
If your egress firewall includes a deny rule for 0.0.0.0/0, access to your OpenShift Container Platform API servers is blocked. To ensure that pods can access the OpenShift Container Platform API servers, you must include the built-in join network 100.64.0.0/16 of Open Virtual Network (OVN) to allow access when using node ports together with an EgressFirewall. You must also include the IP address range that the API servers listen on in your egress firewall rules, as in the following example:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: <namespace> 1
spec:
  egress:
  - to:
      cidrSelector: <api_server_address_range> 2
    type: Allow
  # ...
  - to:
      cidrSelector: 0.0.0.0/0 3
    type: Deny
To find the IP address for your API servers, run oc get ep kubernetes -n default.
For more information, see BZ#1988324.
You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall.
If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects.
Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.
18.3.1.1. Limitations of an egress firewall
An egress firewall has the following limitations:
- No project can have more than one EgressNetworkPolicy object.
- A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project.
- The default project cannot use an egress firewall.
- When using the OpenShift SDN default Container Network Interface (CNI) network provider in multitenant mode, the following limitations apply:
  - Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command.
  - Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects.
Violating any of these restrictions results in a broken egress firewall for the project, and might cause all external network traffic to be dropped.
An Egress Firewall resource can be created in the kube-node-lease, kube-public, kube-system, openshift, and openshift- projects.
18.3.1.2. Matching order for egress firewall policy rules
The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
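To illustrate the first-match behavior, the following sketch (with illustrative addresses) allows traffic to one subnet and then denies everything else. Because the Allow rule is evaluated first, connections to 192.0.2.0/24 succeed even though the later rule denies 0.0.0.0/0:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  # Evaluated first: traffic to this subnet is allowed.
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
  # Evaluated last: everything not matched above is denied.
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0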
18.3.1.3. How Domain Name Server (DNS) resolution works
If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:
- Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds.
- The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
- Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes.
The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution.
If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server’s IP addresses if you are using domain names in your pods.
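For example, a policy along the following lines allows both the DNS lookups and the traffic to the destination referenced by name; the addresses are illustrative, and 203.0.113.53/32 stands in for an external DNS server used by the pods:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  # Allow lookups against the external DNS server used by the pods.
  - type: Allow
    to:
      cidrSelector: 203.0.113.53/32
  # Allow traffic to the destination referenced by DNS name.
  - type: Allow
    to:
      dnsName: www.example.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0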
18.3.2. EgressNetworkPolicy custom resource (CR) object
You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to.
The following YAML describes an EgressNetworkPolicy CR object:
EgressNetworkPolicy object
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: <name> 1
spec:
  egress: 2
  ...
18.3.2.1. EgressNetworkPolicy rules
The following YAML describes an egress firewall rule object. The egress stanza expects an array of one or more objects.
Egress policy rule stanza
egress:
- type: <type> 1
  to: 2
    cidrSelector: <cidr> 3
    dnsName: <dns_name> 4
18.3.2.2. Example EgressNetworkPolicy CR objects
The following example defines several egress firewall policy rules:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
name: default
spec:
egress: 1
- type: Allow
to:
cidrSelector: 1.2.3.0/24
- type: Allow
to:
dnsName: www.example.com
- type: Deny
to:
cidrSelector: 0.0.0.0/0
1. A collection of egress firewall policy rule objects.
18.3.3. Creating an egress firewall policy object
As a cluster administrator, you can create an egress firewall policy object for a project.
If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules.
Prerequisites
- A cluster that uses the OpenShift SDN default Container Network Interface (CNI) network provider plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Create a policy rule:
- Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.
- In the file you created, define an egress policy object.
Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to.
$ oc create -f <policy_name>.yaml -n <project>
In the following example, a new EgressNetworkPolicy object is created in a project named project1:
$ oc create -f default.yaml -n project1
Example output
egressnetworkpolicy.network.openshift.io/v1 created
Optional: Save the <policy_name>.yaml file so that you can make changes later.
18.4. Viewing an egress firewall for a project
As a cluster administrator, you can view the details of an existing egress firewall and its network traffic rules.
18.4.1. Viewing an EgressNetworkPolicy object
You can view an EgressNetworkPolicy object in your cluster.
Prerequisites
- A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- You must log in to the cluster.
Procedure
Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command:
$ oc get egressnetworkpolicy --all-namespaces
To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect.
$ oc describe egressnetworkpolicy <policy_name>
Example output
Name:        default
Namespace:   project1
Created:     20 minutes ago
Labels:      <none>
Annotations: <none>
Rule:        Allow to 1.2.3.0/24
Rule:        Allow to www.example.com
Rule:        Deny to 0.0.0.0/0
18.5. Editing an egress firewall for a project
As a cluster administrator, you can modify network traffic rules for an existing egress firewall.
18.5.1. Editing an EgressNetworkPolicy object
As a cluster administrator, you can update the egress firewall for a project.
Prerequisites
- A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project.
$ oc get -n <project> egressnetworkpolicy
Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy.
$ oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml
Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to.
After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object.
$ oc replace -f <filename>.yaml
18.6. Removing an egress firewall from a project
As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster.
18.6.1. Removing an EgressNetworkPolicy object
As a cluster administrator, you can remove an egress firewall from a project.
Prerequisites
- A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project.
$ oc get -n <project> egressnetworkpolicy
Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object.
$ oc delete -n <project> egressnetworkpolicy <name>
18.7. Considerations for the use of an egress router pod
18.7.1. About an egress router pod
The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses.
The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software.
The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic.
18.7.1.1. Egress router modes
In redirect mode, an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example:
$ curl <router_service_IP> <port>
In HTTP proxy mode, an egress router pod runs as an HTTP proxy on port 8080. This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable.
In DNS proxy mode, an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved, source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source.
Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode.
18.7.1.2. Egress router pod implementation
The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. Next the egress router pod executes the container to handle the egress router traffic. The image used varies depending on the egress router mode.
The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway.
Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable.
If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable.
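As a minimal sketch, either field goes into the egress router pod spec alongside the containers; the node name and label below are placeholder values, and you would normally use only one of the two:
spec:
  # Pin the egress router pod to one specific node ...
  nodeName: node1.example.com
  # ... or restrict scheduling to any node that carries a matching label.
  nodeSelector:
    site: springfield-1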
18.7.1.3. Deployment considerations
An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address.
- Red Hat OpenStack Platform (RHOSP)
If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail:
$ openstack port set --allowed-address \
    ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>
- Red Hat Virtualization (RHV)
- If you are using RHV, you must select No Network Filter for the Virtual network interface controller (vNIC).
- VMware vSphere
- If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client.
Specifically, ensure that the following are enabled:
- MAC Address Changes
- Forged Transmits
- Promiscuous Mode Operation
18.7.1.4. Failover configuration
To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-demo-controller
spec:
  replicas: 1 1
  selector:
    matchLabels:
      name: egress-router
  template:
    metadata:
      name: egress-router
      labels:
        name: egress-router
      annotations:
        pod.network.openshift.io/assign-macvlan: "true"
    spec: 2
      initContainers:
        ...
      containers:
        ...
18.7.2. Additional resources
18.8. Deploying an egress router pod in redirect mode
As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses.
18.8.1. Egress router pod specification for redirect mode
Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode:
apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true" 1
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE 2
      value: <egress_router>
    - name: EGRESS_GATEWAY 3
      value: <egress_gateway>
    - name: EGRESS_DESTINATION 4
      value: <egress_destination>
    - name: EGRESS_ROUTER_MODE
      value: init
  containers:
  - name: egress-router-wait
    image: registry.redhat.io/openshift4/ose-pod
1. The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod’s network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1.
2. IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet.
3. Same value as the default gateway used by the node.
4. External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25, with a source IP address of 192.168.12.99.
Example egress router pod specification
apiVersion: v1
kind: Pod
metadata:
  name: egress-multi
  labels:
    name: egress-multi
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99/24
    - name: EGRESS_GATEWAY
      value: 192.168.12.1
    - name: EGRESS_DESTINATION
      value: |
        80 tcp 203.0.113.25
        8080 tcp 203.0.113.26 80
        8443 tcp 203.0.113.26 443
        203.0.113.27
    - name: EGRESS_ROUTER_MODE
      value: init
  containers:
  - name: egress-router-wait
    image: registry.redhat.io/openshift4/ose-pod
18.8.2. Egress destination configuration format
When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats:
- <port> <protocol> <ip_address>: Incoming connections to the given <port> should be redirected to the same port on the given <ip_address>. <protocol> is either tcp or udp.
- <port> <protocol> <ip_address> <remote_port>: As above, except that the connection is redirected to a different <remote_port> on <ip_address>.
- <ip_address>: If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected.
In the example that follows, several rules are defined:
- The first line redirects traffic from local port 80 to port 80 on 203.0.113.25.
- The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26.
- The last line matches traffic for any ports not specified in the previous rules.
Example configuration
80 tcp 203.0.113.25
8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443
203.0.113.27
18.8.3. Deploying an egress router pod in redirect mode
In redirect mode, an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example:
$ curl <router_service_IP> <port>
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create an egress router pod.
To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example:
apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  type: ClusterIP
  selector:
    name: egress-1
Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address.
18.8.4. Additional resources
18.9. Deploying an egress router pod in HTTP proxy mode
As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services.
18.9.1. Egress router pod specification for HTTP mode
Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode:
apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true" 1
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE 2
      value: <egress-router>
    - name: EGRESS_GATEWAY 3
      value: <egress-gateway>
    - name: EGRESS_ROUTER_MODE
      value: http-proxy
  containers:
  - name: egress-router-pod
    image: registry.redhat.io/openshift4/ose-egress-http-proxy
    env:
    - name: EGRESS_HTTP_PROXY_DESTINATION 4
      value: |-
        ...
  ...
1. The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod’s network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1.
2. IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet.
3. Same value as the default gateway used by the node.
4. A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container.
18.9.2. Egress destination configuration format
When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny:
- An IP address allows connections to that IP address, such as 192.168.1.1.
- A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24.
- A hostname allows proxying to that host, such as www.example.com.
- A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com.
- A ! followed by any of the previous match expressions denies the connection instead.
- If the last line is *, then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied.
You can also use * to allow connections to all remote destinations.
Example configuration
!*.example.com
!192.168.1.0/24
192.168.2.1
*
18.9.3. Deploying an egress router pod in HTTP proxy mode
In HTTP proxy mode, an egress router pod runs as an HTTP proxy on port 8080. This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create an egress router pod.
To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example:
apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: http-proxy
    port: 8080 1
  type: ClusterIP
  selector:
    name: egress-1
1. Ensure the http port is set to 8080.
To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables:
apiVersion: v1
kind: Pod
metadata:
  name: app-1
  labels:
    name: app-1
spec:
  containers:
    env:
    - name: http_proxy
      value: http://egress-1:8080/ 1
    - name: https_proxy
      value: http://egress-1:8080/
  ...
1. The service created in the previous step.
Note: Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod.
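As a quick check, not part of the documented procedure, curl honors the lowercase http_proxy and https_proxy variables, so a command like the following run in the client pod should reach a destination that the proxy configuration allows; app-1 comes from the example above and <allowed_destination> is a placeholder:
$ oc exec app-1 -- curl -sI http://<allowed_destination>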
18.9.4. Additional resources
18.10. Deploying an egress router pod in DNS proxy mode
As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses.
18.10.1. Egress router pod specification for DNS mode
Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode:
apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true" 1
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE 2
      value: <egress-router>
    - name: EGRESS_GATEWAY 3
      value: <egress-gateway>
    - name: EGRESS_ROUTER_MODE
      value: dns-proxy
  containers:
  - name: egress-router-pod
    image: registry.redhat.io/openshift4/ose-egress-dns-proxy
    securityContext:
      privileged: true
    env:
    - name: EGRESS_DNS_PROXY_DESTINATION 4
      value: |-
        ...
    - name: EGRESS_DNS_PROXY_DEBUG 5
      value: "1"
  ...
1. The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod’s network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1.
2. IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet.
3. Same value as the default gateway used by the node.
4. Specify a list of one or more proxy destinations.
5. Optional: Specify to output the DNS proxy log output to stdout.
18.10.2. Egress destination configuration format
When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name.
An egress router pod supports the following formats for specifying port and destination mappings:
- Port and remote address
- You can specify a source port and a destination host by using the two field format: <port> <remote_address>.
The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address.
Port and remote address pair example
80 172.16.12.11
100 example.com
- Port, remote address, and remote port
- You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port>.
The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port.
Port, remote address, and remote port example
8080 192.168.60.252 80
8443 web.example.com 443
18.10.3. Deploying an egress router pod in DNS proxy mode
In DNS proxy mode, an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create an egress router pod.
Create a service for the egress router pod:
Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable.
apiVersion: v1
kind: Service
metadata:
  name: egress-dns-svc
spec:
  ports:
    ...
  type: ClusterIP
  selector:
    name: egress-dns-proxy
For example:
apiVersion: v1
kind: Service
metadata:
  name: egress-dns-svc
spec:
  ports:
  - name: con1
    protocol: TCP
    port: 80
    targetPort: 80
  - name: con2
    protocol: TCP
    port: 100
    targetPort: 100
  type: ClusterIP
  selector:
    name: egress-dns-proxy
To create the service, enter the following command:
$ oc create -f egress-router-service.yaml
Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address.
18.10.4. Additional resources
18.11. Configuring an egress router pod destination list from a config map
As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod.
18.11.1. Configuring an egress router destination mappings with a config map
For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly.
The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates.
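One way to pick up updated mappings, assuming the egress router is managed by the egress-demo-controller deployment from the earlier failover example, is to trigger a rollout; deleting a standalone egress router pod and recreating it works as well:
$ oc rollout restart deployment/egress-demo-controller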
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a file containing the mapping data for the egress router pod, as in the following example:
# Egress routes for Project "Test", version 3

80 tcp 203.0.113.25

8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443

# Fallback
203.0.113.27
You can put blank lines and comments into this file.
Create a ConfigMap object from the file:
$ oc delete configmap egress-routes --ignore-not-found
$ oc create configmap egress-routes \
    --from-file=destination=my-egress-destination.txt
In the previous command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from.
Tip: You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-routes
data:
  destination: |
    # Egress routes for Project "Test", version 3

    80 tcp 203.0.113.25

    8080 tcp 203.0.113.26 80
    8443 tcp 203.0.113.26 443

    # Fallback
    203.0.113.27
Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza:
...
env:
- name: EGRESS_DESTINATION
  valueFrom:
    configMapKeyRef:
      name: egress-routes
      key: destination
...
18.11.2. Additional resources
18.12. Enabling multicast for a project
18.12.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.
Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis.
When using the OpenShift SDN network plugin in networkpolicy isolation mode:
- Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast.
- Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects.
When using the OpenShift SDN network plugin in multitenant isolation mode:
- Multicast packets sent by a pod will be delivered to all other pods in the project.
- Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project.
18.12.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.
$ oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled=true
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.
$ oc project <project>
Create a pod to act as a multicast receiver:
$ cat <<EOF| oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mlistener
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi8
      command: ["/bin/sh", "-c"]
      args:
        ["dnf -y install socat hostname && sleep inf"]
      ports:
        - containerPort: 30102
          name: mlistener
          protocol: UDP
EOF
Create a pod to act as a multicast sender:
$ cat <<EOF| oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: msender
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi8
      command: ["/bin/sh", "-c"]
      args:
        ["dnf -y install socat && sleep inf"]
EOF
In a new terminal window or tab, start the multicast listener.
Get the IP address for the Pod:
$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')
Start the multicast listener by entering the following command:
$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
Start the multicast transmitter.
Get the pod network IP address range:
$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')
To send a multicast message, enter the following command:
$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"
If multicast is working, the previous command returns the following output:
mlistener
18.13. Disabling multicast for a project
18.13.1. Disabling multicast between pods
You can disable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Disable multicast by running the following command:
$ oc annotate netnamespace <namespace> \ 1
    netnamespace.network.openshift.io/multicast-enabled-
1. The namespace for the project you want to disable multicast for.
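To confirm the change, a quick check that is not part of the documented procedure, you can read the annotation back from the netnamespace; after a successful removal the query returns nothing:
$ oc get netnamespace <namespace> \
    -o jsonpath='{.metadata.annotations.netnamespace\.network\.openshift\.io/multicast-enabled}'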
18.14. Configuring network isolation using OpenShift SDN
When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN CNI plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode.
You can change the behavior of multitenant isolation for a project in two ways:
- You can join one or more projects, allowing network traffic between pods and services in different projects.
- You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects.
18.14.1. Prerequisites
- You must have a cluster configured to use the OpenShift SDN Container Network Interface (CNI) plugin in multitenant isolation mode.
18.14.2. Joining projects
You can join two or more projects to allow network traffic between pods and services in different projects.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Use the following command to join projects to an existing project network:
$ oc adm pod-network join-projects --to=<project1> <project2> <project3>
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label.
Optional: Run the following command to view the pod networks that you have joined together:
$ oc get netnamespaces
Projects in the same pod-network have the same network ID in the NETID column.
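The output looks similar to the following sketch; the project names and NETID values are illustrative, and the two joined projects share a network ID:
NAME       NETID      EGRESS IPS
default    0
project1   2831718
project2   2831718
project3   9437185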
18.14.3. Isolating a project
You can isolate a project so that pods and services in other projects cannot access its pods and services.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
To isolate the projects in the cluster, run the following command:
$ oc adm pod-network isolate-projects <project1> <project2>
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label.
18.14.4. Disabling network isolation for a project
You can disable network isolation for a project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Run the following command for the project:
$ oc adm pod-network make-projects-global <project1> <project2>
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label.
18.15. Configuring kube-proxy
The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services.
18.15.1. About iptables rules synchronization
The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node.
A sync begins when either of the following events occurs:
- An event occurs, such as a service or endpoint being added to or removed from the cluster.
- The time since the last sync exceeds the sync period defined for kube-proxy.
18.15.2. kube-proxy configuration parameters
You can modify the following kubeProxyConfig parameters.
Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.
Parameter | Description | Values | Default |
---|---|---|---|
iptablesSyncPeriod | The refresh period for iptables rules. | A time interval, such as 30s or 2m. Valid suffixes include s, m, and h. | 30s |
proxyArguments.iptables-min-sync-period | The minimum duration before refreshing the iptables rules. This parameter ensures that the refresh does not happen too frequently. | A time interval, such as 30s or 2m. Valid suffixes include s, m, and h. | 0s |
18.15.3. Modifying the kube-proxy configuration
You can modify the Kubernetes network proxy configuration for your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to a running cluster with the cluster-admin role.
Procedure
Edit the Network.operator.openshift.io custom resource (CR) by running the following command:
$ oc edit network.operator.openshift.io cluster
Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period: ["30s"]
Save the file and exit the text editor.
The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message.
Enter the following command to confirm the configuration update:
$ oc get networks.operator.openshift.io -o yaml
Example output
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    defaultNetwork:
      type: OpenShiftSDN
    kubeProxyConfig:
      iptablesSyncPeriod: 30s
      proxyArguments:
        iptables-min-sync-period:
        - 30s
    serviceNetwork:
    - 172.30.0.0/16
  status: {}
kind: List
Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change:
$ oc get clusteroperator network
Example output
NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE
network   4.1.0-0.9   True        False         False      1m
The AVAILABLE field is True when the configuration update is applied successfully.