OVN-Kubernetes network plugin
In-depth configuration and troubleshooting for the OVN-Kubernetes network plugin in Red Hat OpenShift Service on AWS classic architecture
Chapter 1. About the OVN-Kubernetes network plugin
The Red Hat OpenShift Service on AWS classic architecture cluster uses a virtualized network for pod and service networks.
Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for Red Hat OpenShift Service on AWS classic architecture. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
OVN-Kubernetes is the default networking solution for Red Hat OpenShift Service on AWS classic architecture and single-node OpenShift deployments.
OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to decide how packets travel through the network. For more information, see the Open Virtual Network website.
OVN-Kubernetes is a series of daemons for OVS that transform virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. It provides a means for remotely controlling the flow of network traffic on a network device, so that network administrators can configure, manage, and monitor the flow of network traffic.
OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing within logical flows that translate to OpenFlow rules. For example, if a pod sends a DHCP request to the DHCP server on the network, a logical flow rule for the request helps OVN-Kubernetes handle the packet, so that the server can respond with the gateway, DNS server, IP address, and other information.
OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:
- Egress IPs
- Firewalls
- Hardware offloading
- Hybrid networking
- Internet Protocol Security (IPsec) encryption
- IPv6
- Multicast
- Network policy and network policy logs
- Routers
1.1. OVN-Kubernetes purpose
The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies:
- OVN to manage network traffic flows.
- Kubernetes network policy support and logs, including ingress and egress rules.
- The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes.
The OVN-Kubernetes network plugin supports the following capabilities:
- Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking.
- Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading.
- IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power®, IBM Z®, and Red Hat OpenStack Platform (RHOSP) platforms.
- IPv6 single-stack networking on RHOSP and bare metal platforms.
- IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform.
- Egress firewall devices and egress IP addresses.
- Egress router devices that operate in redirect mode.
- IPsec encryption of intracluster communications.
Red Hat does not support the following postinstallation configurations that use the OVN-Kubernetes network plugin:
- Configuring the primary network interface, including using the NMState Operator to configure bonding for the interface.
- Configuring a sub-interface or additional network interface on a network device that uses the Open vSwitch (OVS) or an OVN-Kubernetes br-ex bridge network.
- Creating additional virtual local area networks (VLANs) on the primary network interface.
- Using the primary network interface, such as eth0 or bond0, that you created for a node during cluster installation to create additional secondary networks.
Red Hat does support the following postinstallation configurations that use the OVN-Kubernetes network plugin:
- Creating additional VLANs from the base physical interface, such as eth0.100, where you configured the primary network interface as a VLAN for a node during cluster installation. This works because the Open vSwitch (OVS) bridge attaches to the initial VLAN sub-interface, such as eth0.100, leaving the base physical interface available for new configurations.
- Creating an additional OVN secondary network with a localnet topology. This requires that you define the secondary network in a NodeNetworkConfigurationPolicy (NNCP) object. After you create the network, pods or virtual machines (VMs) can attach to the network. These secondary networks give a dedicated connection to the physical network, which might or might not use VLAN tagging. You cannot access these networks from the host network of a node where the host does not have the required setup, such as the required network settings.
1.2. OVN-Kubernetes IPv6 and dual-stack limitations
The OVN-Kubernetes network plugin has the following limitations:
- For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

  I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
  I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
  F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

  The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.
- For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

  I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
  F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface

  The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.
- If you set the ipv6.disable parameter to 1 in the kernelArguments section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of Red Hat OpenShift Service on AWS classic architecture fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1.
1.3. Session affinity
Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.
1.3.1. Stickiness timeout for session affinity
The OVN-Kubernetes network plugin for Red Hat OpenShift Service on AWS classic architecture calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
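As context for the timeoutSeconds parameter described above, the following is a minimal sketch of a Kubernetes Service that enables client IP session affinity; the service name, selector, and port values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service      # illustrative name
spec:
  selector:
    app: example             # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # The sticky session expires after this many seconds without a
      # packet from the client. 10800 seconds (3 hours) is the default.
      timeoutSeconds: 10800
```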
Chapter 2. Configuring an egress IP address
As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
2.1. Egress IP address architectural design and implementation
By using the Red Hat OpenShift Service on AWS classic architecture egress IP address functionality, you can ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.
For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.
In ROSA with HCP clusters, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.
The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported.
The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability.
Example cloud.network.openshift.io/egress-ipconfig annotation on AWS
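The annotation example itself was lost in conversion; the following is a representative sketch of the annotation's shape on an AWS node. The interface ID, subnet, and capacity values are illustrative:

```yaml
cloud.network.openshift.io/egress-ipconfig: |
  [
    {
      "interface": "eni-078d267045138e436",
      "ifaddr": { "ipv4": "10.0.128.0/18" },
      "capacity": { "ipv4": 14, "ipv6": 15 }
    }
  ]
```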
The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation.
2.1.1. Amazon Web Services (AWS) IP address capacity limits
On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type.
2.1.2. Architectural diagram of an egress IP address configuration
The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network.
Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses.
The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102. The traffic is balanced roughly equally between these two nodes.
Based on the diagram, the following manifest file defines namespaces:
Namespace objects
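The manifest itself was lost in conversion; the following is a minimal sketch consistent with the diagram. The namespace names are illustrative, and both namespaces carry the env: prod label that the EgressIP object selects:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: namespace1   # illustrative name
  labels:
    env: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace2   # illustrative name
  labels:
    env: prod
```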
Based on the diagram, the following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod. The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102.
EgressIP object
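The object itself was lost in conversion; the following is a sketch reconstructed from the surrounding description. The object name and node names are illustrative; the egress IP addresses and the env: prod selector come from the diagram description:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressips-prod     # illustrative name
spec:
  egressIPs:
  - 192.168.126.10
  - 192.168.126.102
  namespaceSelector:
    matchLabels:
      env: prod
status:
  items:
  - node: node1            # illustrative node names
    egressIP: 192.168.126.10
  - node: node3
    egressIP: 192.168.126.102
```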
For the configuration in the previous example, Red Hat OpenShift Service on AWS classic architecture assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned.
2.2. EgressIP object
Review the following YAML files to understand how you can configure an EgressIP object to meet your needs.
The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide and is not created in a namespace.
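The YAML itself was lost in conversion; the following is a sketch of the object structure, reconstructed from the field descriptions that follow:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: <name>
spec:
  egressIPs:
  - <ip_address>
  namespaceSelector:
    <namespace_selector>
  podSelector:
    <pod_selector>
```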
where:
- <name>: The name for the EgressIP object.
- <egressIPs>: An array of one or more IP addresses.
- <namespaceSelector>: One or more selectors for the namespaces to associate the egress IP addresses with.
- <podSelector>: Optional parameter. One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace.
The following YAML describes the stanza for the namespace selector:
Namespace selector stanza
namespaceSelector:
matchLabels:
<label_name>: <label_value>
where:
- <namespaceSelector>: One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected.
The following YAML describes the optional stanza for the pod selector:
Pod selector stanza
podSelector:
matchLabels:
<label_name>: <label_value>
where:
- <podSelector>: Optional parameter. One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Other pods in the namespace are not selected.
In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod:
Example EgressIP object
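The example itself was lost in conversion; the following is a sketch reconstructed from the preceding description. The object name is illustrative; the addresses and labels come from that description:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egress-group1    # illustrative name
spec:
  egressIPs:
  - 192.168.126.11
  - 192.168.126.102
  podSelector:
    matchLabels:
      app: web
  namespaceSelector:
    matchLabels:
      env: prod
```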
In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development:
Example EgressIP object
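The example itself was lost in conversion; the following is a sketch reconstructed from the preceding description, using a matchExpressions rule with the NotIn operator to exclude the development label. The object name is illustrative:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egress-group2    # illustrative name
spec:
  egressIPs:
  - 192.168.127.30
  - 192.168.127.40
  namespaceSelector:
    matchExpressions:
    - key: environment
      operator: NotIn
      values:
      - development
```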
2.3. Assignment of egress IPs to a namespace, nodes, and pods
To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied:
- At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label.
- An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace.
If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, Red Hat OpenShift Service on AWS classic architecture might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label.
To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects.
When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label:
- An egress IP address is never assigned to more than one node at a time.
- An egress IP address is equally balanced between available nodes that can host the egress IP address.
If the spec.egressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply:
- No node will ever host more than one of the specified IP addresses.
- Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
- If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions.
When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod.
Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2, either might be used for each TCP connection or UDP conversation.
2.4. Assigning an egress IP address to a namespace
You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a cluster administrator.
- Configure at least one node to host an egress IP address.
Procedure
Create an EgressIP object:

- Create a <egressips_name>.yaml file, where <egressips_name> is the name of the object.
- In the file that you created, define an EgressIP object.
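The in-procedure example was lost in conversion; the following is a minimal sketch of an EgressIP object whose namespace selector matches the env=qa label that a later step in this procedure applies. The object name and IP address are illustrative:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: <egressips_name>
spec:
  egressIPs:
  - 192.168.126.101      # illustrative address from your node subnet
  namespaceSelector:
    matchLabels:
      env: qa            # matches the label applied later in this procedure
```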
- To create the object, enter the following command:

  $ oc apply -f <egressips_name>.yaml

  Replace <egressips_name> with the name of the object.
Example output:

  egressips.k8s.ovn.org/<egressips_name> created

- Optional: Store the <egressips_name>.yaml file so that you can make changes later.
- Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an EgressIP object defined in step 1, run the following command:

  $ oc label ns <namespace> env=qa

  Replace <namespace> with the namespace that requires egress IP addresses.
Verification
To show all egress IP addresses that are in use in your cluster, enter the following command:
  $ oc get egressip -o yaml

  Note: The command oc get egressip returns only one egress IP address regardless of how many are configured. This is not a bug; it is a limitation of Kubernetes. As a workaround, you can pass the -o yaml or -o json flags to return all egress IP addresses in use.
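The example output was lost in conversion; the following is an illustrative sketch of what the command returns. The object name, node name, and addresses are hypothetical:

```yaml
apiVersion: v1
kind: List
items:
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    name: egressip-sample      # hypothetical name
  spec:
    egressIPs:
    - 192.168.126.101          # illustrative address
    namespaceSelector:
      matchLabels:
        env: qa
  status:
    items:
    - egressIP: 192.168.126.101
      node: node1              # hypothetical node name
```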
2.5. Labeling a node to host egress IP addresses
You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that Red Hat OpenShift Service on AWS classic architecture can assign one or more egress IP addresses to the node.
Prerequisites
- Install the ROSA CLI (rosa).
- Log in to the cluster as a cluster administrator.
Procedure
To label a node so that it can host one or more egress IP addresses, enter the following command:
  $ rosa edit machinepool <machinepool_name> --cluster=<cluster_name> --labels "k8s.ovn.org/egress-assignable="

  Important: This command replaces any existing node labels on your machine pool. Include all of the desired labels in the --labels field to ensure that your existing node labels persist.
Chapter 3. Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin
As a Red Hat OpenShift Service on AWS classic architecture cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the ROSA CLI.
Consider the following before initiating the migration:
- The cluster version must be 4.16.43 or later.
- The migration process cannot be interrupted.
- Migrating back to the SDN network plugin is not possible.
- Cluster nodes will be rebooted during migration.
- There will be no impact to workloads that are resilient to node disruptions.
- Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations.
3.1. Initiating migration using the ROSA CLI
You can only initiate the migration on clusters that are version 4.16.43 or later.
To initiate the migration, run the following command:
  $ rosa edit cluster -c <cluster_id> \
    --network-type OVNKubernetes \
    --ovn-internal-subnets <configuration>

where:
- Replace <cluster_id> with the ID of the cluster that you want to migrate to the OVN-Kubernetes network plugin.
- Optional: You can create key-value pairs to configure internal subnets by using any or all of the options join, masquerade, and transit, along with a single CIDR per option. For example, --ovn-internal-subnets="join=0.0.0.0/24,transit=0.0.0.0/24,masquerade=0.0.0.0/24".
You cannot include the optional flag --ovn-internal-subnets in the command unless you define a value for the flag --network-type.
Verification
To check the status of the migration, run the following command:
  $ rosa describe cluster -c <cluster_id>

  Replace <cluster_id> with the ID of the cluster to check the migration status.
Chapter 4. Configuring a cluster-wide proxy
If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during a Red Hat OpenShift Service on AWS classic architecture cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads.
Only cluster system egress traffic is proxied, including calls to the cloud provider API.
If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster.
4.1. Prerequisites for configuring a cluster-wide proxy
To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation.
4.1.1. General requirements
- You are the cluster owner.
- Your account has sufficient privileges.
- You have an existing Virtual Private Cloud (VPC) for your cluster.
- The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy must also be accessible from the VPC for the cluster and from the private subnets of the VPC.
You have added the following endpoints to your VPC endpoint:

- ec2.<aws_region>.amazonaws.com
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com

These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough.

Important: When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway.
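As a sketch of the Gateway-type S3 endpoint requirement above, and assuming the AWS CLI is configured for the target account and region, the endpoint can be created with a command along these lines; the VPC ID and route table ID are placeholders:

```shell
# Create a Gateway-type VPC endpoint for S3, as required by the
# cluster-wide proxy prerequisites described above.
aws ec2 create-vpc-endpoint \
  --vpc-id <vpc_id> \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.<aws_region>.s3 \
  --route-table-ids <route_table_id>
```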
4.1.2. Network requirements
If your proxy re-encrypts egress traffic, you must create exclusions to several domain and port combinations required by OpenShift.
Your proxy must exclude re-encrypting the following OpenShift URLs:
| Address | Protocol/Port | Function |
|---|---|---|
| observatorium-mst.api.openshift.com | https/443 | Required. Used for Managed OpenShift-specific telemetry. |
| sso.redhat.com | https/443 | The https://console.redhat.com/openshift site uses authentication from sso.redhat.com. |
4.2. Responsibilities for additional trust bundles
If you supply an additional trust bundle, you are responsible for the following requirements:
- Ensuring that the contents of the additional trust bundle are valid
- Ensuring that the certificates, including intermediary certificates, contained in the additional trust bundle have not expired
- Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle
- Updating the cluster configuration with the updated additional trust bundle
4.3. Configuring a proxy during installation
You can configure an HTTP or HTTPS proxy when you install a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI (rosa).
4.3.1. Configuring a proxy during installation using OpenShift Cluster Manager
If you are installing a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation.
Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.
For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster with customizations by using OpenShift Cluster Manager.
4.3.2. Configuring a proxy during installation using the CLI
If you are installing a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC), you can use the ROSA CLI (rosa) to enable a cluster-wide HTTP or HTTPS proxy during installation.
The following procedure provides details about the ROSA CLI (rosa) arguments that are used to configure a cluster-wide proxy during installation.
For general installation steps using the ROSA CLI, see Creating a cluster with customizations using the CLI.
Prerequisites
- You have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.
Procedure
Specify a proxy configuration when you create your cluster:
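The command example was lost in conversion; the following is a sketch of the cluster creation command with the proxy arguments, consistent with the argument descriptions that follow. All values are placeholders:

```shell
# Create a cluster with a cluster-wide proxy configuration.
# The trust bundle, proxy URLs, and no-proxy list are all placeholders.
rosa create cluster \
  --cluster-name <cluster_name> \
  --additional-trust-bundle-file <path_to_ca_bundle_file> \
  --http-proxy http://<username>:<password>@<ip>:<port> \
  --https-proxy https://<username>:<password>@<ip>:<port> \
  --no-proxy example.com
```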
where:
- The additional-trust-bundle-file, http-proxy, and https-proxy arguments are all optional.
- The additional-trust-bundle-file argument is a file path pointing to a bundle of PEM-encoded X.509 certificates that are all concatenated together. The additional-trust-bundle-file argument is required for users who use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration by using the http-proxy and https-proxy arguments.
- The http-proxy and https-proxy arguments must point to a valid URL.
- The no-proxy argument takes a comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor httpsProxy fields are set.
4.4. Configuring a proxy after installation
You can configure an HTTP or HTTPS proxy after you install a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI (rosa).
4.4.1. Configuring a proxy after installation using OpenShift Cluster Manager
You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing Red Hat OpenShift Service on AWS classic architecture cluster in a Virtual Private Cloud (VPC).
You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.
The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.
Prerequisites
- You have a Red Hat OpenShift Service on AWS classic architecture cluster.
- Your cluster is deployed in a VPC.
Procedure
- Navigate to OpenShift Cluster Manager and select your cluster.
- Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy.
On the Edit cluster-wide proxy page, provide your proxy configuration details:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
In the Additional trust bundle field, provide a PEM-encoded X.509 certificate bundle. If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration by using the http-proxy and https-proxy arguments.
- Click Confirm.
Verification
- Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected.
4.4.2. Configuring a proxy after installation using the CLI
You can use the ROSA CLI (rosa) to add a cluster-wide proxy configuration to an existing ROSA cluster in a Virtual Private Cloud (VPC).
You can also use rosa to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.
The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.
Prerequisites
- You have installed and configured the latest ROSA (rosa) and OpenShift (oc) CLIs on your installation host.
- You have a Red Hat OpenShift Service on AWS classic architecture cluster that is deployed in a VPC.
Procedure
Edit the cluster configuration to add or update the cluster-wide proxy details:
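A representative invocation takes the following form; the cluster name, credentials, IP addresses, file path, and no-proxy list are all placeholders:

```shell
$ rosa edit cluster -c <cluster_name> \
    --additional-trust-bundle-file </path/to/ca_bundle.pem> \
    --http-proxy http://<username>:<password>@<ip>:<port> \
    --https-proxy https://<username>:<password>@<ip>:<port> \
    --no-proxy example.com
```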
- The additional-trust-bundle-file, http-proxy, and https-proxy arguments are all optional.
- The additional-trust-bundle-file argument is a file path pointing to a bundle of PEM-encoded X.509 certificates, which are all concatenated together. The additional-trust-bundle-file argument is required if you use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.

  Important: Do not attempt to change the proxy or additional trust bundle configuration on the cluster directly. Any changes must be applied by using the ROSA CLI (rosa) or Red Hat OpenShift Cluster Manager. Any changes made directly to managed resources on the cluster are reverted automatically.
- The http-proxy and https-proxy arguments must point to a valid URL.
- The no-proxy argument takes a comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor httpsProxy fields are set.
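The additional trust bundle itself is simply PEM-encoded certificates concatenated into a single file. For example, assuming two hypothetical CA files:

```shell
# Two example CA certificate files (contents here are dummy placeholders).
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...corp-root\n-----END CERTIFICATE-----\n' > corp-root-ca.pem
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...proxy\n-----END CERTIFICATE-----\n' > proxy-ca.pem

# Concatenate them into a single bundle suitable for --additional-trust-bundle-file.
cat corp-root-ca.pem proxy-ca.pem > additional-trust-bundle.pem

# The bundle now contains both certificates.
grep -c 'BEGIN CERTIFICATE' additional-trust-bundle.pem   # prints 2
```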
Verification
List the status of the machine config pools and verify that they are updated:
$ oc get machineconfigpools

Example output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d9a03f612a432095dcde6dcf44597d90   True      False      False      3              3                   3                     0                      31h
worker   rendered-worker-f6827a4efe21e155c25c21b43c46f65e   True      False      False      6              6                   6                     0                      31h

Display the proxy configuration for your cluster and verify that the details are as expected:
$ oc get proxy cluster -o yaml

Example output
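The exact output depends on your configuration; a representative Proxy object (all values here are illustrative) looks similar to the following:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:8080
  httpsProxy: https://proxy.example.com:8080
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle
status:
  httpProxy: http://proxy.example.com:8080
  httpsProxy: https://proxy.example.com:8080
  # The cluster appends its own internal entries to the noProxy status field.
  noProxy: .cluster.local,.svc,10.0.0.0/16,example.com
```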
4.5. Removing a cluster-wide proxy
You can remove your cluster-wide proxy by using the ROSA CLI. After removing the proxy, you should also remove any trust bundles that were added to the cluster.
4.5.1. Removing the cluster-wide proxy using the CLI
You must use the ROSA CLI, rosa, to remove the proxy’s address from your cluster.
Prerequisites
- You must have cluster administrator privileges.
- You have installed the ROSA CLI (rosa).
Procedure
Use the rosa edit command to modify the proxy. You must pass empty strings to the --http-proxy and --https-proxy arguments to clear the proxy from the cluster:

$ rosa edit cluster -c <cluster_name> --http-proxy "" --https-proxy ""

Note: While your proxy might only use one of the proxy arguments, the empty fields are ignored, so passing empty strings to both the --http-proxy and --https-proxy arguments does not cause any issues.

Example output

I: Updated cluster <cluster_name>
Verification
You can verify that the proxy has been removed from the cluster by using the rosa describe command:

$ rosa describe cluster -c <cluster_name>

Before removal, the proxy details display in a Proxy section of the output. After removal, the Proxy section is absent.
4.5.2. Removing certificate authorities on a Red Hat OpenShift Service on AWS classic architecture cluster
You can remove certificate authorities (CAs) from your cluster with the ROSA CLI, rosa.
Prerequisites
- You must have cluster administrator privileges.
- You have installed the ROSA CLI (rosa).
- Your cluster has certificate authorities added.
Procedure
Use the rosa edit command to modify the CA trust bundle. You must pass an empty string to the --additional-trust-bundle-file argument to clear the trust bundle from the cluster:

$ rosa edit cluster -c <cluster_name> --additional-trust-bundle-file ""

Example output

I: Updated cluster <cluster_name>
Verification
You can verify that the trust bundle has been removed from the cluster by using the rosa describe command:

$ rosa describe cluster -c <cluster_name>

Before removal, the Additional trust bundle section appears in the output, with its value redacted for security purposes. After removing the trust bundle, the Additional trust bundle section is absent.
Chapter 5. Enabling multicast for a project
5.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
- At this time, multicast is best used for low-bandwidth coordination or service discovery, not as a high-bandwidth solution.
- By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of exempting multicast from network policies before enabling it.
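For example, even with a deny-all policy such as the following in the namespace, enabled multicast traffic still flows. This is a standard deny-all NetworkPolicy, shown for illustration; the namespace name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: <namespace>
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```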
Multicast traffic between Red Hat OpenShift Service on AWS classic architecture pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis.
5.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.

$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled=true

Tip: You can alternatively apply a namespace manifest that sets the k8s.ovn.org/multicast-enabled=true annotation.
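The equivalent manifest sets the annotation on the namespace; the namespace name is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/multicast-enabled: "true"
```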
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.

$ oc project <project>

Create a pod to act as a multicast receiver and a second pod to act as a multicast sender. Then, in a new terminal window or tab, start the multicast listener.
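The receiver and sender pods can be sketched as follows; the base image and the socat installation step are assumptions, and any image that provides socat and hostname works:

```yaml
# Receiver pod: later joins the multicast group and replies with its hostname.
apiVersion: v1
kind: Pod
metadata:
  name: mlistener
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi9   # assumed base image
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat hostname && sleep inf"]
      ports:
        - containerPort: 30102
          name: mlistener
          protocol: UDP
---
# Sender pod: later used to transmit the multicast datagram.
apiVersion: v1
kind: Pod
metadata:
  name: msender
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi9   # assumed base image
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat && sleep inf"]
```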
Get the IP address for the mlistener pod:

$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')

Start the multicast listener by entering the following command:

$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
Start the multicast transmitter.
Get the pod network IP address range:

$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')

To send a multicast message, enter the following command:

$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"
mlistener
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.