OVN-Kubernetes network plugin


Red Hat OpenShift Service on AWS classic architecture 4

In-depth configuration and troubleshooting for the OVN-Kubernetes network plugin in Red Hat OpenShift Service on AWS classic architecture

Red Hat OpenShift Documentation Team

Abstract

This document provides information on the architecture, configuration, and troubleshooting of the OVN-Kubernetes network plugin in Red Hat OpenShift Service on AWS classic architecture.

The Red Hat OpenShift Service on AWS classic architecture cluster uses a virtualized network for pod and service networks.

Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for Red Hat OpenShift Service on AWS classic architecture. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
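
For example, you can see the OVN-Kubernetes and OVS components that run on every node by listing the daemon sets and pods in the openshift-ovn-kubernetes namespace. This is a minimal sketch of the commands; the exact daemon set and pod names can vary between versions.

$ oc get daemonset -n openshift-ovn-kubernetes
$ oc get pods -n openshift-ovn-kubernetes -o wide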

Note

OVN-Kubernetes is the default networking solution for Red Hat OpenShift Service on AWS classic architecture and single-node OpenShift deployments.

OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to decide how packets travel through the network. For more information, see the Open Virtual Network website.

OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. It provides a means of remotely controlling the flow of network traffic on a network device, so network administrators can configure, manage, and monitor the flow of network traffic.
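
For example, to inspect the OpenFlow rules that OVN-Kubernetes programs into Open vSwitch on a node, you can open a debug shell on the node and dump the flows on the br-int integration bridge. This is a minimal sketch that assumes a placeholder node name; the output is verbose and varies by version.

$ oc debug node/<node_name> -- chroot /host ovs-ofctl -O OpenFlow13 dump-flows br-int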

OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing within logic flows that equate to OpenFlow flows. For example, if a pod sends a DHCP request to the DHCP server on the network, a logic flow rule helps OVN-Kubernetes handle the packet so that the server can respond with the gateway, DNS server, IP address, and other information.

OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:

  • Egress IPs
  • Firewalls
  • Hardware offloading
  • Hybrid networking
  • Internet Protocol Security (IPsec) encryption
  • IPv6
  • Multicast
  • Network policy and network policy logs (see the example after this list)
  • Routers
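
The following sketch illustrates how the network policy and network policy log features are consumed through standard Kubernetes objects. The NetworkPolicy allows ingress only from pods in the same namespace, and the namespace annotation enables network policy (ACL) logging; the namespace name, policy name, and logging levels are illustrative assumptions.

apiVersion: v1
kind: Namespace
metadata:
  name: verify-audit-logging
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "notice" }'
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
  namespace: verify-audit-logging
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}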

1.1. OVN-Kubernetes purpose

The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies:

  • OVN to manage network traffic flows.
  • Kubernetes network policy support and logs, including ingress and egress rules.
  • The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes.

The OVN-Kubernetes network plugin supports the following capabilities:

  • Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking.
  • Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading.
  • IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power®, IBM Z®, and Red Hat OpenStack Platform (RHOSP) platforms.
  • IPv6 single-stack networking on RHOSP and bare metal platforms.
  • IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform.
  • Egress firewall devices and egress IP addresses.
  • Egress router devices that operate in redirect mode.
  • IPsec encryption of intracluster communications.

Red Hat does not support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Configuring the primary network interface, including using the NMState Operator to configure bonding for the interface.
  • Configuring a sub-interface or additional network interface on a network device that uses the Open vSwitch (OVS) or an OVN-Kubernetes br-ex bridge network.
  • Creating additional virtual local area networks (VLANs) on the primary network interface.
  • Using the primary network interface, such as eth0 or bond0, that you created for a node during cluster installation to create additional secondary networks.

Red Hat does support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Creating additional VLANs from the base physical interface, such as eth0.100, where you configured the primary network interface as a VLAN for a node during cluster installation. This works because the Open vSwitch (OVS) bridge attaches to the initial VLAN sub-interface, such as eth0.100, leaving the base physical interface available for new configurations.
  • Creating an additional OVN secondary network with a localnet topology. This requires that you define the secondary network in a NodeNetworkConfigurationPolicy (NNCP) object, as shown in the sketch after this list. After you create the network, pods or virtual machines (VMs) can attach to the network. These secondary networks provide a dedicated connection to the physical network, which might or might not use VLAN tagging. You cannot access these networks from the host network of a node unless the host has the required network settings.
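
The following is a minimal sketch of such a configuration, assuming illustrative names (a localnet1 network, a mapping to the br-ex bridge, and the default namespace) and an installed NMState Operator. The NNCP object maps the localnet network to a bridge, and a NetworkAttachmentDefinition defines the secondary network that pods or VMs can attach to.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: localnet1-mapping
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1
        bridge: br-ex
        state: present
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet1
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "localnet1",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "default/localnet1"
    }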

The OVN-Kubernetes network plugin has the following limitations:

  • For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

    I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
    I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
    F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

    The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.

  • For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

    I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
    F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface

    The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.

  • If you set the ipv6.disable parameter to 1 in the kernelArgument section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of Red Hat OpenShift Service on AWS classic architecture fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1.

1.3. Session affinity

Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.

1.3.1. Stickiness timeout for session affinity

The OVN-Kubernetes network plugin for Red Hat OpenShift Service on AWS classic architecture calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client continuously contacts the service, the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
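
For example, a Service similar to the following sketch enables client IP based session affinity and sets an explicit stickiness timeout; the Service name, selector, ports, and the 10800 second timeout are illustrative assumptions.

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800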

Chapter 2. Configuring an egress IP address

As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.

By using the Red Hat OpenShift Service on AWS classic architecture egress IP address functionality, you can ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.

For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.

An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.

In ROSA with HCP clusters, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.

Important

The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported.

The following example illustrates the cloud.network.openshift.io/egress-ipconfig annotation from a node on AWS. The annotation is indented for readability.

Example cloud.network.openshift.io/egress-ipconfig annotation on AWS

cloud.network.openshift.io/egress-ipconfig: [
  {
    "interface":"eni-078d267045138e436",
    "ifaddr":{"ipv4":"10.0.128.0/18"},
    "capacity":{"ipv4":14,"ipv6":15}
  }
]

On AWS, the IP address capacity for use in your capacity calculation depends on the instance type configured. For more information, see IP addresses per network interface per instance type in the AWS documentation.
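
For example, you can check the capacity that a node advertises by reading the annotation directly; the node name is a placeholder.

$ oc describe node <node_name> | grep egress-ipconfig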

The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network.

Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses.

The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102. The traffic is balanced roughly equally between these two nodes.

Based on the diagram, the following manifest file defines namespaces:

Namespace objects

apiVersion: v1
kind: Namespace
metadata:
  name: namespace1
  labels:
    env: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace2
  labels:
    env: prod

Based on the diagram, the following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod. The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102.

EgressIP object

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressips-prod
spec:
  egressIPs:
  - 192.168.126.10
  - 192.168.126.102
  namespaceSelector:
    matchLabels:
      env: prod
status:
  items:
  - node: node1
    egressIP: 192.168.126.10
  - node: node3
    egressIP: 192.168.126.102

For the configuration in the previous example, Red Hat OpenShift Service on AWS classic architecture assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned.

2.2. EgressIP object

Review the following YAML files to understand how to configure an EgressIP object to meet your needs.

The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide and is not created in a namespace.

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: <name>
spec:
  egressIPs:
  - <ip_address>
  namespaceSelector:
    ...
  podSelector:
    ...

where:

<name>
The name for the EgressIP object.
<egressIPs>
An array of one or more IP addresses.
<namespaceSelector>
One or more selectors for the namespaces to associate the egress IP addresses with.
<podSelector>
Optional parameter. One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace.

The following YAML describes the stanza for the namespace selector:

Namespace selector stanza

namespaceSelector:
  matchLabels:
    <label_name>: <label_value>

where:

<namespaceSelector>
One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected.

The following YAML describes the optional stanza for the pod selector:

Pod selector stanza

podSelector:
  matchLabels:
    <label_name>: <label_value>

where:

<podSelector>
Optional parameter. One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Other pods in the namespace are not selected.

In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod:

Example EgressIP object

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egress-group1
spec:
  egressIPs:
  - 192.168.126.11
  - 192.168.126.102
  podSelector:
    matchLabels:
      app: web
  namespaceSelector:
    matchLabels:
      env: prod

In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development:

Example EgressIP object

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egress-group2
spec:
  egressIPs:
  - 192.168.127.30
  - 192.168.127.40
  namespaceSelector:
    matchExpressions:
    - key: environment
      operator: NotIn
      values:
      - development

To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied:

  • At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label.
  • An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace.
Important

If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, Red Hat OpenShift Service on AWS classic architecture might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label.

To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects.

When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label:

  • An egress IP address is never assigned to more than one node at a time.
  • An egress IP address is equally balanced between available nodes that can host the egress IP address.
  • If the spec.egressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply:

    • No node will ever host more than one of the specified IP addresses.
    • Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
  • If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions.

When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod.

Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2, either might be used for each TCP connection or UDP conversation.

2.4. Assigning an egress IP address to a namespace

You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster as a cluster administrator.
  • Configure at least one node to host an egress IP address.

Procedure

  1. Create an EgressIP object.

    1. Create a <egressips_name>.yaml file where <egressips_name> is the name of the object.
    2. In the file that you created, define an EgressIP object, as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egress-project1
      spec:
        egressIPs:
        - 192.168.127.10
        - 192.168.127.11
        namespaceSelector:
          matchLabels:
            env: qa
  2. To create the object, enter the following command:

    $ oc apply -f <egressips_name>.yaml

    where:

    <egressips_name>
    Replace <egressips_name> with the name of the object.

    Example output

    egressips.k8s.ovn.org/<egressips_name> created

  3. Optional: Store the <egressips_name>.yaml file so that you can make changes later.
  4. Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an EgressIP object defined in step 1, run the following command:

    $ oc label ns <namespace> env=qa

    where:

    <namespace>
    Replace <namespace> with the namespace that requires egress IP addresses.

Verification

  • To show all egress IP addresses that are in use in your cluster, enter the following command:

    $ oc get egressip -o yaml
    Note

    The command oc get egressip returns only one egress IP address regardless of how many are configured. This is not a bug but a limitation of Kubernetes. As a workaround, you can pass the -o yaml or -o json flag to return all egress IP addresses in use.

    Example output

    # ...
    spec:
      egressIPs:
      - 192.168.127.10
      - 192.168.127.11
    # ...

2.5. Labeling a node to host egress IP addresses

You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that Red Hat OpenShift Service on AWS classic architecture can assign one or more egress IP addresses to the node.

Prerequisites

  • Install the ROSA CLI (rosa).
  • Log in to the cluster as a cluster administrator.

Procedure

  • To label a node so that it can host one or more egress IP addresses, enter the following command:

    $ rosa edit machinepool <machinepool_name> --cluster=<cluster_name> --labels "k8s.ovn.org/egress-assignable="
    Important

    This command replaces any existing node labels on your machine pool. Include any existing labels in the --labels field to ensure that they persist, as shown in the following example.
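
    For example, if the machine pool already has a label such as team=dev, a command similar to the following sketch preserves it while adding the egress-assignable label; the machine pool name, cluster name, and existing label are placeholders.

    $ rosa edit machinepool <machinepool_name> --cluster=<cluster_name> --labels "team=dev,k8s.ovn.org/egress-assignable="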

Chapter 3. Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin

As a Red Hat OpenShift Service on AWS classic architecture cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status by using the ROSA CLI.

Consider the following before initiating the migration:

  • The cluster version must be 4.16.43 or later.
  • The migration process cannot be interrupted.
  • Migrating back to the SDN network plugin is not possible.
  • Cluster nodes are rebooted during migration.
  • Workloads that are resilient to node disruptions are not affected.
  • Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations.

3.1. Initiating migration using the ROSA CLI

Warning

You can only initiate migration on clusters that are version 4.16.43 or later.

To initiate the migration, run the following command:

$ rosa edit cluster -c <cluster_id> \
  --network-type OVNKubernetes \
  --ovn-internal-subnets <configuration>

where:

<cluster_id>
Replace <cluster_id> with the ID of the cluster that you want to migrate to the OVN-Kubernetes network plugin.
--ovn-internal-subnets <configuration>
Optional: You can create key-value pairs to configure internal subnets by using any or all of the join, masquerade, and transit options, with a single CIDR per option. For example, --ovn-internal-subnets="join=0.0.0.0/24,transit=0.0.0.0/24,masquerade=0.0.0.0/24".
Important

You cannot include the optional flag --ovn-internal-subnets in the command unless you define a value for the flag --network-type.

Verification

  • To check the status of the migration, run the following command:

    $ rosa describe cluster -c <cluster_id>

    where:

    <cluster_id>
    Replace <cluster_id> with the ID of the cluster to check the migration status.

Chapter 4. Configuring a cluster-wide proxy

If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during a Red Hat OpenShift Service on AWS classic architecture cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads.

Note

Only cluster system egress traffic is proxied, including calls to the cloud provider API.

If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster.

4.1. Prerequisites for configuring a cluster-wide proxy

To configure a cluster-wide proxy, you must meet the following requirements. These requirements apply whether you configure the proxy during installation or after installation.

4.1.1. General requirements

  • You are the cluster owner.
  • Your account has sufficient privileges.
  • You have an existing Virtual Private Cloud (VPC) for your cluster.
  • The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy must also be accessible from the VPC for the cluster and from the private subnets of the VPC.
  • You have added the following endpoints to your VPC (see the example after this list):

    • ec2.<aws_region>.amazonaws.com
    • elasticloadbalancing.<aws_region>.amazonaws.com
    • s3.<aws_region>.amazonaws.com

      These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough.

      Important

      When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway.
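
      As an illustration, you can create the gateway type S3 endpoint with the AWS CLI by using a command similar to the following sketch; the VPC ID, route table ID, and region are placeholders, and the EC2 and Elastic Load Balancing endpoints are created in the same way with --vpc-endpoint-type Interface and the appropriate subnet and security group options.

      $ aws ec2 create-vpc-endpoint \
          --vpc-id <vpc_id> \
          --vpc-endpoint-type Gateway \
          --service-name com.amazonaws.<aws_region>.s3 \
          --route-table-ids <route_table_id>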

4.1.2. Network requirements

If your proxy re-encrypts egress traffic, you must create exclusions to several domain and port combinations required by OpenShift.

Your proxy must exclude re-encrypting the following OpenShift URLs:

Table 4.1. URLs to exclude from egress traffic re-encryption

Address: observatorium-mst.api.openshift.com
Protocol/Port: https/443
Function: Required. Used for Managed OpenShift-specific telemetry.

Address: sso.redhat.com
Protocol/Port: https/443
Function: The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the cluster pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, and chargeback reporting.

4.2. Responsibilities for additional trust bundles

If you supply an additional trust bundle, you are responsible for the following requirements:

  • Ensuring that the contents of the additional trust bundle are valid
  • Ensuring that the certificates, including intermediate certificates, contained in the additional trust bundle have not expired
  • Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle (see the example after this list)
  • Updating the cluster configuration with the updated additional trust bundle
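
For example, you can list the expiry date of every certificate in an additional trust bundle with openssl; this is a minimal sketch that assumes the bundle is stored in a file named ca-bundle.pem.

$ openssl crl2pkcs7 -nocrl -certfile ca-bundle.pem | openssl pkcs7 -print_certs -text -noout | grep "Not After"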

4.3. Configuring a proxy during installation

You can configure an HTTP or HTTPS proxy when you install a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI (rosa).

If you are installing a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation.

Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.

For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster with customizations by using OpenShift Cluster Manager.

If you are installing a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC), you can use the ROSA CLI (rosa) to enable a cluster-wide HTTP or HTTPS proxy during installation.

The following procedure provides details about the ROSA CLI (rosa) arguments that are used to configure a cluster-wide proxy during installation.

For general installation steps using the ROSA CLI, see Creating a cluster with customizations using the CLI.

Prerequisites

  • You have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.

Procedure

  • Specify a proxy configuration when you create your cluster:

    $ rosa create cluster \
     <other_arguments_here> \
     --additional-trust-bundle-file <path_to_ca_bundle_file> \
     --http-proxy http://<username>:<password>@<ip>:<port> \
     --https-proxy https://<username>:<password>@<ip>:<port> \
     --no-proxy example.com

    where:

    --additional-trust-bundle-file, --http-proxy, --https-proxy
    These arguments are all optional.
    --additional-trust-bundle-file
    A file path pointing to a bundle of PEM-encoded X.509 certificates that are all concatenated together. The additional-trust-bundle-file argument is required for users who use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration by using the http-proxy and https-proxy arguments.
    --http-proxy, --https-proxy
    These arguments must point to a valid URL.
    --no-proxy
    A comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor httpsProxy field is set.

4.4. Configuring a proxy after installation

You can configure an HTTP or HTTPS proxy after you install a Red Hat OpenShift Service on AWS classic architecture cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI (rosa).

You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing Red Hat OpenShift Service on AWS classic architecture cluster in a Virtual Private Cloud (VPC).

You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.

Important

The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.

Prerequisites

  • You have a Red Hat OpenShift Service on AWS classic architecture cluster.
  • Your cluster is deployed in a VPC.

Procedure

  1. Navigate to OpenShift Cluster Manager and select your cluster.
  2. Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy.
  3. On the Edit cluster-wide proxy page, provide your proxy configuration details:

    1. Enter a value in at least one of the following fields:

      • Specify a valid HTTP proxy URL.
      • Specify a valid HTTPS proxy URL.
      • In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle.

        If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.

    2. Click Confirm.

Verification

  • Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected.

You can use the ROSA CLI (rosa) to add a cluster-wide proxy configuration to an existing ROSA cluster in a Virtual Private Cloud (VPC).

You can also use rosa to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.

Important

The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.

Prerequisites

  • You have installed and configured the latest ROSA (rosa) and OpenShift (oc) CLIs on your installation host.
  • You have a Red Hat OpenShift Service on AWS classic architecture cluster that is deployed in a VPC.

Procedure

  • Edit the cluster configuration to add or update the cluster-wide proxy details:

    $ rosa edit cluster \
     --cluster $CLUSTER_NAME \
     --additional-trust-bundle-file <path_to_ca_bundle_file> \
     --http-proxy http://<username>:<password>@<ip>:<port> \
     --https-proxy https://<username>:<password>@<ip>:<port> \
     --no-proxy example.com

    where:

    --additional-trust-bundle-file, --http-proxy, --https-proxy
    These arguments are all optional.
    --additional-trust-bundle-file
    A file path pointing to a bundle of PEM-encoded X.509 certificates that are all concatenated together. The additional-trust-bundle-file argument is required for users who use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration by using the http-proxy and https-proxy arguments.
    --http-proxy, --https-proxy
    These arguments must point to a valid URL.
    --no-proxy
    A comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

    If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor httpsProxy field is set.

    Important

    Do not attempt to change the proxy or additional trust bundle configuration on the cluster directly. Any changes must be applied by using the ROSA CLI (rosa) or Red Hat OpenShift Cluster Manager. Any changes made directly to managed resources on the cluster are reverted automatically.

Verification

  1. List the status of the machine config pools and verify that they are updated:

    $ oc get machineconfigpools

    Example output

    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-d9a03f612a432095dcde6dcf44597d90   True      False      False      3              3                   3                     0                      31h
    worker   rendered-worker-f6827a4efe21e155c25c21b43c46f65e   True      False      False      6              6                   6                     0                      31h

  2. Display the proxy configuration for your cluster and verify that the details are as expected:

    $ oc get proxy cluster -o yaml

    Example output

    apiVersion: config.openshift.io/v1
    kind: Proxy
    spec:
      httpProxy: http://proxy.host.domain:<port>
      httpsProxy: https://proxy.host.domain:<port>
      <...more...>
    status:
      httpProxy: http://proxy.host.domain:<port>
      httpsProxy: https://proxy.host.domain:<port>
      <...more...>

4.5. Removing a cluster-wide proxy

You can remove your cluster-wide proxy by using the ROSA CLI. After removing the proxy, you should also remove any trust bundles that were added to the cluster.

4.5.1. Removing the cluster-wide proxy using CLI

You must use the ROSA CLI, rosa, to remove the proxy’s address from your cluster.

Prerequisites

  • You must have cluster administrator privileges.
  • You have installed the ROSA CLI (rosa).

Procedure

  • Use the rosa edit command to modify the proxy. You must pass empty strings to the --http-proxy and --https-proxy arguments to clear the proxy from the cluster:

    $ rosa edit cluster -c <cluster_name> --http-proxy "" --https-proxy ""
    Note

    While your proxy might only use one of the proxy arguments, the empty fields are ignored, so passing empty strings to both the --http-proxy and --https-proxy arguments does not cause any issues.

    Example Output

    I: Updated cluster <cluster_name>

Verification

  • You can verify that the proxy has been removed from the cluster by using the rosa describe command:

    $ rosa describe cluster -c <cluster_name>

    Before removal, the proxy IP displays in a proxy section:

    Name:                       <cluster_name>
    ID:                         <cluster_internal_id>
    External ID:                <cluster_external_id>
    OpenShift Version:          4.0
    Channel Group:              stable
    DNS:                        <dns>
    AWS Account:                <aws_account_id>
    API URL:                    <api_url>
    Console URL:                <console_url>
    Region:                     us-east-1
    Multi-AZ:                   false
    Nodes:
     - Control plane:           3
     - Infra:                   2
     - Compute:                 2
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            <service_cidr>
     - Machine CIDR:            <machine_cidr>
     - Pod CIDR:                <pod_cidr>
     - Host Prefix:             <host_prefix>
    Proxy:
     - HTTPProxy:               <proxy_url>
    Additional trust bundle:    REDACTED

    After removing the proxy, the proxy section is removed:

    Name:                       <cluster_name>
    ID:                         <cluster_internal_id>
    External ID:                <cluster_external_id>
    OpenShift Version:          4.0
    Channel Group:              stable
    DNS:                        <dns>
    AWS Account:                <aws_account_id>
    API URL:                    <api_url>
    Console URL:                <console_url>
    Region:                     us-east-1
    Multi-AZ:                   false
    Nodes:
     - Control plane:           3
     - Infra:                   2
     - Compute:                 2
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            <service_cidr>
     - Machine CIDR:            <machine_cidr>
     - Pod CIDR:                <pod_cidr>
     - Host Prefix:             <host_prefix>
    Additional trust bundle:    REDACTED

4.5.2. Removing certificate authorities from a cluster

You can remove certificate authorities (CAs) from your cluster by using the ROSA CLI (rosa).

Prerequisites

  • You must have cluster administrator privileges.
  • You have installed the ROSA CLI (rosa).
  • Your cluster has certificate authorities added.

Procedure

  • Use the rosa edit command to modify the CA trust bundle. You must pass an empty string to the --additional-trust-bundle-file argument to clear the trust bundle from the cluster:

    $ rosa edit cluster -c <cluster_name> --additional-trust-bundle-file ""

    Example Output

    I: Updated cluster <cluster_name>

Verification

  • You can verify that the trust bundle has been removed from the cluster by using the rosa describe command:

    $ rosa describe cluster -c <cluster_name>

    Before removal, the Additional trust bundle section appears, redacting its value for security purposes:

    Name:                       <cluster_name>
    ID:                         <cluster_internal_id>
    External ID:                <cluster_external_id>
    OpenShift Version:          4.0
    Channel Group:              stable
    DNS:                        <dns>
    AWS Account:                <aws_account_id>
    API URL:                    <api_url>
    Console URL:                <console_url>
    Region:                     us-east-1
    Multi-AZ:                   false
    Nodes:
     - Control plane:           3
     - Infra:                   2
     - Compute:                 2
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            <service_cidr>
     - Machine CIDR:            <machine_cidr>
     - Pod CIDR:                <pod_cidr>
     - Host Prefix:             <host_prefix>
    Proxy:
     - HTTPProxy:               <proxy_url>
    Additional trust bundle:    REDACTED

    After removing the trust bundle, the Additional trust bundle section is removed:

    Name:                       <cluster_name>
    ID:                         <cluster_internal_id>
    External ID:                <cluster_external_id>
    OpenShift Version:          4.0
    Channel Group:              stable
    DNS:                        <dns>
    AWS Account:                <aws_account_id>
    API URL:                    <api_url>
    Console URL:                <console_url>
    Region:                     us-east-1
    Multi-AZ:                   false
    Nodes:
     - Control plane:           3
     - Infra:                   2
     - Compute:                 2
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            <service_cidr>
     - Machine CIDR:            <machine_cidr>
     - Pod CIDR:                <pod_cidr>
     - Host Prefix:             <host_prefix>
    Proxy:
     - HTTPProxy:               <proxy_url>

Chapter 5. Enabling multicast for a project

5.1. About multicast

With IP multicast, data is broadcast to many IP addresses simultaneously.

Important
  • At this time, multicast is best used for low-bandwidth coordination or service discovery and not as a high-bandwidth solution.
  • By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of exempting multicast from network policies before enabling it.

Multicast traffic between Red Hat OpenShift Service on AWS classic architecture pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis.

5.2. Enabling multicast between pods

You can enable multicast between pods for your project.

Prerequisites

  • Install the OpenShift CLI (oc).
  • You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role.

Procedure

  • Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.

    $ oc annotate namespace <namespace> \
        k8s.ovn.org/multicast-enabled=true
    Tip

    You can alternatively apply the following YAML to add the annotation:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: <namespace>
      annotations:
        k8s.ovn.org/multicast-enabled: "true"

Verification

To verify that multicast is enabled for a project, complete the following procedure:

  1. Change your current project to the project that you enabled multicast for. Replace <project> with the project name.

    $ oc project <project>
  2. Create a pod to act as a multicast receiver:

    $ cat <<EOF| oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: mlistener
      labels:
        app: multicast-verify
    spec:
      containers:
        - name: mlistener
          image: registry.access.redhat.com/ubi9
          command: ["/bin/sh", "-c"]
          args:
            ["dnf -y install socat hostname && sleep inf"]
          ports:
            - containerPort: 30102
              name: mlistener
              protocol: UDP
    EOF
  3. Create a pod to act as a multicast sender:

    $ cat <<EOF| oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: msender
      labels:
        app: multicast-verify
    spec:
      containers:
        - name: msender
          image: registry.access.redhat.com/ubi9
          command: ["/bin/sh", "-c"]
          args:
            ["dnf -y install socat && sleep inf"]
    EOF
  4. In a new terminal window or tab, start the multicast listener.

    1. Get the IP address for the Pod:

      $ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')
    2. Start the multicast listener by entering the following command:

      $ oc exec mlistener -i -t -- \
          socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
  5. Start the multicast transmitter.

    1. Get the pod network IP address range:

      $ CIDR=$(oc get Network.config.openshift.io cluster \
          -o jsonpath='{.status.clusterNetwork[0].cidr}')
    2. To send a multicast message, enter the following command:

      $ oc exec msender -i -t -- \
          /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"

      If multicast is working, the previous command returns the following output:

      mlistener

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
