Configuring network settings
General networking configuration processes in OpenShift Container Platform
Chapter 1. Configuring system controls and interface attributes using the tuning plugin
In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls by using the tuning Container Network Interface (CNI) meta plugin. The tuning CNI meta plugin operates in a chain with a main CNI plugin.
The main CNI plugin assigns the interface and passes this interface to the tuning CNI meta plugin at runtime. You can change some sysctls and several interface attributes such as promiscuous mode, all-multicast mode, MTU, and MAC address in the network namespace by using the tuning CNI meta plugin.
1.1. Configuring system controls by using the tuning CNI
The following procedure configures the tuning CNI to change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl. This example enables accepting and sending ICMP-redirected packets. In the tuning CNI meta plugin configuration, the interface name is represented by the IFNAME token and is replaced with the actual name of the interface at runtime.
Procedure
Create a network attachment definition, such as tuning-example.yaml, with the following content:
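A minimal sketch of such a network attachment definition follows; the bridge main plugin and the default namespace are assumptions that you can replace to match your environment. The numbered descriptions below refer, in order, to the attachment name, the namespace, the CNI specification version, the configuration name, the main plugin type, the meta plugin type, and the sysctl setting in this sketch.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tuningnad
  namespace: default
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "tuningnad",
    "plugins": [
      {
        "type": "bridge"
      },
      {
        "type": "tuning",
        "sysctl": {
          "net.ipv4.conf.IFNAME.accept_redirects": "1"
        }
      }
    ]
  }'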
1. Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace.
2. Specifies the namespace that the object is associated with.
3. Specifies the CNI specification version.
4. Specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
5. Specifies the name of the main CNI plugin to configure.
6. Specifies the name of the CNI meta plugin.
7. Specifies the sysctl to set. The interface name is represented by the IFNAME token and is replaced with the actual name of the interface at runtime.
Apply the YAML by running the following command:
$ oc apply -f tuning-example.yaml
Example output
networkattachmentdefinition.k8s.cni.cncf.io/tuningnad created
Create a pod such as examplepod.yaml with a network attachment definition similar to the following:
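A minimal sketch of such a pod follows; the image and the sleep command are placeholders, and the k8s.v1.cni.cncf.io/networks annotation must name the network attachment definition created earlier. The numbered comments correspond to the descriptions that follow.
apiVersion: v1
kind: Pod
metadata:
  name: tunepod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: tuningnad  # 1
spec:
  containers:
  - name: podexample
    image: registry.access.redhat.com/ubi9/ubi  # placeholder image
    command: ["/bin/bash", "-c", "sleep INF"]
    securityContext:
      runAsUser: 2000                   # 2
      runAsGroup: 3000                  # 3
      allowPrivilegeEscalation: false   # 4
      capabilities:                     # 5
        drop: ["ALL"]
  securityContext:
    runAsNonRoot: true                  # 6
    seccompProfile:
      type: RuntimeDefault              # 7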
1. Specify the name of the configured NetworkAttachmentDefinition.
2. runAsUser controls which user ID the container is run with.
3. runAsGroup controls which primary group ID the container is run with.
4. allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process.
5. capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod.
6. runAsNonRoot: true requires that the container run with a user with any UID other than 0.
7. RuntimeDefault enables the default seccomp profile for a pod or container workload.
Apply the YAML by running the following command:
$ oc apply -f examplepod.yaml
Verify that the pod is created by running the following command:
$ oc get pod
Example output
NAME      READY   STATUS    RESTARTS   AGE
tunepod   1/1     Running   0          47s
Log in to the pod by running the following command:
$ oc rsh tunepod
Verify the values of the configured sysctl flags. For example, find the value of net.ipv4.conf.net1.accept_redirects by running the following command:
sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects
Expected output
net.ipv4.conf.net1.accept_redirects = 1
1.2. Enabling all-multicast mode by using the tuning CNI
You can enable all-multicast mode by using the tuning Container Network Interface (CNI) meta plugin.
The following procedure describes how to configure the tuning CNI to enable the all-multicast mode.
Procedure
Create a network attachment definition, such as tuning-allmulti.yaml, with the following content:
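A minimal sketch follows; the bridge main plugin and the default namespace are assumptions that you can replace to match your environment. The numbered descriptions below refer, in order, to the attachment name, the namespace, the CNI specification version, the configuration name, the main plugin type, the meta plugin type, and the allmulti setting in this sketch.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: setallmulti
  namespace: default
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "setallmulti",
    "plugins": [
      {
        "type": "bridge"
      },
      {
        "type": "tuning",
        "allmulti": true
      }
    ]
  }'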
1. Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace.
2. Specifies the namespace that the object is associated with.
3. Specifies the CNI specification version.
4. Specifies the name for the configuration. Match the configuration name to the name value of the network attachment definition.
5. Specifies the name of the main CNI plugin to configure.
6. Specifies the name of the CNI meta plugin.
7. Changes the all-multicast mode of the interface. If enabled, all multicast packets on the network are received by the interface.
Apply the settings specified in the YAML file by running the following command:
$ oc apply -f tuning-allmulti.yaml
Example output
networkattachmentdefinition.k8s.cni.cncf.io/setallmulti created
Create a pod with a network attachment definition similar to that specified in the following examplepod.yaml sample file:
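A minimal sketch of such a pod follows; the image and the sleep command are placeholders, and the annotation must name the setallmulti network attachment definition created earlier. The numbered comments correspond to the descriptions that follow.
apiVersion: v1
kind: Pod
metadata:
  name: allmultipod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: setallmulti  # 1
spec:
  containers:
  - name: podexample
    image: registry.access.redhat.com/ubi9/ubi  # placeholder image
    command: ["/bin/bash", "-c", "sleep INF"]
    securityContext:
      runAsUser: 2000                   # 2
      runAsGroup: 3000                  # 3
      allowPrivilegeEscalation: false   # 4
      capabilities:                     # 5
        drop: ["ALL"]
  securityContext:
    runAsNonRoot: true                  # 6
    seccompProfile:
      type: RuntimeDefault              # 7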
1. Specifies the name of the configured NetworkAttachmentDefinition.
2. Specifies the user ID the container is run with.
3. Specifies which primary group ID the container is run with.
4. Specifies if a pod can request privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process.
5. Specifies the container capabilities. The drop: ["ALL"] statement indicates that all Linux capabilities are dropped from the pod, providing a more restrictive security profile.
6. Specifies that the container will run with a user with any UID other than 0.
7. Specifies the container's seccomp profile. In this case, the type is set to RuntimeDefault. Seccomp is a Linux kernel feature that restricts the system calls available to a process, enhancing security by minimizing the attack surface.
Apply the settings specified in the YAML file by running the following command:
$ oc apply -f examplepod.yaml
Verify that the pod is created by running the following command:
$ oc get pod
Example output
NAME          READY   STATUS    RESTARTS   AGE
allmultipod   1/1     Running   0          23s
Log in to the pod by running the following command:
$ oc rsh allmultipod
List all the interfaces associated with the pod by running the following command:
sh-4.4# ip link
Example output
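An illustrative sketch of the kind of output to expect is shown below; the interface indexes, MAC addresses, and MTU values are placeholders. Confirm that the additional interface, for example net1, reports the ALLMULTI flag.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default
    link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0
3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0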
Chapter 2. Configuring the node port service range
During cluster installation, you can configure the node port range to meet the requirements of your cluster. After cluster installation, only a cluster administrator can expand the range as a postinstallation task. If your cluster uses a large number of node ports, consider increasing the available port range according to the requirements of your cluster.
If you do not set a node port range during cluster installation, the default range of 30000-32768
applies to your cluster. In this situation, you can expand the range on either side, but you must preserve 30000-32768
within your new port range.
Red Hat has not performed testing outside the default port range of 30000-32768
. For ranges outside the default port range, ensure that you test to verify the expanding node port range does not impact your cluster. In particular, ensure that there is:
- No overlap with any ports already in use by host processes
- No overlap with any ports already in use by pods that are configured with host networking
If you expanded the range and a port allocation issue occurs, create a new cluster and set the required range for it.
If you expand the node port range and OpenShift CLI (oc
) stops working because of a port conflict with the OpenShift Container Platform API server, you must create a new cluster.
2.1. Expanding the node port range
You can expand the node port range for your cluster. After you install your OpenShift Container Platform cluster, you cannot shrink the node port range on either side of the currently configured range.
Red Hat has not performed testing outside the default port range of 30000-32768
. For ranges outside the default port range, ensure that you test to verify that expanding your node port range does not impact your cluster. If you expanded the range and a port allocation issue occurs, create a new cluster and set the required range for it.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- You ensured that your cluster infrastructure allows access to the ports that exist in the extended range. For example, if you expand the node port range to 30000-32900, your firewall or packet filtering configuration must allow the inclusive port range of 30000-32900.
Procedure
To expand the range for the serviceNodePortRange parameter in the network.config.openshift.io object that your cluster uses to manage traffic for pods, enter the following command:
$ oc patch network.config.openshift.io cluster --type=merge -p \
  '{ "spec": { "serviceNodePortRange": "<port_range>" } }'
where:
<port_range>
    Specifies your expanded range, such as 30000-32900.
Tip: You can also apply the following YAML to update the node port range:
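A sketch of that YAML is shown below; the range value is an example and must match your expanded range:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  serviceNodePortRange: "30000-32900"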
Example output
network.config.openshift.io/cluster patched
Verification
To confirm that the updated configuration is active, enter the following command. The update can take several minutes to apply.
$ oc get configmaps -n openshift-kube-apiserver config \
  -o jsonpath="{.data['config\.yaml']}" | \
  grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]'
Example output
"service-node-port-range":["30000-32900"]
"service-node-port-range":["30000-32900"]
Chapter 3. Configuring the cluster network range
As a cluster administrator, you can expand the cluster network range after cluster installation. You might want to expand the cluster network range if you need more IP addresses for additional nodes.
For example, if you deployed a cluster and specified 10.128.0.0/19
as the cluster network range and a host prefix of 23
, you are limited to 16 nodes. You can expand that to 510 nodes by changing the CIDR mask on a cluster to /14
.
When expanding the cluster network address range, your cluster must use the OVN-Kubernetes network plugin. Other network plugins are not supported.
The following limitations apply when modifying the cluster network IP address range:
- The CIDR mask size specified must always be smaller than the currently configured CIDR mask size, because you can only increase IP space by adding more nodes to an installed cluster
- The host prefix cannot be modified
- Pods that are configured with an overridden default gateway must be recreated after the cluster network expands
3.1. Expanding the cluster network IP address range
You can expand the IP address range for the cluster network. Because this change requires rolling out a new Operator configuration across the cluster, it can take up to 30 minutes to take effect.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
- Ensure that the cluster uses the OVN-Kubernetes network plugin.
Procedure
To obtain the cluster network range and host prefix for your cluster, enter the following command:
$ oc get network.operator.openshift.io \
  -o jsonpath="{.items[0].spec.clusterNetwork}"
Example output
[{"cidr":"10.217.0.0/22","hostPrefix":23}]
[{"cidr":"10.217.0.0/22","hostPrefix":23}]
To expand the cluster network IP address range, enter the following command. Use the CIDR IP address range and host prefix returned from the output of the previous command.
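One form of the patch command is sketched below; the clusterNetwork entry mirrors the structure returned by the previous command, and the placeholders are explained after the command:
$ oc patch network.config.openshift.io cluster --type=merge -p \
  '{
    "spec": {
      "clusterNetwork": [ { "cidr": "<network>/<cidr>", "hostPrefix": <prefix> } ]
    }
  }'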
where:
<network>
    Specifies the network part of the cidr field that you obtained from the previous step. You cannot change this value.
<cidr>
    Specifies the network prefix length. For example, 14. Change this value to a smaller number than the value from the output in the previous step to expand the cluster network range.
<prefix>
    Specifies the current host prefix for your cluster. This value must be the same value for the hostPrefix field that you obtained from the previous step.
Example command
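For example, using the values from the earlier output, the following sketch widens the network from 10.217.0.0/22 to 10.217.0.0/14 while keeping the host prefix of 23:
$ oc patch network.config.openshift.io cluster --type=merge -p \
  '{
    "spec": {
      "clusterNetwork": [ { "cidr": "10.217.0.0/14", "hostPrefix": 23 } ]
    }
  }'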
Example output
network.config.openshift.io/cluster patched
To confirm that the configuration is active, enter the following command. It can take up to 30 minutes for this change to take effect.
$ oc get network.operator.openshift.io \
  -o jsonpath="{.items[0].spec.clusterNetwork}"
Example output
[{"cidr":"10.217.0.0/14","hostPrefix":23}]
[{"cidr":"10.217.0.0/14","hostPrefix":23}]
Chapter 4. Configuring IP failover
This topic describes configuring IP failover for pods and services on your OpenShift Container Platform cluster.
IP failover uses Keepalived to host a set of externally accessible Virtual IP (VIP) addresses on a set of hosts. Each VIP address is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available.
Every VIP in the set is serviced by a node selected from the set. If a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it.
The administrator must ensure that all of the VIP addresses meet the following requirements:
- Accessible on the configured hosts from outside the cluster.
- Not used for any other purpose within the cluster.
Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled.
Each VIP in the set might be served by a different node.
IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to 0
, this check is suppressed. The check script does the needed testing.
When a node running Keepalived passes the check script, the VIP on that node can enter the master
state based on its priority and the priority of the current master and as determined by the preemption strategy.
A cluster administrator can provide a script through the OPENSHIFT_HA_NOTIFY_SCRIPT
variable, and this script is called whenever the state of the VIP on the node changes. Keepalived uses the master
state when it is servicing the VIP, the backup
state when another node is servicing the VIP, or the fault
state when the check script fails. The notify script is called with the new state whenever the state changes.
You can create an IP failover deployment configuration on OpenShift Container Platform. The IP failover deployment configuration specifies the set of VIP addresses, and the set of nodes on which to service them. A cluster can have multiple IP failover deployment configurations, with each managing its own set of unique VIP addresses. Each node in the IP failover configuration runs an IP failover pod, and this pod runs Keepalived.
When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count, for both IP failover and the application pods, to avoid this mismatch.
While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a NodePort
. Setting up a NodePort
is a privileged operation.
When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port. When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the NodePort
in the service definition.
Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node.
When you use ExternalIP
, you can set up IP failover to have the same VIP range as the ExternalIP
range. You can also disable the monitoring port. In this case, all of the VIPs appear on the same node in the cluster. Any user can set up a service with an ExternalIP
and make it highly available.
The cluster supports a maximum of 254 VIPs.
4.1. IP failover environment variables
The following table contains the variables used to configure IP failover.
Variable Name | Default | Description
---|---|---
OPENSHIFT_HA_MONITOR_PORT | 80 | The IP failover pod tries to open a TCP connection to this port on each Virtual IP (VIP). If connection is established, the service is considered to be running. If this port is set to 0, the test always passes.
OPENSHIFT_HA_NETWORK_INTERFACE | eth0 | The interface name that IP failover uses to send Virtual Router Redundancy Protocol (VRRP) traffic. The default value is eth0. If your cluster uses the OVN-Kubernetes network plugin, set this value to br-ex.
OPENSHIFT_HA_REPLICA_COUNT | 2 | The number of replicas to create. This must match the spec.replicas value in the IP failover deployment configuration.
OPENSHIFT_HA_VIRTUAL_IPS | | The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9.
OPENSHIFT_HA_VRRP_ID_OFFSET | 10 | The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The allowed range is 0 through 255.
OPENSHIFT_HA_VIP_GROUPS | | The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIRTUAL_IPS variable.
OPENSHIFT_HA_IPTABLES_CHAIN | INPUT | The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created, and Keepalived operates in unicast mode.
OPENSHIFT_HA_CHECK_SCRIPT | | The full path name in the pod file system of a script that is periodically run to verify the application is operating.
OPENSHIFT_HA_CHECK_INTERVAL | 2 | The period, in seconds, that the check script is run.
OPENSHIFT_HA_NOTIFY_SCRIPT | | The full path name in the pod file system of a script that is run whenever the state changes.
OPENSHIFT_HA_PREEMPTION | preempt_delay 300 | The strategy for handling a new higher priority host. The nopreempt strategy does not move master from the lower priority host to the higher priority host.
4.2. Configuring IP failover in your cluster
As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployments in your cluster, where each one is independent of the others.
The IP failover deployment ensures that a failover pod runs on each of the nodes matching the constraints or the label used.
This pod runs Keepalived, which can monitor an endpoint and use Virtual Router Redundancy Protocol (VRRP) to fail over the virtual IP (VIP) from one node to another if the first node cannot reach the service or endpoint.
For production use, set a selector
that selects at least two nodes, and set replicas
equal to the number of selected nodes.
Prerequisites
- You are logged in to the cluster as a user with cluster-admin privileges.
- You created a pull secret.
Red Hat OpenStack Platform (RHOSP) only:
- You installed an RHOSP client (RHCOS documentation) on the target environment.
- You also downloaded the RHOSP openrc.sh rc file (RHCOS documentation).
Procedure
Create an IP failover service account:
$ oc create sa ipfailover
Update security context constraints (SCC) for hostNetwork:
$ oc adm policy add-scc-to-user privileged -z ipfailover
$ oc adm policy add-scc-to-user hostnetwork -z ipfailover
Red Hat OpenStack Platform (RHOSP) only: Complete the following steps to make a failover VIP address reachable on RHOSP ports.
Use the RHOSP CLI to show the default RHOSP API and VIP addresses in the allowed_address_pairs parameter of your RHOSP cluster:
$ openstack port show <cluster_name> -c allowed_address_pairs
Example output
*Field*                *Value*
allowed_address_pairs  ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb'
                       ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb'
Set a different VIP address for the IP failover deployment and make the address reachable on RHOSP ports by entering the following command in the RHOSP CLI. Do not set any default RHOSP API and VIP addresses as the failover VIP address for the IP failover deployment.
Example of adding the 1.1.1.1 failover IP address as an allowed address on RHOSP ports:
$ openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:16:3e:31:f9:cb
- Create a deployment YAML file to configure IP failover for your deployment. See "Example deployment YAML for IP failover configuration" in a later step.
Specify the following specification in the IP failover deployment so that you pass the failover VIP address to the OPENSHIFT_HA_VIRTUAL_IPS environment variable:
Example of adding the 1.1.1.1 VIP address to OPENSHIFT_HA_VIRTUAL_IPS
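A sketch of the relevant fragment of the container spec is shown below; the surrounding Deployment fields are omitted and the container name is illustrative:
spec:
  template:
    spec:
      containers:
      - name: openshift-ipfailover
        env:
        - name: OPENSHIFT_HA_VIRTUAL_IPS
          value: "1.1.1.1"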
Create a deployment YAML file to configure IP failover.
Note: For Red Hat OpenStack Platform (RHOSP), you do not need to re-create the deployment YAML file. You already created this file as part of the earlier instructions.
Example deployment YAML for IP failover configuration
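A sketch of such a Deployment follows. The container image reference and tag, the label values, the VIP range, the interface name, the script paths, and the pull secret name are illustrative placeholders rather than required values; the environment variable names and their meanings follow the numbered descriptions below (matching the numbered comments).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipfailover-keepalived            # 1
  labels:
    ipfailover: hello-openshift
spec:
  strategy:
    type: Recreate
  replicas: 2
  selector:
    matchLabels:
      ipfailover: hello-openshift
  template:
    metadata:
      labels:
        ipfailover: hello-openshift
    spec:
      serviceAccountName: ipfailover
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      containers:
      - name: openshift-ipfailover
        image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:<tag>
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
        - name: OPENSHIFT_HA_VIRTUAL_IPS     # 2
          value: "1.2.3.4-6,1.2.3.9"
        - name: OPENSHIFT_HA_VIP_GROUPS      # 3
          value: "10"
        - name: OPENSHIFT_HA_NETWORK_INTERFACE  # 4
          value: "ens3"
        - name: OPENSHIFT_HA_MONITOR_PORT    # 5
          value: "30060"
        - name: OPENSHIFT_HA_VRRP_ID_OFFSET  # 6
          value: "10"
        - name: OPENSHIFT_HA_REPLICA_COUNT   # 7
          value: "2"
        - name: OPENSHIFT_HA_IPTABLES_CHAIN  # 8
          value: "INPUT"
        - name: OPENSHIFT_HA_NOTIFY_SCRIPT   # 9
          value: "/etc/keepalive/mynotifyscript.sh"
        - name: OPENSHIFT_HA_CHECK_SCRIPT    # 10
          value: "/etc/keepalive/mycheckscript.sh"
        - name: OPENSHIFT_HA_PREEMPTION      # 11
          value: "preempt_delay 300"
        - name: OPENSHIFT_HA_CHECK_INTERVAL  # 12
          value: "2"
        volumeMounts:
        - name: config-volume
          mountPath: /etc/keepalive
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: mycustomcheck
          defaultMode: 493
      - name: lib-modules
        hostPath:
          path: /lib/modules
      imagePullSecrets:
      - name: openshift-pull-secret        # 13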
1. The name of the IP failover deployment.
2. The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9.
3. The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIP_GROUPS variable.
4. The interface name that IP failover uses to send VRRP traffic. By default, eth0 is used.
5. The IP failover pod tries to open a TCP connection to this port on each VIP. If connection is established, the service is considered to be running. If this port is set to 0, the test always passes. The default value is 80.
6. The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 10, and the allowed range is 0 through 255.
7. The number of replicas to create. This must match the spec.replicas value in the IP failover deployment configuration. The default value is 2.
8. The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created, and Keepalived operates in unicast mode. The default is INPUT.
9. The full path name in the pod file system of a script that is run whenever the state changes.
10. The full path name in the pod file system of a script that is periodically run to verify the application is operating.
11. The strategy for handling a new higher priority host. The default value is preempt_delay 300, which causes a Keepalived instance to take over a VIP after 5 minutes if a lower-priority master is holding the VIP.
12. The period, in seconds, that the check script is run. The default value is 2.
13. Create the pull secret before creating the deployment, otherwise you will get an error when creating the deployment.
4.3. Configuring check and notify scripts
Keepalived monitors the health of the application by periodically running an optional user-supplied check script. For example, the script can test a web server by issuing a request and verifying the response. As cluster administrator, you can provide an optional notify script, which is called whenever the state changes.
The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the /hosts
mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a ConfigMap
object.
The full path names of the check and notify scripts are added to the Keepalived configuration file, /etc/keepalived/keepalived.conf, which is loaded every time Keepalived starts. The scripts can be added to the pod with a ConfigMap object as described in the following methods.
Check script
When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is 0
.
Each IP failover pod manages a Keepalived daemon that manages one or more virtual IP (VIP) addresses on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node might be in master
, backup
, or fault
state.
If the check script returns non-zero, the node enters the backup
state, and any VIPs it holds are reassigned.
Notify script
Keepalived passes the following three parameters to the notify script:
- $1 - group or instance
- $2 - Name of the group or instance
- $3 - The new state: master, backup, or fault
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
Procedure
Create the desired script and create a ConfigMap object to hold it. The script has no input arguments and must return 0 for OK and 1 for fail.
The check script, mycheckscript.sh:
#!/bin/bash
    # Whatever tests are needed
    # E.g., send request and verify response
exit 0
Create the ConfigMap object:
$ oc create configmap mycustomcheck --from-file=mycheckscript.sh
Add the script to the pod. The defaultMode for the mounted ConfigMap object files must allow execution; you can set it by using oc commands or by editing the deployment configuration. A value of 0755 (493 decimal) is typical:
$ oc set env deploy/ipfailover-keepalived \
    OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh
$ oc set volume deploy/ipfailover-keepalived --add --overwrite \
    --name=config-volume \
    --mount-path=/etc/keepalive \
    --source='{"configMap": { "name": "mycustomcheck", "defaultMode": 493}}'
Note: The oc set env command is whitespace sensitive. There must be no whitespace on either side of the = sign.
Tip: You can alternatively edit the ipfailover-keepalived deployment configuration:
$ oc edit deploy ipfailover-keepalived
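A sketch of the relevant parts of the edited Deployment is shown below; unrelated fields are omitted and the container name is illustrative. The numbered comments correspond to the descriptions that follow.
spec:
  template:
    spec:
      containers:
      - name: openshift-ipfailover
        env:
        - name: OPENSHIFT_HA_CHECK_SCRIPT        # 1
          value: /etc/keepalive/mycheckscript.sh
        volumeMounts:                            # 2
        - mountPath: /etc/keepalive
          name: config-volume
      volumes:                                   # 3
      - name: config-volume
        configMap:
          name: mycustomcheck
          defaultMode: 493                       # 4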
1. In the spec.container.env field, add the OPENSHIFT_HA_CHECK_SCRIPT environment variable to point to the mounted script file.
2. Add the spec.container.volumeMounts field to create the mount point.
3. Add a new spec.volumes field to mention the config map.
4. This sets run permission on the files. When read back, it is displayed in decimal, 493.
Save the changes and exit the editor. This restarts ipfailover-keepalived.
4.4. Configuring VRRP preemption
When a Virtual IP (VIP) on a node leaves the fault
state by passing the check script, the VIP on the node enters the backup
state if it has lower priority than the VIP on the node that is currently in the master
state. The nopreempt
strategy does not move master
from the lower priority VIP on the host to the higher priority VIP on the host. With preempt_delay 300
, the default, Keepalived waits the specified 300 seconds and moves master
to the higher priority VIP on the host.
Procedure
To specify preemption, enter oc edit deploy ipfailover-keepalived to edit the router deployment configuration:
$ oc edit deploy ipfailover-keepalived
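A sketch of the relevant fragment is shown below; surrounding Deployment fields are omitted and the container name is illustrative:
spec:
  template:
    spec:
      containers:
      - name: openshift-ipfailover
        env:
        - name: OPENSHIFT_HA_PREEMPTION   # 1
          value: preempt_delay 300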
1. Set the OPENSHIFT_HA_PREEMPTION value:
   - preempt_delay 300: Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. This is the default value.
   - nopreempt: does not move master from the lower priority VIP on the host to the higher priority VIP on the host.
4.5. Deploying multiple IP failover instances
Each IP failover pod managed by the IP failover deployment configuration, 1
pod per node or replica, runs a Keepalived daemon. As more IP failover deployment configurations are configured, more pods are created and more daemons join into the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons and it determines which nodes service which virtual IPs (VIP).
Internally, Keepalived assigns a unique vrrp-id
to each VIP. The negotiation uses this set of vrrp-ids
; when a decision is made, the VIP corresponding to the winning vrrp-id
is serviced on the winning node.
Therefore, for every VIP defined in the IP failover deployment configuration, the IP failover pod must assign a corresponding vrrp-id
. This is done by starting at OPENSHIFT_HA_VRRP_ID_OFFSET
and sequentially assigning the vrrp-ids
to the list of VIPs. The vrrp-ids
can have values in the range 1..255
.
When there are multiple IP failover deployment configurations, you must specify OPENSHIFT_HA_VRRP_ID_OFFSET
so that there is room to increase the number of VIPs in the deployment configuration and none of the vrrp-id
ranges overlap.
4.6. Configuring IP failover for more than 254 addresses
IP failover management is limited to 254 groups of Virtual IP (VIP) addresses. By default OpenShift Container Platform assigns one IP address to each group. You can use the OPENSHIFT_HA_VIP_GROUPS
variable to change this so multiple IP addresses are in each group and define the number of VIP groups available for each Virtual Router Redundancy Protocol (VRRP) instance when configuring IP failover.
Grouping VIPs creates a wider range of allocation of VIPs per VRRP in the case of VRRP failover events, and is useful when all hosts in the cluster have access to a service locally. For example, when a service is being exposed with an ExternalIP
.
As a rule for failover, do not limit services, such as the router, to one specific host. Instead, services should be replicated to each host so that in the case of IP failover, the services do not have to be recreated on the new host.
If you are using OpenShift Container Platform health checks, the nature of IP failover and groups means that not all instances in the group are checked. For that reason, you must use Kubernetes health checks to ensure that services are live.
Prerequisites
- You are logged in to the cluster with a user with cluster-admin privileges.
Procedure
To change the number of IP addresses assigned to each group, change the value for the OPENSHIFT_HA_VIP_GROUPS variable, for example:
Example Deployment YAML for IP failover configuration
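A fragment of such a Deployment is sketched below; only the relevant environment variable is shown and the container name is illustrative:
spec:
  template:
    spec:
      containers:
      - name: openshift-ipfailover
        env:
        - name: OPENSHIFT_HA_VIP_GROUPS   # 1
          value: "3"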
1. If OPENSHIFT_HA_VIP_GROUPS is set to 3 in an environment with seven VIPs, it creates three groups, assigning three VIPs to the first group, and two VIPs to each of the two remaining groups.
If the number of groups set by OPENSHIFT_HA_VIP_GROUPS
is fewer than the number of IP addresses set to fail over, the group contains more than one IP address, and all of the addresses move as a single unit.
4.7. High availability for ExternalIP
In non-cloud clusters, IP failover and ExternalIP
to a service can be combined. The result is high availability services for users that create services using ExternalIP
.
The approach is to specify a spec.ExternalIP.autoAssignCIDRs range in the cluster network configuration, and then use the same range when creating the IP failover configuration.
Because IP failover supports a maximum of 255 VIPs for the entire cluster, the spec.ExternalIP.autoAssignCIDRs range must be /24 or smaller.
4.8. Removing IP failover
When IP failover is initially configured, the worker nodes in the cluster are modified with an iptables
rule that explicitly allows multicast packets on 224.0.0.18
for Keepalived. Because of the change to the nodes, removing IP failover requires running a job to remove the iptables
rule and removing the virtual IP addresses used by Keepalived.
Procedure
Optional: Identify and delete any check and notify scripts that are stored as config maps:
Identify whether any pods for IP failover use a config map as a volume:
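One way to do this is sketched below; the ipfailover label selector is an assumption, and the output layout differs from the sample output that follows, but the config map names that appear are the ones to note:
$ oc get pod -l ipfailover \
  -o jsonpath='{range .items[*]}{"Pod: "}{.metadata.name}{"\n"}{"  configMap volumes: "}{.spec.volumes[*].configMap.name}{"\n"}{end}'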
Example output
Namespace: default
Pod:       keepalived-worker-59df45db9c-2x9mn
Volumes that use config maps:
  volume:    config-volume
  configMap: mycustomcheck
If the preceding step provided the names of config maps that are used as volumes, delete the config maps:
$ oc delete configmap <configmap_name>
Identify an existing deployment for IP failover:
$ oc get deployment -l ipfailover
Example output
NAMESPACE   NAME         READY   UP-TO-DATE   AVAILABLE   AGE
default     ipfailover   2/2     2            2           105d
Delete the deployment:
$ oc delete deployment <ipfailover_deployment_name>
Remove the ipfailover service account:
$ oc delete sa ipfailover
Run a job that removes the iptables rule that was added when IP failover was initially configured:
Create a file such as remove-ipfailover-job.yaml with contents that are similar to the following example:
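A sketch of such a Job follows. The image reference and its tag are placeholders that must match the IP failover image used by your deployment, the node selector is an assumption, and the path of the remove-failover.sh cleanup script inside the image should be verified for your image version.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: remove-ipfailover-
  labels:
    app: remove-ipfailover
spec:
  template:
    metadata:
      name: remove-ipfailover
    spec:
      containers:
      - name: remove-ipfailover
        image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:<tag>
        command: ["/var/lib/ipfailover/keepalived/remove-failover.sh"]
      nodeSelector:
        kubernetes.io/hostname: <node_name>
      restartPolicy: Never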
Run the job:
$ oc create -f remove-ipfailover-job.yaml
Example output
job.batch/remove-ipfailover-2h8dm created
Verification
Confirm that the job removed the initial configuration for IP failover.
$ oc logs job/remove-ipfailover-2h8dm
Example output
remove-failover.sh: OpenShift IP Failover service terminating.
  - Removing ip_vs module ...
  - Cleaning up ...
  - Releasing VIPs  (interface eth0) ...
Chapter 5. Configuring the cluster-wide proxy
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml
file for new clusters.
After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy
parameter with the values of the networking.machineNetwork[].cidr
, networking.clusterNetwork[].cidr
, and networking.serviceNetwork[]
fields from your install-config.yaml
file that exists on the supported platform.
As a postinstallation task, you can change the networking.clusterNetwork[].cidr
value, but not the networking.machineNetwork[].cidr
and the networking.serviceNetwork[]
values. For more information, see "Configuring the cluster network range".
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy
parameter is also populated with the instance metadata endpoint, 169.254.169.254
.
Example of values added to the status segment of a Proxy object by RHCOS
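An illustrative sketch is shown below; the proxy URLs and domains are placeholders. In the status noProxy value, the 10.128.0.0/14 cluster network, the 10.0.0.0/16 machine network, the 172.30.0.0/16 service network, and the internal API server endpoint correspond to the numbered descriptions that follow.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle
status:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: .cluster.local,.svc,10.128.0.0/14,10.0.0.0/16,172.30.0.0/16,example.com,169.254.169.254,api-int.<cluster_name>.<base_domain>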
1. Specify the IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23.
2. Specify the IP address blocks for machines. The default value is 10.0.0.0/16.
3. Specify the IP address block for services. The default value is 172.30.0.0/16.
4. You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command.
If your installation type does not include setting the networking.machineNetwork[].cidr
field, you must include the machine IP addresses manually in the .status.noProxy
field to make sure that the traffic between nodes can bypass the proxy.
5.1. Prerequisites
Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy
parameter of the Proxy
object to bypass the proxy.
5.2. Enabling the cluster-wide proxy
The Proxy
object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy
object is still generated but it has a nil spec
. For example:
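A sketch of such a Proxy object is shown below; apart from an empty trustedCA reference, the spec carries no values:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status: {}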
Only the Proxy
object named cluster
is supported, and no additional proxies can be created.
A cluster administrator can configure the proxy for OpenShift Container Platform by modifying the cluster
Proxy
object.
After you enable the cluster-wide proxy capability for your cluster and you save the Proxy
object file, the Machine Config Operator (MCO) reboots all nodes in your cluster so that each node can access connections that exist outside of the cluster. You do not need to manually reboot these nodes.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform oc CLI tool installed
Procedure
Create a config map that contains any additional CA certificates required for proxying HTTPS connections.
Note: You can skip this step if the identity certificate of the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle.
Create a file called user-ca-bundle.yaml, and provide the values of your PEM-encoded certificates:
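A sketch of the file is shown below; the certificate content is a placeholder, and the ca-bundle.crt data key and the openshift-config namespace are what the trustedCA reference later in this procedure expects:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <MY_PEM_ENCODED_CERTIFICATE>
    -----END CERTIFICATE-----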
Create the config map from the user-ca-bundle.yaml file by entering the following command:
$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
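A sketch of the Proxy object spec is shown below; the URLs, domains, and readiness endpoints are placeholders. The numbered comments correspond to the descriptions that follow.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>   # 1
  httpsProxy: https://<username>:<password>@<proxy_host>:<port> # 2
  noProxy: example.com                                          # 3
  readinessEndpoints:                                           # 4
  - http://www.google.com
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle                                        # 5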
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2. A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies report an error if they are configured to use https but they only support http. This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you might need to configure the cluster to accept the CAs and certificates that the proxy uses.
3. A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying.
Note: Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses.
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
If your noProxy field needs to include a domain address, you must explicitly specify that FQDN, or prefix-matched subdomain, in the noProxy field. You cannot use the IP address or CIDR range that encapsulates the domain. This is because the cluster does not wait for DNS to return the IP address before assigning the route connection, and checks explicitly against the request being made. For example, if you have a CIDR block value, such as 10.0.0.0/24, for the noProxy field and the field attempts to access https://10.0.0.11, the addresses successfully match. However, attempting to access https://exampleserver.externaldomain.com, whose A record entry is 10.0.0.11, fails. An additional value of .externaldomain.com for your noProxy field is necessary.
If you scale up compute nodes that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.
This field is ignored if neither the httpProxy nor httpsProxy fields are set.
4. One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5. A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- Save the file to apply the changes.
5.3. Removing the cluster-wide proxy
The cluster
Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec
fields from the Proxy object.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform oc CLI tool installed
Procedure
Use the oc edit command to modify the proxy:
$ oc edit proxy/cluster
Remove all spec fields from the Proxy object. For example:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec: {}
- Save the file to apply the changes.
5.4. Verifying the cluster-wide proxy configuration
After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation.
Prerequisites
- You have cluster administrator permissions.
- You have the OpenShift Container Platform oc CLI tool installed.
Procedure
Check the proxy configuration status by using the oc command:
$ oc get proxy/cluster -o yaml
- Verify the proxy fields in the output to ensure that they match your configuration. Specifically, check the spec.httpProxy, spec.httpsProxy, spec.noProxy, and spec.trustedCA fields.
Inspect the status of the Proxy object:
$ oc get proxy/cluster -o jsonpath='{.status}'
Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully:
$ oc logs -n openshift-machine-config-operator $(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)
- Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary.
Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO):
$ oc logs -n openshift-cluster-version $(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name)
- Look for log entries that show that external requests have been routed through the proxy.
Chapter 6. Configuring a custom PKI
Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster.
You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime.
During installation, configure the cluster-wide proxy. You must define your privately signed CA certificates in the
install-config.yaml
file’sadditionalTrustBundle
setting.The installation program generates a ConfigMap that is named
user-ca-bundle
that contains the additional CA certificates you defined. The Cluster Network Operator then creates atrusted-ca-bundle
ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object’strustedCA
field.-
At runtime, modify the default Proxy object to include your privately signed CA certificates (part of cluster’s proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the
trustedCA
referencing the privately signed certificates' ConfigMap.
The installer configuration’s additionalTrustBundle
field and the proxy resource’s trustedCA
field are used to manage the cluster-wide trust bundle; additionalTrustBundle
is used at install time and the proxy’s trustedCA
is used at runtime.
The trustedCA
field is a reference to a ConfigMap
containing the custom certificate and key pair used by the cluster component.
6.1. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
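A sketch of the relevant install-config.yaml fields is shown below; the base domain, proxy URLs, and certificate content are placeholders, and other installation fields are omitted. The numbered comments correspond to the descriptions that follow.
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>   # 1
  httpsProxy: https://<username>:<password>@<proxy_host>:<port> # 2
  noProxy: example.com                                          # 3
additionalTrustBundle: |                                        # 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: Proxyonly                          # 5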
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2. A proxy URL to use for creating HTTPS connections outside the cluster.
3. A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4. If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5. Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
Only the Proxy
object named cluster
is supported, and no additional proxies can be created.
6.2. Enabling the cluster-wide proxy
The Proxy
object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy
object is still generated but it has a nil spec
. For example:
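A sketch of such a Proxy object is shown below; apart from an empty trustedCA reference, the spec carries no values:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status: {}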
Only the Proxy
object named cluster
is supported, and no additional proxies can be created.
A cluster administrator can configure the proxy for OpenShift Container Platform by modifying the cluster
Proxy
object.
After you enable the cluster-wide proxy capability for your cluster and you save the Proxy
object file, the Machine Config Operator (MCO) reboots all nodes in your cluster so that each node can access connections that exist outside of the cluster. You do not need to manually reboot these nodes.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform oc CLI tool installed
Procedure
Create a config map that contains any additional CA certificates required for proxying HTTPS connections.
Note: You can skip this step if the identity certificate of the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle.
Create a file called user-ca-bundle.yaml, and provide the values of your PEM-encoded certificates:
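A sketch of the file is shown below; the certificate content is a placeholder, and the ca-bundle.crt data key and the openshift-config namespace are what the trustedCA reference later in this procedure expects:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <MY_PEM_ENCODED_CERTIFICATE>
    -----END CERTIFICATE-----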
Create the config map from the user-ca-bundle.yaml file by entering the following command:
$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
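A sketch of the Proxy object spec is shown below; the URLs, domains, and readiness endpoints are placeholders. The numbered comments correspond to the descriptions that follow.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>   # 1
  httpsProxy: https://<username>:<password>@<proxy_host>:<port> # 2
  noProxy: example.com                                          # 3
  readinessEndpoints:                                           # 4
  - http://www.google.com
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle                                        # 5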
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2. A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies report an error if they are configured to use https but they only support http. This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you might need to configure the cluster to accept the CAs and certificates that the proxy uses.
3. A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying.
Note: Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses.
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
If your noProxy field needs to include a domain address, you must explicitly specify that FQDN, or prefix-matched subdomain, in the noProxy field. You cannot use the IP address or CIDR range that encapsulates the domain. This is because the cluster does not wait for DNS to return the IP address before assigning the route connection, and checks explicitly against the request being made. For example, if you have a CIDR block value, such as 10.0.0.0/24, for the noProxy field and the field attempts to access https://10.0.0.11, the addresses successfully match. However, attempting to access https://exampleserver.externaldomain.com, whose A record entry is 10.0.0.11, fails. An additional value of .externaldomain.com for your noProxy field is necessary.
If you scale up compute nodes that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.
This field is ignored if neither the httpProxy nor httpsProxy fields are set.
4. One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5. A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- Save the file to apply the changes.
6.3. Certificate injection using Operators
Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection.
After adding a config.openshift.io/inject-trusted-cabundle="true"
label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle
as data. You must use a separate config map to store service-ca.crt
by using the service.beta.openshift.io/inject-cabundle=true
annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true"
label and service.beta.openshift.io/inject-cabundle=true
annotation on the same config map can cause issues.
Operators request this injection by creating an empty ConfigMap with the following label:
config.openshift.io/inject-trusted-cabundle="true"
An example of the empty ConfigMap:
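A sketch of such an empty ConfigMap follows; the name ca-inject is a placeholder, and the label is what triggers the injection:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-inject   # 1
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
data: {}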
1. Specifies the empty ConfigMap name.
The Operator mounts this ConfigMap into the container’s local trust store.
Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle.
Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true
label.
The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example:
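A sketch of mounting the injected bundle into a container is shown below. The Deployment name, namespace, labels, container, and config map name are placeholders; the mount path is where RHEL-based images look for the extracted CA trust, and the ca-bundle.crt key is assumed to be where the merged bundle is written.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-example-custom-ca-deployment
  namespace: my-example-custom-ca-ns
spec:
  selector:
    matchLabels:
      app: my-example-custom-ca
  template:
    metadata:
      labels:
        app: my-example-custom-ca
    spec:
      containers:
      - name: my-container-that-needs-custom-ca
        image: registry.access.redhat.com/ubi9/ubi  # placeholder image
        command: ["/bin/bash", "-c", "sleep infinity"]
        volumeMounts:
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
          readOnly: true
      volumes:
      - name: trusted-ca
        configMap:
          name: ca-inject
          items:
          - key: ca-bundle.crt
            path: tls-ca-bundle.pem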
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.