Chapter 5. Configuring the cluster-wide proxy
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters.
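For new clusters, the proxy settings in install-config.yaml use the same field names as the Proxy object. A minimal sketch of that stanza, with placeholder proxy values, might look like the following:

proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: example.com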
After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy parameter with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your install-config.yaml file that exists on the supported platform.
As a postinstallation task, you can change the networking.clusterNetwork[].cidr value, but not the networking.machineNetwork[].cidr and the networking.serviceNetwork[] values. For more information, see "Configuring the cluster network range".
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy parameter is also populated with the instance metadata endpoint, 169.254.169.254.
Example of values added to the status: segment of a Proxy object by RHCOS
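The exact values depend on your installation. A representative sketch, assuming the default network values and placeholder proxy credentials and cluster domain, might look like the following:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
status:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: .cluster.local,.svc,10.128.0.0/14,10.0.0.0/16,172.30.0.0/16,api-int.<cluster_name>.<base_domain>,localhost,127.0.0.1,169.254.169.254

In this sketch, 10.128.0.0/14 is the cluster network CIDR (callout 1 below), 10.0.0.0/16 is the machine network CIDR (callout 2), 172.30.0.0/16 is the service network CIDR (callout 3), and api-int.<cluster_name>.<base_domain> is the internal API server (callout 4).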
1. Specify IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23.
2. Specify the IP address blocks for machines. The default value is 10.0.0.0/16.
3. Specify the IP address block for services. The default value is 172.30.0.0/16.
4. You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command.
If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy.
5.1. Prerequisites
Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy parameter of the Proxy object to bypass the proxy.
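For example, to keep traffic to an internal image registry and an on-premise Git server off the proxy, you could add entries such as the following to the Proxy object. Both hostnames here are hypothetical:

spec:
  noProxy: registry.internal.example.com,git.internal.example.com,.apps.internal.example.com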
5.2. Enabling the cluster-wide proxy
The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it has a nil spec. For example:
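A minimal sketch of such an object, assuming only the generated defaults and an empty trusted CA reference:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status: {}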
Only the Proxy object named cluster is supported, and no additional proxies can be created.
A cluster administrator can configure the proxy for OpenShift Container Platform by modifying the cluster Proxy object.
After you enable the cluster-wide proxy capability for your cluster and you save the Proxy object file, the Machine Config Operator (MCO) reboots all nodes in your cluster so that each node can access connections that exist outside of the cluster. You do not need to manually reboot these nodes.
Prerequisites
- You have cluster administrator permissions.
- You installed the OpenShift Container Platform oc CLI tool.
Procedure
Create a config map that contains any additional CA certificates required for proxying HTTPS connections.
Note: You can skip this step if the identity certificate of the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle.
Create a file called user-ca-bundle.yaml, and provide the values of your PEM-encoded certificates:
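The following is a sketch, assuming the config map is named user-ca-bundle and uses a ca-bundle.crt data key, which is the name that the trustedCA field references later in this procedure. Replace the placeholder with your PEM-encoded certificates:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    <MY_PEM_ENCODED_CERTS>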
Create the config map from the user-ca-bundle.yaml file by entering the following command:

$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:

$ oc edit proxy/cluster

Configure the necessary fields for the proxy:
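A sketch of the relevant spec fields, with placeholder values; the trailing numbers correspond to the callouts that follow:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>    # 1
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>  # 2
  noProxy: example.com                                           # 3
  readinessEndpoints:
  - http://www.google.com                                        # 4
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle                                         # 5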
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2. A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies report an error if they are configured to use https but they only support http. This failure message might not propagate to the logs and can appear to be a network connection failure instead. If you use a proxy that listens for https connections from the cluster, you might need to configure the cluster to accept the CAs and certificates that the proxy uses.
3. A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying. A combined example is sketched after this list.
   Note: Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses.
   Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
   If your noProxy field needs to include a domain address, you must explicitly specify that FQDN, or prefix-matched subdomain, in the noProxy field. You cannot use the IP address or CIDR range that encapsulates the domain, because the cluster does not wait for DNS to return the IP address before assigning the route connection, and instead checks explicitly against the request being made. For example, if you have a CIDR block value, such as 10.0.0.0/24, for the noProxy field and the field attempts to access https://10.0.0.11, the addresses successfully match. However, attempting to access https://exampleserver.externaldomain.com, whose A record entry is 10.0.0.11, fails. An additional value of .externaldomain.com for your noProxy field is necessary.
   If you scale up compute nodes that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.
   This field is ignored if neither the httpProxy nor the httpsProxy field is set.
4. One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5. A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. The config map must already exist before you reference it here. This field is required unless the identity certificate of the proxy is signed by an authority from the RHCOS trust bundle.
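As an example that combines the rules from callout 3, a noProxy value covering the .externaldomain.com hosts and an additional machine network might look like the following. All values here are hypothetical:

spec:
  noProxy: .externaldomain.com,10.0.0.0/24,192.168.10.0/24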
- Save the file to apply the changes.
5.3. Removing the cluster-wide proxy
The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform oc CLI tool installed
Procedure
Use the oc edit command to modify the proxy:

$ oc edit proxy/cluster

Remove all spec fields from the Proxy object. For example:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec: {}

- Save the file to apply the changes.
5.4. Verifying the cluster-wide proxy configuration
After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation.
Prerequisites
- You have cluster administrator permissions.
- You have the OpenShift Container Platform oc CLI tool installed.
Procedure
Check the proxy configuration status by using the oc command:

$ oc get proxy/cluster -o yaml
- Verify the proxy fields in the output to ensure they match your configuration. Specifically, check the spec.httpProxy, spec.httpsProxy, spec.noProxy, and spec.trustedCA fields.

Inspect the status of the Proxy object:

$ oc get proxy/cluster -o jsonpath='{.status}'

Example output
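The exact content depends on your environment; an illustrative result with placeholder values might look like the following:

{"httpProxy":"http://<username>:<password>@<proxy_host>:<port>","httpsProxy":"https://<username>:<password>@<proxy_host>:<port>","noProxy":".cluster.local,.svc,10.128.0.0/14,10.0.0.0/16,172.30.0.0/16,localhost,127.0.0.1"}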
Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully:
$ oc logs -n openshift-machine-config-operator $(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)

- Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary.
Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO):
$ oc logs -n openshift-cluster-version $(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name)

- Look for log entries that show that external requests have been routed through the proxy.