Chapter 2. Configuring a private cluster
After you install an OpenShift Container Platform version 4.17 cluster, you can set some of its core components to be private.
2.1. About private clusters
By default, OpenShift Container Platform is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
2.1.1. DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
2.1.2. Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets.
The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters, because the Ingress Operator does not rotate its own signing certificate or the default certificates that it generates.
2.1.3. API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancers based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, retain both load balancers in a private cluster.
2.2. Configuring DNS records to be published in a private zone
For all OpenShift Container Platform clusters, whether public or private, DNS records are published in a public zone by default.
You can remove the public zone from the cluster DNS configuration to avoid exposing DNS records to the public. You might want to avoid exposing sensitive information, such as internal domain names, internal IP addresses, or the number of clusters at an organization, or you might simply have no need to publish records publicly. If all the clients that should be able to connect to services within the cluster use a private DNS service that has the DNS records from the private zone, then there is no need to have a public DNS record for the cluster.
After you deploy a cluster, you can modify its DNS to use only a private zone by modifying the DNS custom resource (CR). Modifying the DNS CR in this way means that any DNS records that are subsequently created are not published to public DNS servers, which keeps knowledge of the DNS records isolated to internal users. This can be done when you configure the cluster to be private, or if you never want DNS records to be publicly resolvable.
Alternatively, even in a private cluster, you might keep the public zone for DNS records because it allows clients to resolve DNS names for applications running on that cluster. For example, an organization can have machines that connect to the public internet and then establish VPN connections for certain private IP ranges in order to connect to private IP addresses. The DNS lookups from these machines use the public DNS to determine the private addresses of those services, and then connect to the private addresses over the VPN.
Procedure
Review the DNS CR for your cluster by running the following command and observing the output:

$ oc get dnses.config.openshift.io/cluster -o yaml
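Example output (an illustrative sketch; the base domain, infrastructure ID, zone ID, and tag format vary by cluster and platform):

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: <public_zone_id>
status: {}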
Note that the spec section contains both a private and a public zone.

Patch the DNS CR to remove the public zone by running the following command:

$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'

Example output

dns.config.openshift.io/cluster patched

The Ingress Operator consults the DNS CR definition when it creates DNS records for IngressController objects. If only private zones are specified, only private records are created.

Important

Existing DNS records are not modified when you remove the public zone. You must manually delete previously published public DNS records if you no longer want them to be published publicly.
Verification
Review the DNS CR for your cluster and confirm that the public zone was removed by running the following command and observing the output:

$ oc get dnses.config.openshift.io/cluster -o yaml
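In the output, the spec section should now contain only the privateZone entry; for example (an illustrative sketch, values vary by cluster):

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
status: {}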
2.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
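For example, you can replace the default IngressController object with a definition that sets the endpoint publishing scope to Internal. The following command is a sketch of that approach and assumes the LoadBalancerService endpoint publishing strategy:

$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF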
Example output

ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced

The public DNS entry is removed, and the private zone entry is updated.
2.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
AWS clusters: Remove the external load balancers:
Important

You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.
If your cluster uses a control plane machine set, delete the lines in the control plane machine set custom resource that configure your public or external load balancer:
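For example, on an AWS cluster the loadBalancers list in the providerSpec section of the ControlPlaneMachineSet custom resource contains entries for both load balancers; delete the lines for the external one (the names here are illustrative):

providerSpec:
  value:
    loadBalancers:
    - name: <infrastructure_id>-ext # Delete this line
      type: network                 # Delete this line
    - name: <infrastructure_id>-int
      type: network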
If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine.
From your terminal, list the cluster machines by running the following command:
$ oc get machine -n openshift-machine-api
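Example output (illustrative; machine names, phases, types, and regions vary by cluster):

NAME                              PHASE     TYPE        REGION      ZONE         AGE
mycluster-pww7q-master-0          Running   m4.xlarge   us-west-1   us-west-1a   3h
mycluster-pww7q-master-1          Running   m4.xlarge   us-west-1   us-west-1b   3h
mycluster-pww7q-master-2          Running   m4.xlarge   us-west-1   us-west-1c   3h
mycluster-pww7q-worker-0-bbdpv    Running   m4.large    us-west-1   us-west-1a   3h
...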
The control plane machines contain master in the name.

Remove the external load balancer from each control plane machine:
Edit a control plane machine object by running the following command:

$ oc edit machines -n openshift-machine-api <control_plane_name>

where <control_plane_name> is the name of the control plane machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example:
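The following is an illustrative sketch of the relevant part of an AWS machine object; the lines for the external load balancer under providerSpec are marked for deletion (names are illustrative):

providerSpec:
  value:
    loadBalancers:
    - name: <infrastructure_id>-ext # Delete this line
      type: network                 # Delete this line
    - name: <infrastructure_id>-int
      type: network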
- Save your changes and exit the object specification.
- Repeat this process for each of the control plane machines.
In the web portal or console for your cloud provider, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
- For Azure, delete the api-internal-v4 rule for the public load balancer.
- For Azure, configure the Ingress Controller endpoint publishing scope to Internal. For more information, see "Configuring the Ingress Controller endpoint publishing scope to Internal".
For the Azure public load balancer, if you configure the Ingress Controller endpoint publishing scope to
Internal
and there are no existing inbound rules in the public load balancer, you must create an outbound rule explicitly to provide outbound traffic for the backend address pool. For more information, see the Microsoft Azure documentation about adding outbound rules. -
Delete the
api.$clustername.$yourdomain
orapi.$clustername
DNS entry in the public zone.
2.5. Configuring a private storage endpoint on Azure
You can configure the Image Registry Operator to use private endpoints on Azure, which enables seamless configuration of private storage accounts when OpenShift Container Platform is deployed on private Azure clusters. This allows you to deploy the image registry without exposing public-facing storage endpoints.
Do not configure a private storage endpoint on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state.
You can configure the Image Registry Operator to use private storage endpoints on Azure in one of two ways:
- By configuring the Image Registry Operator to discover the VNet and subnet names
- With user-provided Azure Virtual Network (VNet) and subnet names
2.5.1. Limitations for configuring a private storage endpoint on Azure
The following limitations apply when configuring a private storage endpoint on Azure:
- When configuring the Image Registry Operator to use a private storage endpoint, public network access to the storage account is disabled. Consequently, pulling images from the registry outside of OpenShift Container Platform only works by setting disableRedirect: true in the registry Operator configuration. With redirect enabled, the registry redirects the client to pull images directly from the storage account, which no longer works because public network access is disabled. For more information, see "Disabling redirect when using a private storage endpoint on Azure".
- This operation cannot be undone by the Image Registry Operator.
2.5.2. Configuring a private storage endpoint on Azure by enabling the Image Registry Operator to discover VNet and subnet names
The following procedure shows you how to set up a private storage endpoint on Azure by configuring the Image Registry Operator to discover VNet and subnet names.
Prerequisites
- You have configured the image registry to run on Azure.
- Your network has been set up using the installer-provisioned infrastructure installation method.

For users with a custom network setup, see "Configuring a private storage endpoint on Azure with user-provided VNet and subnet names".
Procedure
Edit the Image Registry Operator config object and set networkAccess.type to Internal:

$ oc edit configs.imageregistry/cluster
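Set networkAccess.type to Internal under the Azure storage configuration. The following is a minimal sketch; unrelated fields are omitted:

spec:
  storage:
    azure:
      networkAccess:
        type: Internal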
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w

Optional: If the registry is exposed by a route and you are configuring your storage account to be private, you must disable redirect if you want pulls external to the cluster to continue to work. Enter the following command to disable redirect on the Image Registry Operator configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'

Note

When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc get imagestream -n openshift

Example output
NAME   IMAGE REPOSITORY                                                  TAGS     UPDATED
cli    image-registry.openshift-image-registry.svc:5000/openshift/cli   latest   8 hours ago
...
Enter debug mode by running the following command:

$ oc debug node/<node_name>
Run the suggested chroot command. For example:

$ chroot /host
Enter the following command to log in to your container registry:

$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output

Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:

$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
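Example output (a sketch; podman prints the actual blob, config, and image IDs for the pulled image):

Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools:latest...
Getting image source signatures
Copying blob <blob_digest> done
Copying config <config_digest> done
Writing manifest to image destination
Storing signatures
<image_id>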
2.5.3. Configuring a private storage endpoint on Azure with user-provided VNet and subnet names
Use the following procedure to configure a storage account that has public network access disabled and is exposed behind a private storage endpoint on Azure.
Prerequisites
- You have configured the image registry to run on Azure.
- You must know the VNet and subnet names used for your Azure environment.
- If your network was configured in a separate resource group in Azure, you must also know its name.
Procedure
Edit the Image Registry Operator config object and configure the private endpoint using your VNet and subnet names:

$ oc edit configs.imageregistry/cluster
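Set networkAccess.type to Internal and supply your VNet and subnet names under networkAccess.internal. The following is a minimal sketch; include networkResourceGroupName only if your network was configured in a separate resource group:

spec:
  storage:
    azure:
      networkAccess:
        type: Internal
        internal:
          networkResourceGroupName: <network_resource_group_name>
          subnetName: <subnet_name>
          vnetName: <vnet_name>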
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w

Note

When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc get imagestream -n openshift

Example output
NAME   IMAGE REPOSITORY                                                  TAGS     UPDATED
cli    image-registry.openshift-image-registry.svc:5000/openshift/cli   latest   8 hours ago
...
Enter debug mode by running the following command:

$ oc debug node/<node_name>
Run the suggested chroot command. For example:

$ chroot /host
Enter the following command to log in to your container registry:

$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output

Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:

$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
2.5.4. Optional: Disabling redirect when using a private storage endpoint on Azure
By default, redirect is enabled when using the image registry. Redirect allows offloading of traffic from the registry pods into the object storage, which makes pulls faster. When redirect is enabled and the storage account is private, users from outside of the cluster are unable to pull images from the registry.
In some cases, you might want to disable redirect so that users from outside of the cluster can pull images from the registry.
Use the following procedure to disable redirect.
Prerequisites
- You have configured the image registry to run on Azure.
- You have configured a route.
Procedure
Enter the following command to disable redirect on the image registry configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'
Verification
Fetch the registry service name by running the following command:
$ oc get imagestream -n openshift

Example output
NAME   IMAGE REPOSITORY                                            TAGS     UPDATED
cli    default-route-openshift-image-registry.<cluster_dns>/cli   latest   8 hours ago
...
Enter the following command to log in to your container registry:

$ podman login --tls-verify=false -u unused -p $(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>
Example output

Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:

$ podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns>/openshift/tools