Chapter 2. Configuring a private cluster
After you install an OpenShift Container Platform version 4.8 cluster, you can set some of its core components to be private.
2.1. About private clusters
By default, OpenShift Container Platform is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
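For example, on AWS you can mark a LoadBalancer Service as internal with a cloud provider annotation. The following is a minimal sketch, assuming an AWS cluster; the Service name, namespace, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-internal
  namespace: example
  annotations:
    # AWS-specific annotation: the cloud provider creates an internal
    # load balancer on private subnets instead of an internet-facing one.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443
```

Other cloud providers use equivalent but differently named annotations, so check the annotation reference for your platform.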
DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zones are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one.
API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancers based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you retain both load balancers in a private cluster.
2.2. Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:

$ oc get dnses.config.openshift.io/cluster -o yaml

Example output

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}

Note that the spec section contains both a private and a public zone.

Patch the DNS custom resource to remove the public zone:

$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'

Example output

dns.config.openshift.io/cluster patched

Because the Ingress Controller consults the DNS definition when it creates Ingress objects, only private records are created when you create or modify Ingress objects.

Important
DNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:

$ oc get dnses.config.openshift.io/cluster -o yaml

Example output

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
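The patch above relies on JSON merge-patch semantics (RFC 7386), in which a null value deletes the key from the target rather than setting it to null. The following is a minimal Python sketch of that behavior, using a simplified DNS spec dictionary that is illustrative only, not the real API object:

```python
def json_merge_patch(target, patch):
    """Apply a JSON merge patch (RFC 7386): dicts merge recursively,
    and a None (JSON null) value deletes the key from the target."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "remove this field"
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Simplified stand-in for the DNS spec shown in the example output.
dns_spec = {
    "baseDomain": "example.com",
    "privateZone": {"tags": {"Name": "mycluster-int"}},
    "publicZone": {"id": "Z2XXXXXXXXXXA4"},
}

patched = json_merge_patch(dns_spec, {"publicZone": None})
print(sorted(patched))  # ['baseDomain', 'privateZone']
```

This is why passing "publicZone": null with --type=merge removes the public zone entirely while leaving the private zone untouched.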
2.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF

Example output

ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced

The public DNS entry is removed, and the private zone entry is updated.
2.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for AWS or Azure, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
- For Azure, delete the api-internal rule for the load balancer.
- Delete the api.$clustername.$yourdomain DNS entry in the public zone.
Remove the external load balancers:
Important
You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.
From your terminal, list the cluster machines:
$ oc get machine -n openshift-machine-api

Example output

NAME                            STATE     TYPE       REGION      ZONE         AGE
lk4pj-master-0                  running   m4.xlarge  us-east-1   us-east-1a   17m
lk4pj-master-1                  running   m4.xlarge  us-east-1   us-east-1b   17m
lk4pj-master-2                  running   m4.xlarge  us-east-1   us-east-1a   17m
lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge  us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge  us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge  us-east-1   us-east-1b   15m

You modify the control plane machines, which contain master in the name, in the following step.

Remove the external load balancer from each control plane machine.
Edit a control plane Machine object to remove the reference to the external load balancer:

$ oc edit machines -n openshift-machine-api <master_name> 1

1  Specify the name of the control plane, or master, Machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification:

...
spec:
  providerSpec:
    value:
    ...
      loadBalancers:
      - name: lk4pj-ext 1
        type: network 2
      - name: lk4pj-int
        type: network

Repeat this process for each of the machines that contains master in the name.
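The manual edit above amounts to filtering the external entry out of the loadBalancers list while keeping the internal one. The following is a rough Python sketch of that transformation on a simplified providerSpec value; the names mirror the example output, and this is an illustration of the edit, not a supported automation of the procedure:

```python
def drop_external_load_balancers(provider_spec):
    """Return a copy of a simplified providerSpec value with the external
    load balancer entry removed, keeping the internal (-int) one."""
    spec = dict(provider_spec)
    spec["loadBalancers"] = [
        lb for lb in spec.get("loadBalancers", [])
        # The -ext entry is the external load balancer that the
        # procedure removes; the -int entry stays.
        if not lb["name"].endswith("-ext")
    ]
    return spec

# Simplified stand-in for the Machine object's providerSpec shown above.
provider_spec = {
    "loadBalancers": [
        {"name": "lk4pj-ext", "type": "network"},
        {"name": "lk4pj-int", "type": "network"},
    ],
}

trimmed = drop_external_load_balancers(provider_spec)
print([lb["name"] for lb in trimmed["loadBalancers"]])  # ['lk4pj-int']
```

After the equivalent edit is saved with oc edit, the machine API no longer attaches the control plane machine to the external load balancer.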