Chapter 3. Post-installation network configuration
After installing OpenShift Container Platform, you can further expand and customize your network to your requirements.
3.1. Configuring network policy with OpenShift SDN
Understand and work with network policy.
3.1.1. About network policy
In a cluster using a Kubernetes Container Network Interface (CNI) plug-in that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.5, OpenShift SDN supports using network policy in its default network isolation mode.
When using the OpenShift SDN cluster network provider, the following limitations apply regarding network policies:
- Egress network policy as specified by the egress field is not supported.
- IPBlock is supported by network policy, but without support for except clauses. If you create a policy with an IPBlock section that includes an except clause, the SDN pods log warnings and the entire IPBlock section of that policy is ignored.
- Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules.
By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.
The following example NetworkPolicy objects demonstrate support for different scenarios:
Deny all traffic:
To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:
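A minimal sketch of such a policy, assuming the standard networking.k8s.io/v1 API (the name deny-by-default is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}   # empty selector matches every pod in the project
  ingress: []       # no ingress rules, so all inbound traffic is denied
```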
Only allow connections from the OpenShift Container Platform Ingress Controller:
To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object.
Important: For the OVN-Kubernetes network provider plug-in, when the Ingress Controller is configured to use the HostNetwork endpoint publishing strategy, there is no supported way to apply network policy so that ingress traffic is allowed and all other traffic is denied.
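A sketch of such a policy, assuming the Ingress Controller namespace carries the network.openshift.io/policy-group: ingress label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress   # matches the Ingress Controller namespace
  podSelector: {}
  policyTypes:
  - Ingress
```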
If the Ingress Controller is configured with endpointPublishingStrategy: HostNetwork, then the Ingress Controller pod runs on the host network. When running on the host network, the traffic from the Ingress Controller is assigned the netid:0 Virtual Network ID (VNID). The netid for the namespace that is associated with the Ingress Operator is different, so the matchLabel in the allow-from-openshift-ingress network policy does not match traffic from the default Ingress Controller. With OpenShift SDN, the default namespace is assigned the netid:0 VNID and you can allow traffic from the default Ingress Controller by labeling your default namespace with network.openshift.io/policy-group: ingress.
Only accept connections from pods within a project:
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:
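A sketch of such a policy (the name allow-same-namespace is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}     # applies to all pods in the project
  ingress:
  - from:
    - podSelector: {} # allows traffic only from pods in the same namespace
```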
Only allow HTTP and HTTPS traffic based on pod labels:
To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:
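A sketch of such a policy (the name and label are illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend   # applies only to pods carrying this label
  ingress:
  - ports:             # no from clause, so any source may connect on these ports
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
```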
Accept connections by using both namespace and pod selectors:
To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:
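A sketch of such a policy; the namespaceSelector and podSelector in the same from element are combined with AND semantics (the names and labels are illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-pod-and-namespace-both
spec:
  podSelector:
    matchLabels:
      name: test-pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: project_name   # the source namespace must carry this label
      podSelector:
        matchLabels:
          name: test-pods         # and the source pod must carry this label
```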
NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.
For example, for the NetworkPolicy objects defined in previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by either policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
3.1.2. Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
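The manifest is reproduced here as a sketch; the numbered comments correspond to the callouts below (the policy name, labels, and port are illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107        # 1
spec:
  podSelector:             # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector:         # 3
        matchLabels:
          app: app
    ports:                 # 4
    - protocol: TCP
      port: 27017
```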
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies. The policy object can only select pods in the project in which the NetworkPolicy object is defined.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in any project.
4. A list of one or more destination ports on which to accept traffic.
3.1.3. Creating a network policy
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
Create a policy rule:
Create a <policy_name>.yaml file:
$ touch <policy_name>.yaml
where:
<policy_name> - Specifies the network policy file name.
Define a network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces:
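A minimal sketch of such a policy (the name deny-by-default is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}   # matches all pods in the namespace
  ingress: []       # denies all inbound traffic
```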
Allow ingress from all pods in the same namespace:
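A minimal sketch of such a policy (the name allow-same-namespace is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}   # any pod in the same namespace
```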
To create the network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name> - Specifies the network policy file name.
<namespace> - Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
networkpolicy "default-deny" created
networkpolicy "default-deny" createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow
3.1.4. Deleting a network policy
You can delete a network policy in a namespace.
If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster.
Prerequisites
- Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace where the network policy exists.
Procedure
To delete a NetworkPolicy object, enter the following command:
$ oc delete networkpolicy <policy_name> -n <namespace>
where:
<policy_name> - Specifies the name of the network policy.
<namespace> - Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
networkpolicy.networking.k8s.io/allow-same-namespace deleted
3.1.5. Viewing network policies
You can examine the network policies in a namespace.
If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace where the network policy exists.
Procedure
List network policies in a namespace:
To view NetworkPolicy objects defined in a namespace, enter the following command:
$ oc get networkpolicy
Optional: To examine a specific network policy, enter the following command:
$ oc describe networkpolicy <policy_name> -n <namespace>
where:
<policy_name> - Specifies the name of the network policy to inspect.
<namespace> - Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
For example:
$ oc describe networkpolicy allow-same-namespace
The output of the oc describe command shows the name, namespace, pod selector, and allowed ingress rules of the policy.
3.1.6. Configuring multitenant isolation by using network policy
You can configure your project to isolate it from pods and services in other project namespaces.
Prerequisites
- Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
Procedure
Create the following NetworkPolicy objects:
A policy named allow-from-openshift-ingress:
Important: For the OVN-Kubernetes network provider plug-in, when the Ingress Controller is configured to use the HostNetwork endpoint publishing strategy, there is no supported way to apply network policy so that ingress traffic is allowed and all other traffic is denied.
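A sketch of this policy, identical in form to the earlier allow-from-openshift-ingress example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
```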
A policy named allow-from-openshift-monitoring:
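A sketch of this policy, assuming the monitoring namespaces carry the network.openshift.io/policy-group: monitoring label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring   # matches the monitoring namespaces
  podSelector: {}
  policyTypes:
  - Ingress
```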
A policy named allow-same-namespace:
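A sketch of this policy, identical in form to the earlier same-namespace example:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```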
If the default Ingress Controller configuration has the spec.endpointPublishingStrategy: HostNetwork value set, you must apply a label to the default OpenShift Container Platform namespace to allow network traffic between the Ingress Controller and the project:
Determine if your default Ingress Controller uses the HostNetwork endpoint publishing strategy:
$ oc get --namespace openshift-ingress-operator ingresscontrollers/default \
    --output jsonpath='{.status.endpointPublishingStrategy.type}'
If the previous command reports the endpoint publishing strategy as HostNetwork, set a label on the default namespace:
$ oc label namespace default 'network.openshift.io/policy-group=ingress'
Confirm that the NetworkPolicy object exists in your current project by running the following command:
$ oc get networkpolicy <policy-name> -o yaml
In the following example, the allow-from-openshift-ingress NetworkPolicy object is displayed:
$ oc get -n project1 networkpolicy allow-from-openshift-ingress -o yaml
Example output
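A sketch of the expected output, which is the stored manifest of the policy (server-populated metadata fields trimmed):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
  namespace: project1
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
```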
3.1.7. Creating default network policies for a new project
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project.
3.1.8. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Procedure
- Log in as a user with cluster-admin privileges.
- Generate the default project template:
  $ oc adm create-bootstrap-project-template -o yaml > template.yaml
- Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
- The project template must be created in the openshift-config namespace. Load your modified template:
  $ oc create -f template.yaml -n openshift-config
- Edit the project configuration resource using the web console or CLI.
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Global Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template:
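A sketch of the resulting resource, assuming the template was loaded under the default project-request name:

```yaml
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request   # the name of your uploaded project template
```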
- After you save your changes, create a new project to verify that your changes were successfully applied.
3.1.8.1. Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project.
Prerequisites
- Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You must log in to the cluster with a user with cluster-admin privileges.
- You must have created a custom default project template for new projects.
Procedure
Edit the default template for a new project by running the following command:
$ oc edit template <project_template> -n openshift-config
Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.
In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.
In the following example, the objects parameter collection includes several NetworkPolicy objects:
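A sketch of such a collection, reusing policies from the earlier examples:

```yaml
objects:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector: {}
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    podSelector: {}
    policyTypes:
    - Ingress
# ...the remainder of the project template follows
```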
Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:
Create a new project:
$ oc new-project <project>
Replace <project> with the name for the project you are creating.
Confirm that the network policy objects in the new project template exist in the new project:
$ oc get networkpolicy
NAME                           POD-SELECTOR   AGE
allow-from-openshift-ingress   <none>         7s
allow-from-same-namespace      <none>         7s
3.2. Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
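On a cloud provider such as AWS, the output resembles the following sketch (zone identifiers and tags are placeholders):

```yaml
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}
```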
Note that the spec section contains both a private and a public zone.
Patch the DNS custom resource to remove the public zone:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Example output
dns.config.openshift.io/cluster patched
Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created.
Important: DNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
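After the patch, the spec contains only the private zone; a sketch:

```yaml
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
status: {}
```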
3.3. Enabling the cluster-wide proxy
The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec. For example:
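A sketch of the generated object with an empty spec:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""   # no trusted CA bundle configured
status: {}
```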
A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform oc CLI tool installed
Procedure
Create a ConfigMap that contains any additional CA certificates required for proxying HTTPS connections.
Note: You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates:
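A sketch of the ConfigMap, with a placeholder where the certificates go:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # The ConfigMap must be created in the openshift-config namespace;
  # this name is what the Proxy object's trustedCA field references.
  name: user-ca-bundle
  namespace: openshift-config
data:
  # Replace the placeholder with your PEM-encoded certificates.
  ca-bundle.crt: |
    <MY_PEM_ENCODED_CERTS>
```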
Create the ConfigMap from this file:
$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
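A sketch of the fields, with numbered comments keyed to the descriptions that follow (proxy addresses are placeholders):

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>    # 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port>   # 2
  noProxy: example.com                               # 3
  readinessEndpoints:                                # 4
  - http://www.google.com
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle                             # 5
```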
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2. A proxy URL to use for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
3. A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor httpsProxy fields are set.
4. One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5. A reference to the ConfigMap in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the ConfigMap must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- Save the file to apply the changes.
Important: The URL scheme must be http. The https scheme is currently not supported.
3.4. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the Network API in the operator.openshift.io API group.
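For reference, a sketch of what the cluster Network CR typically contains; the CIDR values shown are common installer defaults and vary per cluster:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:        # pod IP address allocation
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:        # service IP address allocation
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN   # the cluster network provider
```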
After cluster installation, you cannot modify the configuration for the cluster network provider.
3.5. Configuring ingress cluster traffic
OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster:
- If you have HTTP/HTTPS, use an Ingress Controller.
- If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller.
- Otherwise, use a load balancer, an external IP, or a node port.
| Method | Purpose |
|---|---|
| Use an Ingress Controller | Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. |
| Automatically assign an external IP by using a load balancer service | Allows traffic to non-standard ports through an IP address assigned from a pool. |
| Manually assign an external IP to a service | Allows traffic to non-standard ports through a specific IP address. |
| Configure a NodePort | Expose a service on all nodes in the cluster. |
3.6. Red Hat OpenShift Service Mesh supported configurations
The following are the only supported configurations for the Red Hat OpenShift Service Mesh:
- Red Hat OpenShift Container Platform version 4.x.
OpenShift Online and OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh.
- The deployment must be contained within a single OpenShift Container Platform cluster that is not federated.
- This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
- This release only supports configurations where all Service Mesh components are contained in the OpenShift cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.
- This release only supports configurations that do not integrate external services such as virtual machines.
3.6.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh
- The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.
3.6.2. Supported Mixer adapters
This release only supports the following Mixer adapter:
- 3scale Istio Adapter
3.6.3. Red Hat OpenShift Service Mesh installation activities
To install the Red Hat OpenShift Service Mesh Operator, you must first install these Operators:
- Elasticsearch - Based on the open source Elasticsearch project, enables you to configure and manage an Elasticsearch cluster for tracing and logging with Jaeger.
- Jaeger - Based on the open source Jaeger project, lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.
- Kiali - Based on the open source Kiali project, provides observability for your service mesh. By using Kiali you can view configurations, monitor traffic, and view and analyze traces in a single console.
After you install the Elasticsearch, Jaeger, and Kiali Operators, then you install the Red Hat OpenShift Service Mesh Operator. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components.
- Red Hat OpenShift Service Mesh - based on the open source Istio project, lets you connect, secure, control, and observe the microservices that make up your applications.
Next steps
- Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment.
3.7. Optimizing routing
The OpenShift Container Platform HAProxy router scales to optimize performance.
3.7.1. Baseline Ingress Controller (router) performance
The OpenShift Container Platform Ingress Controller, or router, is the Ingress point for all external traffic destined for OpenShift Container Platform services.
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
- HTTP keep-alive/close mode
- Route type
- TLS session resumption client support
- Number of concurrent connections per target route
- Number of target routes
- Back end server page size
- Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1 kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
| Encryption | LoadBalancerService | HostNetwork |
|---|---|---|
| none | 21515 | 29622 |
| edge | 16743 | 22913 |
| passthrough | 36786 | 53295 |
| re-encrypt | 21583 | 25198 |
In HTTP close (no keep-alive) scenarios:
| Encryption | LoadBalancerService | HostNetwork |
|---|---|---|
| none | 5719 | 8273 |
| edge | 2729 | 4069 |
| passthrough | 4121 | 5344 |
| re-encrypt | 2320 | 2941 |
The default Ingress Controller configuration with ROUTER_THREADS=4 was used, and two different endpoint publishing strategies (LoadBalancerService/HostNetwork) were tested. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB.
When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. The difference is the overhead introduced by the virtualization layer on public clouds, and it holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
| Number of applications | Application type |
|---|---|
| 5-10 | static file/web server or caching proxy |
| 100-1000 | applications generating dynamic content |
In general, HAProxy can support routes for 5 to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content.
Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier.
3.7.2. Ingress Controller (router) performance optimizations
OpenShift Container Platform no longer supports modifying Ingress Controller deployments by setting environment variables such as ROUTER_THREADS, ROUTER_DEFAULT_TUNNEL_TIMEOUT, ROUTER_DEFAULT_CLIENT_TIMEOUT, ROUTER_DEFAULT_SERVER_TIMEOUT, and RELOAD_INTERVAL.
You can modify the Ingress Controller deployment, but if the Ingress Operator is enabled, the configuration is overwritten.