Chapter 12. Tutorial: Assigning a consistent egress IP for external traffic
This tutorial teaches you how to configure a set of predictable IP addresses for egress cluster traffic. A consistent egress IP is useful when an external system, such as a security group, requires an IP-based configuration to meet security standards.
By default, Red Hat OpenShift Service on AWS uses the OVN-Kubernetes container network interface (CNI) to assign random IP addresses from a pool, which can make configuring security lockdowns unpredictable or overly permissive.
See Configuring an egress IP address for more information.
12.1. Prerequisites
- A Red Hat OpenShift Service on AWS cluster deployed with OVN-Kubernetes
- The OpenShift CLI (oc)
- The ROSA CLI (rosa)
- jq
12.2. Setting your environment variables
You may set environment variables to make it easier to reuse values.
Procedure
Set your environment variables by running the following command:
Note: Replace the value of the ROSA_MACHINE_POOL_NAME variable to target a different machine pool.

$ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export ROSA_MACHINE_POOL_NAME=worker
12.3. Ensuring capacity
The number of IP addresses assigned to each node is limited for each public cloud provider.
Procedure
Verify sufficient capacity by running the following command:
$ oc get node -o json | \
    jq '.items[] |
        {
            "name": .metadata.name,
            "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
            "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4)
        }'

Example output:

{
  "name": "ip-10-10-145-88.ec2.internal",
  "ips": [
    "10.10.145.88"
  ],
  "capacity": 14
}
{
  "name": "ip-10-10-154-175.ec2.internal",
  "ips": [
    "10.10.154.175"
  ],
  "capacity": 14
}
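As a minimal sketch (not part of the tutorial), you can compare a node's reported capacity against the number of egress IPs you plan to place on it; both values below are example assumptions:

```shell
# Sketch with example values: a node can only host egress IPs up to the
# IPv4 "capacity" reported in the command output above.
planned_egress_ips=3     # hypothetical: IPs listed in your EgressIP objects
node_capacity=14         # example "capacity" value from the output above

if [ "$planned_egress_ips" -le "$node_capacity" ]; then
    echo "capacity ok"
else
    echo "insufficient capacity: reduce egress IPs or add nodes"
fi
```

If a node reports less capacity than the number of egress IPs you intend to assign, add nodes or spread the addresses across more machine pools.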
12.4. Creating the egress IP rules
Before creating the egress IP rules, identify which egress IPs you will use.
The egress IPs that you select must be part of the subnets in which the worker nodes are provisioned.
Procedure
Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service.
To request explicit IP reservations, see the subnet CIDR reservations page in the AWS documentation.
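As a hypothetical sketch, an explicit subnet CIDR reservation can be created with the AWS CLI; the subnet ID below is a placeholder, and the address is one of the example egress IPs used later in this tutorial:

```shell
# Hypothetical sketch: reserve an egress IP as an explicit subnet CIDR
# reservation so that the VPC DHCP service never assigns it to another host.
# Replace subnet-0123456789abcdef0 with the ID of the subnet that contains
# the address.
aws ec2 create-subnet-cidr-reservation \
    --subnet-id subnet-0123456789abcdef0 \
    --cidr 10.10.100.253/32 \
    --reservation-type explicit
```

Repeat the reservation for each egress IP that you plan to use.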
12.5. Assigning an egress IP to a namespace
You can assign an egress IP to a namespace on your cluster by using the OpenShift CLI (oc) tool.
Procedure
Create a new project by running the following command:
$ oc new-project demo-egress-ns

Create the egress rule for all pods within the namespace by running the following command:

$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-ns
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.253
    - 10.10.150.253
    - 10.10.200.253
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-ns
EOF
12.6. Assigning an egress IP to a pod
Create an egress rule to assign an egress IP to a specified pod.
Procedure
Create a new project by running the following command:
$ oc new-project demo-egress-pod

Create the egress rule for the pod by running the following command:

Note: spec.namespaceSelector is a mandatory field.

$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-pod
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.254
    - 10.10.150.254
    - 10.10.200.254
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-pod
  podSelector:
    matchLabels:
      run: demo-egress-pod
EOF
12.6.1. Labeling the nodes
You can label your nodes by using the OpenShift CLI (oc) tool.
Procedure
Obtain your pending egress IP assignments by running the following command:
$ oc get egressips

Example output:

NAME              EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253
demo-egress-pod   10.10.100.254

The egress IP rule that you created applies only to nodes with the k8s.ovn.org/egress-assignable label. Make sure that the label is only on a specific machine pool.

Assign the label to your machine pool by running the following command:

Warning: If you rely on node labels for your machine pool, this command replaces those labels. Be sure to input your desired labels into the --labels field to ensure your node labels remain.

$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
    --cluster="${ROSA_CLUSTER_NAME}" \
    --labels "k8s.ovn.org/egress-assignable="
12.6.2. Reviewing the egress IPs
You can list all of the egress IPs by using the OpenShift CLI (oc) tool.
Procedure
Review the egress IP assignments by running the following command:
$ oc get egressips

Example output:

NAME              EGRESSIPS       ASSIGNED NODE                   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254
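As a quick sketch, you can flag any egress IP that has not yet been assigned to a node; the sample data below mirrors the shape of the `oc get egressips` output shown above:

```shell
# Sketch with sample data: an unassigned EgressIP row has fewer than four
# columns (the ASSIGNED NODE and ASSIGNED EGRESSIPS columns are empty).
sample_output='demo-egress-ns   10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254'

unassigned=$(printf '%s\n' "$sample_output" | awk 'NF < 4 { print $1 }')

if [ -z "$unassigned" ]; then
    echo "all egress IPs assigned"
else
    echo "unassigned: $unassigned"
fi
```

In a real cluster you would pipe `oc get egressips --no-headers` into the same awk filter; an egress IP stays unassigned until a node carries the k8s.ovn.org/egress-assignable label.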
12.6.3. Deploying a sample application
To test the egress IP rule, create a service that is restricted to the egress IP addresses that you specified. This simulates an external service that expects connections from only a small subset of IP addresses.
Procedure
Run the echoserver application to replicate a request:

$ oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4

Expose the pod as a service and limit the ingress to the egress IP addresses that you specified by running the following command:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  selector:
    run: demo-service
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local
  # NOTE: this limits the source IPs that are allowed to connect to our service. It
  #       is being used as part of this demo, restricting connectivity to our egress
  #       IP addresses only.
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  #       are deployed.
  loadBalancerSourceRanges:
    - 10.10.100.254/32
    - 10.10.150.254/32
    - 10.10.200.254/32
    - 10.10.100.253/32
    - 10.10.150.253/32
    - 10.10.200.253/32
EOF

Retrieve the load balancer hostname and save it as an environment variable by running the following command:

$ export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')
12.6.4. Testing the namespace egress
You can test the namespace egress by using the OpenShift CLI (oc) tool.
Procedure
Start an interactive shell to test the namespace egress rule:
$ oc run \
    demo-egress-ns \
    -it \
    --namespace=demo-egress-ns \
    --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
    --image=registry.access.redhat.com/ubi9/ubi -- \
    bash

Send a request to the load balancer and ensure that you can successfully connect:

$ curl -s http://$LOAD_BALANCER_HOSTNAME

Check the output for a successful connection:

Note: The client_address is the internal IP address of the load balancer, not your egress IP. Because the service only accepts connections from the ranges in .spec.loadBalancerSourceRanges, a successful connection verifies that your egress IPs are configured correctly.

Example output:

CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-

Exit the pod by running the following command:
$ exit
12.6.5. Testing the pod egress
You can test your pod’s egress by using the OpenShift CLI (oc) tool.
Procedure
Start an interactive shell to test the pod egress rule:
$ oc run \
    demo-egress-pod \
    -it \
    --namespace=demo-egress-pod \
    --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
    --image=registry.access.redhat.com/ubi9/ubi -- \
    bash

Send a request to the load balancer by running the following command:

$ curl -s http://$LOAD_BALANCER_HOSTNAME

Check the output for a successful connection:

Note: The client_address is the internal IP address of the load balancer, not your egress IP. Because the service only accepts connections from the ranges in .spec.loadBalancerSourceRanges, a successful connection verifies that your egress IPs are configured correctly.

Example output:

CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-

Exit the pod by running the following command:
$ exit
12.6.6. Optional: Testing blocked egress
You can test whether egress is blocked by using the OpenShift CLI (oc) tool.
Procedure
Test that the traffic is successfully blocked when the egress rules do not apply by running the following command:
$ oc run \
    demo-egress-pod-fail \
    -it \
    --namespace=demo-egress-pod \
    --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
    --image=registry.access.redhat.com/ubi9/ubi -- \
    bash

Send a request to the load balancer by running the following command:

$ curl -s http://$LOAD_BALANCER_HOSTNAME

If the command is unsuccessful, egress is successfully blocked.
Exit the pod by running the following command:
$ exit
12.7. Cleaning up your cluster
Before moving to a different tutorial, you can clean up your cluster environment with a few commands.
Procedure
Clean up your cluster by running the following commands:
$ oc delete svc demo-service -n default
$ oc delete pod demo-service -n default
$ oc delete project demo-egress-ns
$ oc delete project demo-egress-pod
$ oc delete egressip demo-egress-ns
$ oc delete egressip demo-egress-pod

Clean up the assigned node labels by running the following command:
Warning: If you rely on node labels for your machine pool, this command replaces those labels. Input your desired labels into the --labels field to ensure your node labels remain.

$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
    --cluster="${ROSA_CLUSTER_NAME}" \
    --labels ""