Chapter 13. Tutorial: Assigning a consistent egress IP for external traffic

You can assign a consistent IP address to traffic that leaves your cluster. This is useful when external systems, such as security groups, require an IP-based configuration to meet security standards.

By default, Red Hat OpenShift Service on AWS (ROSA) uses the OVN-Kubernetes container network interface (CNI), which assigns egress traffic random IP addresses from a pool. This can make configuring IP-based security lockdowns either unpredictable or overly permissive.

See Configuring an egress IP address for more information.

Objectives

  • Learn how to configure a set of predictable IP addresses for egress cluster traffic.

Prerequisites

13.1. Setting your environment variables

  • Set your environment variables by running the following command:

    Note

    Replace the value of the ROSA_MACHINE_POOL_NAME variable to target a different machine pool.

    $ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
    $ export ROSA_MACHINE_POOL_NAME=worker
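
    Optionally, confirm that both variables are populated before you continue. An empty ROSA_CLUSTER_NAME usually means that you are not logged in to the cluster with the oc CLI:

    $ echo "cluster: ${ROSA_CLUSTER_NAME}, machine pool: ${ROSA_MACHINE_POOL_NAME}"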

13.2. Ensuring capacity

Each public cloud provider limits the number of IP addresses that can be assigned to a node.

  • Verify sufficient capacity by running the following command:

    $ oc get node -o json | \
        jq '.items[] |
            {
                "name": .metadata.name,
                "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
                "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4)
            }'

    Example output

    {
      "name": "ip-10-10-145-88.ec2.internal",
      "ips": [
        "10.10.145.88"
      ],
      "capacity": 14
    }
    {
      "name": "ip-10-10-154-175.ec2.internal",
      "ips": [
        "10.10.154.175"
      ],
      "capacity": 14
    }
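
    If you plan to assign several egress IPs, you can also total the per-node capacity. The following jq expression is a convenience sketch that reuses the same cloud.network.openshift.io/egress-ipconfig annotation as the previous command; the sum should be at least as large as the number of egress IPs that you intend to assign:

    $ oc get node -o json | \
        jq '[.items[].metadata.annotations."cloud.network.openshift.io/egress-ipconfig"
             | fromjson[] | .capacity.ipv4] | add'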

13.3. Creating the egress IP rules

  1. Before creating the egress IP rules, identify which egress IPs you will use.

    Note

    The egress IPs that you select must belong to the subnets in which the worker nodes are provisioned.

  2. Optional: Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service.

    To request explicit IP reservations, see the subnet CIDR reservations page in the AWS documentation. A sketch of the corresponding AWS CLI call is shown below.
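
    The egress-ipconfig annotation that you inspected earlier typically also reports the subnet range (ifaddr) of each node's interface, which can help you choose candidate egress IPs. The AWS CLI call below is only a sketch of an explicit reservation; the subnet ID and CIDR are hypothetical placeholder values that you must replace with your own:

    $ oc get node -o json | \
        jq '.items[] | {
              "name": .metadata.name,
              "subnets": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .ifaddr.ipv4)
            }'

    # Placeholder subnet ID and CIDR: replace with values from your own VPC.
    $ aws ec2 create-subnet-cidr-reservation \
        --subnet-id subnet-0123456789abcdef0 \
        --cidr 10.10.100.248/29 \
        --reservation-type explicit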

13.4. Assigning an egress IP to a namespace

  1. Create a new project by running the following command:

    $ oc new-project demo-egress-ns
  2. Create the egress rule for all pods within the namespace by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: demo-egress-ns
    spec:
      # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
      #       are deployed.
      egressIPs:
        - 10.10.100.253
        - 10.10.150.253
        - 10.10.200.253
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: demo-egress-ns
    EOF
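
    The namespaceSelector in this rule matches on the kubernetes.io/metadata.name label, which Kubernetes sets automatically on every namespace. If you want to confirm that the label is present, inspect the namespace directly:

    $ oc get namespace demo-egress-ns --show-labels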

13.5. Assigning an egress IP to a pod

  1. Create a new project by running the following command:

    $ oc new-project demo-egress-pod
  2. Create the egress rule for the pod by running the following command:

    Note

    spec.namespaceSelector is a mandatory field.

    $ cat <<EOF | oc apply -f -
    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: demo-egress-pod
    spec:
      # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
      #       are deployed.
      egressIPs:
        - 10.10.100.254
        - 10.10.150.254
        - 10.10.200.254
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: demo-egress-pod
      podSelector:
        matchLabels:
          run: demo-egress-pod
    EOF
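
    The podSelector in this rule matches the run=demo-egress-pod label, which oc run adds automatically when it creates a pod named demo-egress-pod, such as the test pod created later in this tutorial. After such a pod exists, you can list the pods that the rule applies to:

    $ oc get pods -n demo-egress-pod -l run=demo-egress-pod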

13.5.1. Labeling the nodes

  1. Obtain your pending egress IP assignments by running the following command:

    $ oc get egressips

    Example output

    NAME              EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
    demo-egress-ns    10.10.100.253
    demo-egress-pod   10.10.100.254

    The egress IP rule that you created applies only to nodes that have the k8s.ovn.org/egress-assignable label. Make sure that the label is applied only to a specific machine pool.

  2. Assign the label to your machine pool using the following command:

    Warning

    If you rely on node labels for your machine pool, this command will replace those labels. Be sure to input your desired labels into the --labels field to ensure your node labels remain.

    $ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
      --cluster="${ROSA_CLUSTER_NAME}" \
      --labels "k8s.ovn.org/egress-assignable="
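
    After the machine pool update finishes, you can confirm which nodes carry the label. Egress IPs are assigned only to nodes that appear in this list:

    $ oc get nodes -l k8s.ovn.org/egress-assignable=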

13.5.2. Reviewing the egress IPs

  • Review the egress IP assignments by running the following command:

    $ oc get egressips

    Example output

    NAME              EGRESSIPS       ASSIGNED NODE                   ASSIGNED EGRESSIPS
    demo-egress-ns    10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
    demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254

13.6. Verification

13.6.1. Deploying a sample application

To test the egress IP rules, create a service that accepts traffic only from the egress IP addresses that you specified. This simulates an external service that expects connections from a small, known set of IP addresses.

  1. Deploy an echoserver pod, which echoes details about each request it receives, by running the following command:

    $ oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4
  2. Expose the pod as a service and limit the ingress to the egress IP addresses you specified by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      namespace: default
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      selector:
        run: demo-service
      ports:
        - port: 80
          targetPort: 8080
      type: LoadBalancer
      externalTrafficPolicy: Local
      # NOTE: this limits the source IPs that are allowed to connect to our service.  It
      #       is being used as part of this demo, restricting connectivity to our egress
      #       IP addresses only.
      # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
      #       are deployed.
      loadBalancerSourceRanges:
        - 10.10.100.254/32
        - 10.10.150.254/32
        - 10.10.200.254/32
        - 10.10.100.253/32
        - 10.10.150.253/32
        - 10.10.200.253/32
    EOF
  3. Retrieve the load balancer hostname and save it as an environment variable by running the following command:

    $ export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')
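
    Provisioning the AWS load balancer can take several minutes, and the hostname field stays empty until the load balancer is ready. If the previous command returns an empty value, you can poll for the hostname with a sketch such as the following, which assumes an AWS ELB-style hostname, and then rerun the export command above:

    $ until oc get svc -n default demo-service \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | grep -q 'elb.amazonaws.com'; do
          echo "Waiting for the load balancer hostname..."
          sleep 30
      done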

13.6.2. Testing the namespace egress

  1. Start an interactive shell to test the namespace egress rule:

    $ oc run \
      demo-egress-ns \
      -it \
      --namespace=demo-egress-ns \
      --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
      --image=registry.access.redhat.com/ubi9/ubi -- \
      bash
  2. Send a request to the load balancer and ensure that you can successfully connect:

    $ curl -s http://$LOAD_BALANCER_HOSTNAME
  3. Check the output for a successful connection:

    Note

    The client_address is the internal IP address of the load balancer, not your egress IP. Because the service accepts connections only from the IP addresses listed in .spec.loadBalancerSourceRanges, a successful connection verifies that your egress IP is configured correctly.

    Example output

    CLIENT VALUES:
    client_address=10.10.207.247
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
    
    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001
    
    HEADERS RECEIVED:
    accept=*/*
    host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
    user-agent=curl/7.76.1
    BODY:
    -no body in request-

  4. Exit the pod by running the following command:

    $ exit

13.6.3. Testing the pod egress

  1. Start an interactive shell to test the pod egress rule:

    $ oc run \
      demo-egress-pod \
      -it \
      --namespace=demo-egress-pod \
      --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
      --image=registry.access.redhat.com/ubi9/ubi -- \
      bash
  2. Send a request to the load balancer by running the following command:

    $ curl -s http://$LOAD_BALANCER_HOSTNAME
  3. Check the output for a successful connection:

    Note

    The client_address is the internal IP address of the load balancer, not your egress IP. Because the service accepts connections only from the IP addresses listed in .spec.loadBalancerSourceRanges, a successful connection verifies that your egress IP is configured correctly.

    Example output

    CLIENT VALUES:
    client_address=10.10.207.247
    command=GET
    real path=/
    query=nil
    request_version=1.1
    request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
    
    SERVER VALUES:
    server_version=nginx: 1.10.0 - lua: 10001
    
    HEADERS RECEIVED:
    accept=*/*
    host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
    user-agent=curl/7.76.1
    BODY:
    -no body in request-

  4. Exit the pod by running the following command:

    $ exit

13.6.4. Optional: Testing blocked egress

  1. Optional: Test that the traffic is successfully blocked when the egress rules do not apply by running the following command:

    $ oc run \
      demo-egress-pod-fail \
      -it \
      --namespace=demo-egress-pod \
      --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
      --image=registry.access.redhat.com/ubi9/ubi -- \
      bash
  2. Send a request to the load balancer by running the following command:

    $ curl -s http://$LOAD_BALANCER_HOSTNAME
  3. If the command is unsuccessful, egress is successfully blocked. Because this pod does not have the run=demo-egress-pod label, the pod egress rule does not apply to it, so its traffic leaves the cluster from an IP address that is not in the service's loadBalancerSourceRanges and the load balancer rejects the connection.
  4. Exit the pod by running the following command:

    $ exit

13.7. Cleaning up your cluster

  1. Clean up your cluster by running the following commands:

    $ oc delete svc demo-service -n default; \
      oc delete pod demo-service -n default; \
      oc delete project demo-egress-ns; \
      oc delete project demo-egress-pod; \
      oc delete egressip demo-egress-ns; \
      oc delete egressip demo-egress-pod
  2. Clean up the assigned node labels by running the following command:

    Warning

    If you rely on node labels for your machine pool, this command replaces those labels. Input your desired labels into the --labels field to ensure your node labels remain.

    $ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
      --cluster="${ROSA_CLUSTER_NAME}" \
      --labels ""