Chapter 6. iPerf
Perform real-time network throughput measurements using iPerf3
This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
Overview
This tutorial demonstrates how to perform real-time network throughput measurements across Kubernetes clusters using the iperf3 tool. In this tutorial you:
- deploy iperf3 in three separate clusters
- run iperf3 client test instances
Prerequisites
- The kubectl command-line tool, version 1.15 or later
- Access to three clusters to observe performance. As an example, the three clusters might consist of:
- A private cloud cluster running on your local machine (private1)
- Two public cloud clusters running in public cloud providers (public1 and public2)
Procedure
- Clone the repo for this example.
- Install the Skupper command-line tool
- Configure separate console sessions
- Access your clusters
- Set up your namespaces
- Install Skupper in your namespaces
- Check the status of your namespaces
- Link your namespaces
- Deploy the iperf3 servers
- Expose iperf3 from each namespace
- Run benchmark tests across the clusters
Clone the repo for this example
Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository.
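For example, once you have identified the repository on the examples page, you can clone it and change into its directory. The URL and directory name below are placeholders; substitute the ones listed for this example:
git clone <example-repository-url>
cd <example-repository-directory>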
Install the Skupper command-line tool
The skupper command-line tool is the entrypoint for installing and configuring Skupper. You need to install the skupper command only once for each development environment.
See the installation documentation for details about installing the CLI. On Linux systems with dnf and the appropriate repository configured, use the following command:
sudo dnf install skupper-cli
For Windows and other installation options, see Installing Skupper.
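To confirm that the CLI is installed and on your path, you can print its version; the exact output varies by release:
skupper version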
Configure separate console sessions
Skupper is designed for use with multiple namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.
Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.
A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.
Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session.
Console for public1:
export KUBECONFIG=~/.kube/config-public1
Console for public2:
export KUBECONFIG=~/.kube/config-public2
Console for private1:
export KUBECONFIG=~/.kube/config-private1
Access your clusters
The procedure for accessing a Kubernetes cluster varies by provider. Find the instructions for your chosen provider and use them to authenticate and configure access for each console session.
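Once each session is authenticated, a quick sanity check is to confirm that each console points at the intended cluster, for example:
kubectl cluster-info
kubectl get nodes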
Set up your namespaces
Use kubectl create namespace to create the namespaces you wish to use (or use existing namespaces). Use kubectl config set-context to set the current namespace for each session.
Console for public1:
kubectl create namespace public1
kubectl config set-context --current --namespace public1
Console for public2:
kubectl create namespace public2
kubectl config set-context --current --namespace public2
Console for private1:
kubectl create namespace private1
kubectl config set-context --current --namespace private1
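If you want to confirm which namespace a session is now targeting, you can inspect the current context, for example:
kubectl config view --minify --output 'jsonpath={..namespace}'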
Install Skupper in your namespaces
The skupper init command installs the Skupper router and controller in the current namespace. Run the skupper init command in each namespace.
Console for public1:
skupper init --enable-console --enable-flow-collector
Console for public2:
skupper init
Console for private1:
skupper init
Sample output:
$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace '<namespace>'. Use 'skupper status' to get more information.
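You can also verify the installation by listing the pods in each namespace. A Skupper router pod and a service-controller pod typically appear, along with additional components depending on the options you passed to skupper init; exact pod names vary by Skupper version:
kubectl get pods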
Check the status of your namespaces
Use skupper status in each console to check that Skupper is installed.
Console for public1:
skupper status
Console for public2:
skupper status
Console for private1:
skupper status
Sample output:
Skupper is enabled for namespace "<namespace>" in interior mode. It is connected to 1 other site. It has 1 exposed service.
The site console url is: <console-url>
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
As you move through the steps below, you can use skupper status at any time to check your progress.
Link your namespaces
Creating a link requires the use of two skupper commands in conjunction, skupper token create and skupper link create.
The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote namespace, the skupper link create command uses the token to create a link to the namespace that generated it.
Note
The link token is truly a secret. Anyone who has the token can link to your namespace. Make sure that only those you trust have access to it.
First, use skupper token create in one namespace to generate the token. Then, use skupper link create in the other to create a link.
Console for public1:
skupper token create ~/private1-to-public1-token.yaml
skupper token create ~/public2-to-public1-token.yaml
Console for public2:
skupper token create ~/private1-to-public2-token.yaml
skupper link create ~/public2-to-public1-token.yaml
skupper link status --wait 60
Console for private1:
skupper link create ~/private1-to-public1-token.yaml
skupper link create ~/private1-to-public2-token.yaml
skupper link status --wait 60
If your console sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
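If you need a token that lasts longer or can be redeemed more than once, skupper token create accepts options to adjust this. The flags below reflect recent Skupper releases and may differ in your version, so check skupper token create --help:
skupper token create ~/reusable-token.yaml --expiry 1h --uses 2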
Deploy the iperf3 servers
After creating the application router network, deploy iperf3 in each namespace.
Console for private1:
kubectl apply -f deployment-iperf3-a.yaml
Console for public1:
kubectl apply -f deployment-iperf3-b.yaml
Console for public2:
kubectl apply -f deployment-iperf3-c.yaml
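Before moving on, you can wait for each deployment to become ready in its own namespace. The deployment names match those used by the expose commands in the next step:
kubectl rollout status deployment/iperf3-server-a   # console for private1
kubectl rollout status deployment/iperf3-server-b   # console for public1
kubectl rollout status deployment/iperf3-server-c   # console for public2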
Expose iperf3 from each namespace
We have established connectivity between the namespaces and deployed iperf3. Before we can test performance, we need access to the iperf3 servers from each namespace.
Console for private1:
skupper expose deployment/iperf3-server-a --port 5201
Console for public1:
skupper expose deployment/iperf3-server-b --port 5201
Console for public2:
skupper expose deployment/iperf3-server-c --port 5201
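Once exposed, each server should appear as a service in every linked namespace after a few moments. You can check in any of the consoles and expect to eventually see iperf3-server-a, iperf3-server-b, and iperf3-server-c listed:
kubectl get services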
Run benchmark tests across the clusters
After deploying the iperf3 servers into the private and public cloud clusters, the virtual application network enables communication between them even though they are running in separate clusters.
Console for private1:
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
Console for public1:
kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
Console for public2:
kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
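The commands above run iperf3 with its defaults (a 10-second TCP test). For longer or more detailed measurements, you can append standard iperf3 options after the server name, for example:
# 30-second test with 4 parallel streams, traffic flowing from the server back to the client
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b -t 30 -P 4 -R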