Chapter 7. Accessing Kafka using Skupper
Use public cloud resources to process data from a private Kafka cluster
This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
Overview
This example is a simple Kafka application that shows how you can use Skupper to access a Kafka cluster at a remote site without exposing it to the public internet.
It contains two services:
- A Kafka cluster named "cluster1" running in a private data center. The cluster has a topic named "topic1".
- A Kafka client running in the public cloud. It sends 10 messages to "topic1" and then receives them back.
To set up the Kafka cluster, this example uses the Kubernetes operator from the Strimzi project. The Kafka client is a Java application built using Quarkus.
The example uses two Kubernetes namespaces, "private" and "public", to represent the private data center and public cloud.
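With Strimzi, both the cluster and the topic are declared as Kubernetes resources. As a rough sketch only, a topic such as "topic1" can be declared with a KafkaTopic resource like the one below; the actual definition used by this example lives in server/cluster1.yaml, and the partition and replica counts shown here are illustrative assumptions:
# Sketch of a Strimzi KafkaTopic resource (illustrative; see server/cluster1.yaml for the real one).
# The apiVersion may differ depending on your Strimzi release.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic1
  labels:
    strimzi.io/cluster: cluster1
spec:
  partitions: 1
  replicas: 1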
Prerequisites
- The kubectl command-line tool, version 1.15 or later (installation guide)
- Access to at least one Kubernetes cluster, from any provider you choose
Procedure
- Clone the repo for this example.
- Install the Skupper command-line tool
- Set up your namespaces
- Deploy the Kafka cluster
- Create your sites
- Link your sites
- Expose the Kafka cluster
- Run the client
- Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository.
Install the Skupper command-line tool
This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment.
See Installation for details about installing the CLI. For systems with the required package repositories configured, use the following command:
sudo dnf install skupper-cli
Set up your namespaces
Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.
Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.
A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.
For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context.
Note: The login procedure varies by provider. See the documentation for your provider.
Public:
export KUBECONFIG=~/.kube/config-public
# Enter your provider-specific login command
kubectl create namespace public
kubectl config set-context --current --namespace public
Private:
export KUBECONFIG=~/.kube/config-private
# Enter your provider-specific login command
kubectl create namespace private
kubectl config set-context --current --namespace private
Deploy the Kafka cluster
In Private, use the kubectl create and kubectl apply commands with the listed YAML files to install the operator and deploy the cluster and topic.
Private:
kubectl create -f server/strimzi.yaml
kubectl apply -f server/cluster1.yaml
kubectl wait --for condition=ready --timeout 900s kafka/cluster1
NOTE:
By default, the Kafka bootstrap server returns broker addresses that include the Kubernetes namespace in their domain name. When, as in this example, the Kafka client is running in a namespace with a different name from that of the Kafka cluster, this prevents the client from resolving the Kafka brokers.
To make the Kafka brokers reachable, set the advertisedHost property of each broker to a domain name that the Kafka client can resolve at the remote site. In this example, this is achieved with a listener configuration of the kind sketched below.
See Advertised addresses for brokers in the Strimzi documentation for more information.
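The exact configuration is in server/cluster1.yaml in the example repository. As a rough sketch only, a Strimzi listener configuration of this kind looks like the following; the listener name, broker id, and advertisedHost value are assumptions for illustration, chosen to match the cluster1-kafka-brokers service name used later in this example:
# Sketch of a Strimzi Kafka resource excerpt (see server/cluster1.yaml for the actual file).
# The advertisedHost value is an assumption: pod 0 of the cluster, addressed through the
# cluster1-kafka-brokers headless service that Skupper exposes across the sites.
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        configuration:
          brokers:
            - broker: 0
              advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers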
Create your sites
A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace.
For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome.
Public:
skupper init
skupper status
Private:
skupper init
skupper status
As you move through the steps below, you can use skupper status at any time to check your progress.
Link your sites
A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests.
Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create.
The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, the skupper link create command uses the token to create a link to the site that generated it.
Note: The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.
First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites.
Public:
skupper token create ~/secret.token
Sample output:
$ skupper token create ~/secret.token
Token written to ~/secret.token
Private:
skupper link create ~/secret.token
Sample output:
$ skupper link create ~/secret.token
Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1)
Check the status of the link using 'skupper link status'.
If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
Expose the Kafka cluster
In Private, use skupper expose with the --headless option to expose the Kafka cluster as a headless service on the Skupper network.
Then, in Public, use the kubectl get service command to check that the cluster1-kafka-brokers service appears after a moment.
Private:
skupper expose statefulset/cluster1-kafka --headless --port 9092
Sample output:
$ skupper expose statefulset/cluster1-kafka --headless --port 9092
statefulset cluster1-kafka exposed as cluster1-kafka-brokers
Public:
kubectl get service/cluster1-kafka-brokers
Sample output:
$ kubectl get service/cluster1-kafka-brokers
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cluster1-kafka-brokers   ClusterIP   None         <none>        9092/TCP   2s
Run the client
Use the kubectl run command to execute the client program in Public.
Public:
kubectl run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAP_SERVERS=cluster1-kafka-brokers:9092
To see the client code, look in the client directory of this project.