Chapter 7. Accessing Kafka using Skupper
Use public cloud resources to process data from a private Kafka cluster
This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
Overview
This example is a simple Kafka application that shows how you can use Skupper to access a Kafka cluster at a remote site without exposing it to the public internet.
It contains two services:
- A Kafka cluster named "cluster1" running in a private data center. The cluster has a topic named "topic1".
- A Kafka client running in the public cloud. It sends 10 messages to "topic1" and then receives them back.
To set up the Kafka cluster, this example uses the Kubernetes operator from the Strimzi project. The Kafka client is a Java application built using Quarkus.
The example uses two Kubernetes namespaces, "private" and "public", to represent the private data center and public cloud.
Prerequisites
- The kubectl command-line tool, version 1.15 or later (installation guide)
- Access to at least one Kubernetes cluster, from any provider you choose
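Before you start, you can confirm that your kubectl installation meets the version requirement; the exact output format varies by version:
kubectl version --client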
Procedure
- Clone the repo for this example.
- Install the Skupper command-line tool
- Set up your namespaces
- Deploy the Kafka cluster
- Create your sites
- Link your sites
- Expose the Kafka cluster
- Run the client
- Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository.
Install the Skupper command-line tool
This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment.
See the Installation documentation for details about installing the CLI. For configured systems, use the following command:
sudo dnf install skupper-cli
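After installation, you can confirm that the skupper command is available on your PATH:
skupper version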
Set up your namespaces
Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.
Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.
A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.
For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context.
NOTE: The login procedure varies by provider. See the documentation for your provider.
Public:
export KUBECONFIG=~/.kube/config-public
# Enter your provider-specific login command
kubectl create namespace public
kubectl config set-context --current --namespace public
Private:
export KUBECONFIG=~/.kube/config-private
# Enter your provider-specific login command
kubectl create namespace private
kubectl config set-context --current --namespace private
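Before moving on, you can confirm that each terminal is pointing at the intended namespace. A quick check in each terminal (the NAMESPACE column should show public or private, respectively):
kubectl config get-contexts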
Deploy the Kafka cluster
In Private, use the kubectl create and kubectl apply commands with the listed YAML files to install the operator and deploy the cluster and topic.
Private:
kubectl create -f server/strimzi.yaml
kubectl apply -f server/cluster1.yaml
kubectl wait --for condition=ready --timeout 900s kafka/cluster1
Sample output:
$ kubectl create -f server/strimzi.yaml
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created
serviceaccount/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created
configmap/strimzi-cluster-operator created
$ kubectl apply -f server/cluster1.yaml
kafka.kafka.strimzi.io/cluster1 created
kafkatopic.kafka.strimzi.io/topic1 created
$ kubectl wait --for condition=ready --timeout 900s kafka/cluster1
kafka.kafka.strimzi.io/cluster1 condition met
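The kubectl wait command can take several minutes to return while the cluster starts. If you want to follow the progress, you can watch the pods come up from a second terminal that uses the Private kubeconfig:
Private:
kubectl get pods --watch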
NOTE:
By default, the Kafka bootstrap server returns broker addresses that include the Kubernetes namespace in their domain name. When the Kafka client runs in a namespace whose name differs from that of the Kafka cluster, as it does in this example, the client cannot resolve the broker addresses.
To make the Kafka brokers reachable, set the advertisedHost property of each broker to a domain name that the Kafka client can resolve at the remote site. In this example, this is achieved with the following listener configuration:
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        configuration:
          brokers:
            - broker: 0
              advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers
See Advertised addresses for brokers for more information.
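If you want to confirm the listener configuration applied from server/cluster1.yaml, you can inspect the Kafka resource in Private; the advertisedHost settings appear under spec.kafka.listeners:
Private:
kubectl get kafka/cluster1 -o yaml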
Create your sites
A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace.
For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome.
Public:
skupper init
skupper status
Sample output:
$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace 'public'.  Use 'skupper status' to get more information.
$ skupper status
Skupper is enabled for namespace "public". It is not connected to any other sites. It has no exposed services.
Private:
skupper init
skupper status
Sample output:
$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace 'private'.  Use 'skupper status' to get more information.
$ skupper status
Skupper is enabled for namespace "private". It is not connected to any other sites. It has no exposed services.
As you move through the steps below, you can use skupper status at any time to check your progress.
Link your sites
A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests.
Creating a link requires the use of two skupper commands in conjunction: skupper token create and skupper link create.
The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, the skupper link create command uses the token to create a link to the site that generated it.
NOTE: The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.
First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites.
Public:
skupper token create ~/secret.token
Sample output:
$ skupper token create ~/secret.token
Token written to ~/secret.token
Private:
skupper link create ~/secret.token
Sample output:
$ skupper link create ~/secret.token
Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1)
Check the status of the link using 'skupper link status'.
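As the output above suggests, you can verify the link from the Private site before continuing:
Private:
skupper link status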
If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
Expose the Kafka cluster
In Private, use skupper expose with the --headless option to expose the Kafka cluster as a headless service on the Skupper network.
Then, in Public, use the kubectl get service command to check that the cluster1-kafka-brokers service appears after a moment.
Private:
skupper expose statefulset/cluster1-kafka --headless --port 9092
Sample output:
$ skupper expose statefulset/cluster1-kafka --headless --port 9092
statefulset cluster1-kafka exposed as cluster1-kafka-brokers
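Optionally, you can run skupper status in Private again; once the expose operation takes effect, it should report an exposed service instead of none:
Private:
skupper status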
Public:
kubectl get service/cluster1-kafka-brokers
Sample output:
$ kubectl get service/cluster1-kafka-brokers
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cluster1-kafka-brokers   ClusterIP   None         <none>        9092/TCP   2s
Run the client
Use the kubectl run command to execute the client program in Public.
Public:
kubectl run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092
Sample output:
$ kubectl run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092
[...]
Received message 1
Received message 2
Received message 3
Received message 4
Received message 5
Received message 6
Received message 7
Received message 8
Received message 9
Received message 10
Result: OK
[...]
To see the client code, look in the client directory of this project.