Chapter 4. Skupper Camel Integration Example
Twitter, Telegram and PostgreSQL integration routes deployed across Kubernetes clusters using Skupper
This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
Overview
This example shows how to integrate Camel integration routes deployed across multiple Kubernetes clusters using Skupper.
The main idea of this project is to show a Camel integration, deployed in a public cluster, that searches for tweets containing the word 'skupper'. Those results are sent to a private cluster that hosts a database. A third, public cluster polls the database and sends new results to a Telegram channel.
To run this example you will need to create a Telegram channel and a Twitter account, and use their credentials.
It contains the following components:
- A Twitter Camel integration that searches the Twitter feed for results containing the word 'skupper' (public).
- A PostgreSQL Camel sink that receives the data from the Twitter Camel route and sends it to the database (public).
- A PostgreSQL database that contains the results (private).
- A Telegram Camel integration that polls the database and sends the results to a Telegram channel (public).
Prerequisites
- The kubectl command-line tool, version 1.15 or later
- The skupper command-line tool, the latest version
- Access to at least one Kubernetes cluster, from any provider you choose
- The kamel command-line tool, to deploy the Camel integrations in each namespace:
kamel install
- A Twitter developer account, in order to use the Twitter API (you need to add the credentials to the config.properties file)
- A Telegram bot and channel to publish messages (you need to add the credentials to the config.properties file)
Procedure
- Configure separate console sessions
- Access your clusters
- Set up your namespaces
- Install Skupper in your namespaces
- Check the status of your namespaces
- Link your namespaces
- Deploy and expose the database in the private cluster
- Create the table to store the tweets
- Deploy Twitter Camel Integration in the public cluster
- Deploy Telegram Camel integration in the public cluster
Configure separate console sessions
Skupper is designed for use with multiple namespaces, typically on different clusters. The skupper command uses your kubeconfig and current context to select the namespace where it operates.
Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.
A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.
Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session.
Console for private1:
export KUBECONFIG=~/.kube/config-private1
Console for public1:
export KUBECONFIG=~/.kube/config-public1
Console for public2:
export KUBECONFIG=~/.kube/config-public2
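The three exports above can be sketched as a loop; this is only a local illustration (no cluster access needed) of how distinct KUBECONFIG paths keep the sessions independent:

```shell
# Each console session points KUBECONFIG at its own file, so changing
# the current context in one session never affects the others.
for ns in private1 public1 public2; do
  kubeconfig="$HOME/.kube/config-$ns"
  echo "session for $ns uses $kubeconfig"
done
```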
Access your clusters
The methods for accessing your clusters vary by Kubernetes provider. Find the instructions for your chosen providers and use them to authenticate and configure access for each console session.
Set up your namespaces
Use kubectl create namespace to create the namespaces you wish to use (or use existing namespaces). Use kubectl config set-context to set the current namespace for each session.
Console for private1:
kubectl create namespace private1
kubectl config set-context --current --namespace private1
Console for public1:
kubectl create namespace public1
kubectl config set-context --current --namespace public1
Console for public2:
kubectl create namespace public2
kubectl config set-context --current --namespace public2
Install Skupper in your namespaces
The skupper init command installs the Skupper router and service controller in the current namespace. Run the skupper init command in each namespace.
Console for private1:
skupper init
Console for public1:
skupper init
Console for public2:
skupper init
Check the status of your namespaces
Use skupper status in each console to check that Skupper is installed.
Console for private1:
skupper status
Console for public1:
skupper status
Console for public2:
skupper status
You should see output like this for each namespace:
Skupper is enabled for namespace "<namespace>" in interior mode. It is not connected to any other sites. It has no exposed services.
The site console url is: http://<address>:8080
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
As you move through the steps below, you can use skupper status at any time to check your progress.
Link your namespaces
Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create.
The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote namespace, the skupper link create command uses the token to create a link to the namespace that generated it.
Note: The link token is truly a secret. Anyone who has the token can link to your namespace. Make sure that only those you trust have access to it.
First, use skupper token create in one namespace to generate the token. Then, use skupper link create in the other to create a link.
Console for public1:
skupper token create ~/public1.token --uses 2
Console for public2:
skupper link create ~/public1.token
skupper link status --wait 30
skupper token create ~/public2.token
Console for private1:
skupper link create ~/public1.token
skupper link create ~/public2.token
skupper link status --wait 30
If your console sessions are on different machines, you may need to use scp or a similar tool to transfer the token.
Deploy and expose the database in the private cluster
Use kubectl to deploy the database in private1. Then expose the deployment.
Console for private1:
kubectl create -f src/main/resources/database/postgres-svc.yaml
skupper expose deployment postgres --address postgres --port 5432 -n private1
Create the table to store the tweets
Console for private1:
kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg --env="PGUSER=postgresadmin" --env="PGPASSWORD=admin123" --env="PGHOST=$(kubectl get service postgres -o=jsonpath='{.spec.clusterIP}')" -- bash
psql --dbname=postgresdb
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE tw_feedback (id uuid DEFAULT uuid_generate_v4(), sigthning VARCHAR(255), created TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY(id));
Deploy Twitter Camel Integration in the public cluster
First, we need to deploy the TwitterRoute component in Kubernetes by using kamel. This component polls Twitter every 5000 ms for tweets that include the word skupper. It then sends the results to the postgresql-sink, which must be installed in the same cluster. The Kamelet sink inserts the results into the PostgreSQL database.
Console for public1:
src/main/resources/scripts/setUpPublic1Cluster.sh
Deploy Telegram Camel integration in the public cluster
In this step we install a secret in Kubernetes that contains the database credentials, so that it can be used by the TelegramRoute component. After that we deploy TelegramRoute using kamel in the Kubernetes cluster. This component polls the database every 3 seconds and gathers the results inserted during the last 3 seconds.
Console for public2:
src/main/resources/scripts/setUpPublic2Cluster.sh
Test the application
To see the whole flow at work, post a tweet containing the word skupper. After that, you will see a new message in the Telegram channel with the title New feedback about Skupper. You can also query the database directly.
Console for private1:
kubectl attach pg-shell -c pg-shell -i -t
psql --dbname=postgresdb
SELECT * FROM tw_feedback;
Sample output:
                  id                  |    sigthning    |          created
--------------------------------------+-----------------+----------------------------
 95655229-747a-4787-8133-923ef0a1b2ca | Testing skupper | 2022-03-10 19:35:08.412542
Console for public1:
kamel logs twitter-route
Sample output:
"[1] 2022-03-10 19:35:08,397 INFO [postgresql-sink-1] (Camel (camel-1) thread #0 - twitter-search://skupper) Testing skupper"