Chapter 43. PostgreSQL Sink
Send data to a PostgreSQL Database.
This Kamelet expects a JSON-formatted body. The mapping between the JSON fields and the query parameters is done by key, so for the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
the Kamelet needs to receive input such as:
'{ "username":"oscerd", "city":"Rome"}'
43.1. Configuration Options
The following table summarizes the configuration options available for the postgresql-sink Kamelet:
Property | Name | Description | Type | Default | Example |
---|---|---|---|---|---|
databaseName * | Database Name | The name of the database to point to | string | | |
password * | Password | The password to use for accessing a secured PostgreSQL Database | string | | |
query * | Query | The query to execute against the PostgreSQL Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
serverName * | Server Name | The server name for the data source | string | | "localhost" |
username * | Username | The username to use for accessing a secured PostgreSQL Database | string | | |
serverPort | Server Port | The server port for the data source | string | "5432" | |
Fields marked with an asterisk (*) are mandatory.
43.2. Dependencies
At runtime, the postgresql-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.postgresql:postgresql
- mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001
43.3. Usage
This section describes how you can use the postgresql-sink Kamelet.
43.3.1. Knative Sink
You can use the postgresql-sink Kamelet as a Knative sink by binding it to a Knative object.
postgresql-sink-binding.yaml
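A minimal postgresql-sink-binding.yaml for a Knative source might look like the following sketch, which assumes a Knative channel named mychannel (as in the kamel bind example later in this section); replace the placeholder property values with your own configuration:

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: postgresql-sink-binding
spec:
  # The source is the Knative channel whose events are written to PostgreSQL
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # The sink is the postgresql-sink Kamelet with its required properties
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: postgresql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"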
43.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
43.3.1.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
43.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
43.3.2. Kafka Sink
You can use the postgresql-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
postgresql-sink-binding.yaml
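A minimal postgresql-sink-binding.yaml for a Kafka source might look like the following sketch, which assumes a KafkaTopic named my-topic in the current namespace (as in the kamel bind example later in this section); replace the placeholder property values with your own configuration:

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: postgresql-sink-binding
spec:
  # The source is the Kafka topic whose records are written to PostgreSQL
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  # The sink is the postgresql-sink Kamelet with its required properties
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: postgresql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"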
43.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
43.3.2.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
43.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.