Chapter 1. AWS DynamoDB Sink
Send data to the AWS DynamoDB service. The sent data inserts, updates, or deletes an item in the given AWS DynamoDB table.
Access Key/Secret Key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional because the Kamelet also provides the 'useDefaultCredentialsProvider' option.
When you use a default credentials provider, the AWS DynamoDB client loads the credentials through that provider and does not use static credentials. This is why the access key and secret key are not mandatory parameters for this Kamelet.
This Kamelet expects a JSON object as the body. The mapping between the JSON fields and the table attribute values is done by key, so if you have the following input:
{"username":"oscerd", "city":"Rome"}
the Kamelet inserts or updates an item in the given AWS DynamoDB table and sets the attributes 'username' and 'city' respectively. Note that the JSON object must include the primary key values that define the item.
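For example, with the operation option set to DeleteItem, and assuming that 'username' is the table's partition key (a hypothetical schema used only for illustration), the body only needs to carry the key attributes:
{"username":"oscerd"}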
1.1. Configuration Options
The following table summarizes the configuration options available for the aws-ddb-sink Kamelet:
Property | Name | Description | Type | Default | Example
---|---|---|---|---|---
region * | AWS Region | The AWS region to connect to | string | |
table * | Table | The name of the DynamoDB table to use | string | |
accessKey | Access Key | The access key obtained from AWS | string | |
operation | Operation | The operation to perform (one of PutItem, UpdateItem, DeleteItem) | string | |
overrideEndpoint | Endpoint Overwrite | Set whether to override the endpoint URI. This option must be used in combination with the uriEndpointOverride option. | boolean | |
secretKey | Secret Key | The secret key obtained from AWS | string | |
uriEndpointOverride | Overwrite Endpoint URI | The overriding endpoint URI. This option must be used in combination with the overrideEndpoint option. | string | |
useDefaultCredentialsProvider | Default Credentials Provider | Set whether the DynamoDB client loads credentials through a default credentials provider or expects static credentials to be passed in. | boolean | |
writeCapacity | Write Capacity | The provisioned throughput to reserve for writing resources to your table | integer | |
Fields marked with an asterisk (*) are mandatory.
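Note that the two endpoint options travel together: overrideEndpoint enables the override and uriEndpointOverride supplies the URI. A minimal sketch of the corresponding sink properties in a binding, where the local endpoint URL http://localhost:8000 is a hypothetical example:
properties:
  region: "eu-west-1"
  table: "The Table"
  overrideEndpoint: true
  uriEndpointOverride: "http://localhost:8000" # hypothetical local endpoint for testing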
1.2. Dependencies
At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0
- camel:core
- camel:jackson
- camel:aws2-ddb
- camel:kamelet
1.3. Usage
This section describes how you can use the aws-ddb-sink Kamelet.
1.3.1. Knative Sink
You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-ddb-sink-binding.yaml
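A minimal sketch of this binding, assuming a Knative channel named mychannel as the source (the same channel that the Kamel CLI example below uses):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    # Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"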
1.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
1.3.1.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-ddb-sink-binding.yaml
1.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
1.3.2. Kafka Sink
You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-ddb-sink-binding.yaml
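A minimal sketch of this binding, using the same Strimzi KafkaTopic reference as the Kamel CLI example below:
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    # Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"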
1.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
1.3.2.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-ddb-sink-binding.yaml
1.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
1.4. Kamelet source file
https://github.com/openshift-integration/kamelet-catalog/aws-ddb-sink.kamelet.yaml