Red Hat Camel K is deprecated
Red Hat Camel K is deprecated and the End of Life date for this product is June 30, 2025. For help migrating to the current go-to solution, Red Hat build of Apache Camel, see the Migration Guide.

Chapter 1. AWS DynamoDB Sink
Send data to the AWS DynamoDB service. The sent data inserts, updates, or deletes an item in the specified AWS DynamoDB table.
Access Key/Secret Key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional because the Kamelet also provides the 'useDefaultCredentialsProvider' option.
When a default credentials provider is used, the AWS DynamoDB client loads the credentials through that provider and does not use the static credentials. This is why the access key and secret key are not mandatory parameters for this Kamelet.
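For example, here is a minimal sketch of the sink properties for each authentication style (the key values are placeholders and the table name is assumed for illustration):

# Option 1: static credentials (placeholder values)
properties:
  region: "eu-west-1"
  table: "my-table"
  accessKey: "my-access-key"
  secretKey: "my-secret-key"

# Option 2: default credentials provider (for example, an IAM role available to the pod)
properties:
  region: "eu-west-1"
  table: "my-table"
  useDefaultCredentialsProvider: true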
This Kamelet expects a JSON object as the message body. The mapping between the JSON fields and the table attribute values is done by key, so if you have the following input:
{"username":"oscerd", "city":"Rome"}
the Kamelet inserts or updates an item in the given AWS DynamoDB table and sets the attributes 'username' and 'city' respectively. Note that the JSON object must include the primary key values that define the item.
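As an illustration of how the body relates to the configured operation, here is a hedged sketch assuming a table whose primary key is the 'username' attribute:

# PutItem or UpdateItem: the body carries the key plus the attributes to write.
{"username": "oscerd", "city": "Rome"}

# DeleteItem: the body carries the key attributes that identify the item to remove.
{"username": "oscerd"}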
1.1. Configuration Options
The following table summarizes the configuration options available for the aws-ddb-sink Kamelet:
Property | Name | Description | Type | Default | Example |
---|---|---|---|---|---|
region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
table * | Table | The name of the DynamoDB table to use | string | | |
accessKey | Access Key | The access key obtained from AWS | string | | |
operation | Operation | The operation to perform (one of PutItem, UpdateItem, DeleteItem) | string | "PutItem" | |
overrideEndpoint | Endpoint Overwrite | Set the need for overriding the endpoint URI. This option must be used in combination with the uriEndpointOverride option. | boolean | false | |
secretKey | Secret Key | The secret key obtained from AWS | string | | |
uriEndpointOverride | Overwrite Endpoint URI | Set the overriding endpoint URI. This option must be used in combination with the overrideEndpoint option. | string | | |
useDefaultCredentialsProvider | Default Credentials Provider | Set whether the DynamoDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in | boolean | false | |
writeCapacity | Write Capacity | The provisioned throughput to reserve for writing resources to your table | integer | | |
Fields marked with an asterisk (*) are mandatory.
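As an example of the endpoint-override options, the following sketch points the client at a DynamoDB-compatible endpoint such as a local emulator (the URL is an assumption for illustration, not a value from this guide):

properties:
  region: "eu-west-1"
  table: "my-table"
  overrideEndpoint: true
  uriEndpointOverride: "http://localhost:8000"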
1.2. Dependencies
At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0
- camel:core
- camel:jackson
- camel:aws2-ddb
- camel:kamelet
1.3. Usage
This section describes how you can use the aws-ddb-sink Kamelet.
1.3.1. Knative Sink
You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-ddb-sink-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
1.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you are connected to.
1.3.1.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-ddb-sink-binding.yaml
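Optionally, to verify that the binding was created (an illustrative check, not part of the original procedure), you can run:

oc get kameletbinding aws-ddb-sink-binding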
1.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
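If you are not relying on the default credentials provider, you can pass static credentials as additional properties. For example, with placeholder key values:

kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table" -p "sink.accessKey=my-access-key" -p "sink.secretKey=my-secret-key"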
1.3.2. Kafka Sink
You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-ddb-sink-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
1.3.2.1. Prerequisites
Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you are connected to.
1.3.2.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-ddb-sink-binding.yaml
1.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
1.4. Kamelet source file
https://github.com/openshift-integration/kamelet-catalog/aws-ddb-sink.kamelet.yaml