Chapter 1. AWS DynamoDB Sink


Send data to the AWS DynamoDB service. The sent data inserts, updates, or deletes an item in the given AWS DynamoDB table.

The access key and secret key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional because the Kamelet also provides the 'useDefaultCredentialsProvider' option.

When the default credentials provider is enabled, the AWS DynamoDB client loads the credentials through that provider instead of using static credentials. This is why the access key and secret key are not mandatory parameters for this Kamelet.
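
For example, a binding's sink can enable the provider instead of supplying static keys. The following is a minimal sketch, where "my-table" is a placeholder table name:

sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1alpha1
    name: aws-ddb-sink
  properties:
    region: "eu-west-1"
    table: "my-table"
    useDefaultCredentialsProvider: true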

This Kamelet expects a JSON-formatted body. The mapping between the JSON fields and the table attribute values is done by key, so given input such as the following:

{"username":"oscerd", "city":"Rome"}

The Kamelet inserts or updates an item in the given AWS DynamoDB table, setting the 'username' and 'city' attributes accordingly. Note that the JSON object must include the primary key values that define the item.
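
For a DeleteItem operation, only the key attributes are required. Assuming 'username' is the table's partition key (a hypothetical schema), the body could be as simple as:

{"username":"oscerd"}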

1.1. Configuration Options

The following table summarizes the configuration options available for the aws-ddb-sink Kamelet:

| Property | Name | Description | Type | Default | Example |
| --- | --- | --- | --- | --- | --- |
| region * | AWS Region | The AWS region to connect to. | string | | "eu-west-1" |
| table * | Table | The name of the DynamoDB table. | string | | |
| accessKey | Access Key | The access key obtained from AWS. | string | | |
| operation | Operation | The operation to perform: PutItem, UpdateItem, or DeleteItem. | string | "PutItem" | "PutItem" |
| overrideEndpoint | Endpoint Overwrite | Set whether to override the endpoint URI. Use this option in combination with the uriEndpointOverride option. | boolean | false | |
| secretKey | Secret Key | The secret key obtained from AWS. | string | | |
| uriEndpointOverride | Overwrite Endpoint URI | The overriding endpoint URI. Use this option in combination with the overrideEndpoint option. | string | | |
| useDefaultCredentialsProvider | Default Credentials Provider | Set whether the DynamoDB client loads credentials through a default credentials provider or expects static credentials to be passed in. | boolean | false | |
| writeCapacity | Write Capacity | The provisioned throughput reserved for writing resources to your table. | integer | 1 | |
Note

Fields marked with an asterisk (*) are mandatory.
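
For example, to point the client at a non-default endpoint such as a local DynamoDB instance, the two endpoint options are used together. In this sketch, the URL and "my-table" are placeholders:

properties:
  region: "eu-west-1"
  table: "my-table"
  overrideEndpoint: true
  uriEndpointOverride: "http://localhost:8000"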

1.2. Dependencies

At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies:

  • mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0
  • camel:core
  • camel:jackson
  • camel:aws2-ddb
  • camel:kamelet

1.3. Usage

This section describes how you can use the aws-ddb-sink Kamelet.

1.3.1. Knative Sink

You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object.

aws-ddb-sink-binding.yaml

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"

1.3.1.1. Prerequisite

Make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to.

1.3.1.2. Procedure for using the cluster CLI

  1. Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
  2. Run the sink by using the following command:

    oc apply -f aws-ddb-sink-binding.yaml

1.3.1.3. Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"

This command creates the KameletBinding in the current namespace on the cluster.
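
For example, to update existing items instead of inserting new ones, you can also pass the operation property by using the same -p flag pattern:

kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table" -p "sink.operation=UpdateItem"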

1.3.2. Kafka Sink

You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.

aws-ddb-sink-binding.yaml

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"

1.3.2.1. Prerequisites

Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to.

1.3.2.2. Procedure for using the cluster CLI

  1. Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
  2. Run the sink by using the following command:

    oc apply -f aws-ddb-sink-binding.yaml

1.3.2.3. Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"

This command creates the KameletBinding in the current namespace on the cluster.

1.4. Kamelet source file

https://github.com/openshift-integration/kamelet-catalog/tree/kamelet-catalog-1.8//aws-ddb-sink.kamelet.yaml
