Chapter 11. AWS S3 Sink
Upload data to AWS S3.
The Kamelet expects the following headers to be set:
- file / ce-file: the name of the file to upload
If the header is not set, the exchange ID is used as the file name.
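If the upstream system does not already provide this header, you can set it with an intermediate step in a binding. The following fragment of a KameletBinding spec is only a sketch: it assumes the insert-header-action Kamelet is available in your Kamelet catalog, and the file name my-upload.txt is purely illustrative.
spec:
  # ... source and sink definitions ...
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: file                 # header read by aws-s3-sink
        value: "my-upload.txt"     # illustrative file name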
11.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-sink Kamelet:
Property | Name | Description | Type | Default | Example |
---|---|---|---|---|---|
accessKey * | Access Key | The access key obtained from AWS. | string | | |
bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | | |
region * | AWS Region | The AWS region to connect to. | string | | |
secretKey * | Secret Key | The secret key obtained from AWS. | string | | |
autoCreateBucket | Autocreate Bucket | Specifies whether to autocreate the S3 bucket bucketName. | boolean | | |
Fields marked with an asterisk (*) are mandatory.
11.2. Dependencies
At runtime, the aws-s3-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
11.3. Usage
This section describes how you can use the aws-s3-sink.
11.3.1. Knative Sink
You can use the aws-s3-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-sink-binding.yaml
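A minimal example of what this binding file can look like is shown below. The channel name mychannel and the placeholder property values are illustrative; replace them with values for your environment.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:                # the Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:                  # the aws-s3-sink Kamelet and its configuration
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"         # replace with your AWS access key
      bucketNameOrArn: "The Bucket Name"  # replace with your S3 bucket name or ARN
      region: "eu-west-1"                 # replace with your AWS region
      secretKey: "The Secret Key"         # replace with your AWS secret key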
11.3.1.1. Prerequisite
Make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to.
11.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
11.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
11.3.2. Kafka Sink
You can use the aws-s3-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-sink-binding.yaml
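A minimal example of what this binding file can look like is shown below. The topic name my-topic and the placeholder property values are illustrative; replace them with values for your environment.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:                # the Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:                  # the aws-s3-sink Kamelet and its configuration
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"         # replace with your AWS access key
      bucketNameOrArn: "The Bucket Name"  # replace with your S3 bucket name or ARN
      region: "eu-west-1"                 # replace with your AWS region
      secretKey: "The Secret Key"         # replace with your AWS secret key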
11.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to.
11.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
11.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.