Chapter 4. AWS S3 Sink
Upload data to AWS S3.
The Kamelet expects the following headers to be set:

- file / ce-file: the file name to upload

If the header is not set, the exchange ID is used as the file name.
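For example, when calling the Kamelet directly from a Camel K integration (rather than through the bindings shown later in this chapter), you can set this header before sending the exchange to the sink. The following snippet is a minimal sketch in Camel YAML DSL shorthand; the timer source, file name expression, bucket name, and credential values are illustrative placeholders only:

```yaml
# upload-with-file-name.yaml - illustrative sketch, not part of the product example
- from:
    uri: "timer:tick?period=60000"
    steps:
      - setBody:
          constant: "hello from Camel K"
      # Name the uploaded S3 object explicitly; without this header the exchange ID is used.
      - setHeader:
          name: "file"
          simple: "greeting-${exchangeId}.txt"
      - to: "kamelet:aws-s3-sink?bucketNameOrArn=my-bucket&region=eu-west-1&accessKey=myAccessKey&secretKey=mySecretKey"
```

An integration file like this could be run with kamel run upload-with-file-name.yaml.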
4.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-sink Kamelet:
Property | Name | Description | Type | Default | Example |
---|---|---|---|---|---|
accessKey * | Access Key | The access key obtained from AWS. | string | | |
bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | | |
region * | AWS Region | The AWS region to connect to. | string | | "eu-west-1" |
secretKey * | Secret Key | The secret key obtained from AWS. | string | | |
autoCreateBucket | Autocreate Bucket | Specifies whether to automatically create the S3 bucket bucketName. | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
4.2. Dependencies
At runtime, the aws-s3-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
4.3. Usage
This section describes how you can use the aws-s3-sink Kamelet.
4.3.1. Knative Sink
You can use the aws-s3-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-sink-binding.yaml
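The contents of the binding file are not reproduced in this extract. A minimal sketch of what aws-s3-sink-binding.yaml typically contains for the Knative case follows, assuming a Knative Channel named mychannel and the same placeholder property values as the kamel bind command shown later in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    # Knative Channel that acts as the event source for the binding
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    # The aws-s3-sink Kamelet, configured with the mandatory properties
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```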
4.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
4.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
4.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
4.3.2. Kafka Sink
You can use the aws-s3-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-sink-binding.yaml
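As in the Knative case, the binding file itself is not reproduced here. A minimal sketch of aws-s3-sink-binding.yaml for the Kafka case follows, assuming a Strimzi KafkaTopic named my-topic and the same placeholder property values as the kamel bind command shown later in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    # Kafka topic (managed by AMQ Streams/Strimzi) that feeds the binding
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    # The aws-s3-sink Kamelet, configured with the mandatory properties
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```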
4.3.2.1. Prerequisites
Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
4.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
4.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.