Chapter 12. AWS S3 Source
Receive data from AWS S3.
12.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | eu-west-1 |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateBucket | Autocreate Bucket | Specifies whether to automatically create the S3 bucket specified by bucketNameOrArn | boolean | | |
| deleteAfterRead | Auto-delete Objects | Delete objects after consuming them | boolean | | |
Fields marked with an asterisk (*) are mandatory.
12.2. Dependencies
At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-s3
12.3. Usage
This section describes how you can use the aws-s3-source.
12.3.1. Knative Source
You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object.
aws-s3-source-binding.yaml
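The contents of the binding file are not reproduced on this page. The following is a minimal sketch of what aws-s3-source-binding.yaml could look like when the sink is a Knative channel; the property values and the channel name mychannel are placeholders that you replace with your own configuration.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-source-binding
spec:
  source:
    # The aws-s3-source Kamelet, configured with its mandatory properties.
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # A Knative channel named mychannel in the current namespace.
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel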
12.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
12.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-s3-source-binding.yaml
12.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
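To confirm that the binding exists, you can list the KameletBinding resources in the current namespace. The name that kamel bind assigns to the generated binding can vary, so listing all bindings is the simplest check:
oc get kameletbindings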
12.3.2. Kafka Source
You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-s3-source-binding.yaml
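As above, the binding file itself is not reproduced here. The following is a minimal sketch of what aws-s3-source-binding.yaml could look like when the sink is a Kafka topic; the property values and the topic name my-topic are placeholders that you replace with your own configuration.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-source-binding
spec:
  source:
    # The aws-s3-source Kamelet, configured with its mandatory properties.
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # A Strimzi KafkaTopic named my-topic in the current namespace.
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic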
12.3.2.1. Prerequisites
Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and that you have created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
12.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-s3-source-binding.yaml
12.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.