Chapter 40. MongoDB Sink
Send documents to MongoDB.
This Kamelet expects a JSON document as the body.
Properties you can set as headers:
- db-upsert / ce-dbupsert: whether the database should create the element if it does not exist. Boolean value. (See the example after this list.)
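For example, when events reach the Kamelet as CloudEvents over HTTP (such as through the Knative channel binding shown later in this chapter), you can pass the property as a ce-dbupsert attribute. The following curl call is a minimal sketch: the channel URL, event metadata, and document fields are placeholders, not values defined by this guide.
# Sketch: post a JSON document to a Knative channel in binary CloudEvents mode,
# enabling upsert through the ce-dbupsert header. All values are placeholders.
curl -X POST http://mychannel-kn-channel.my-namespace.svc.cluster.local \
  -H "Content-Type: application/json" \
  -H "ce-specversion: 1.0" \
  -H "ce-id: 1234" \
  -H "ce-type: example.document" \
  -H "ce-source: example/producer" \
  -H "ce-dbupsert: true" \
  -d '{"_id": "42", "message": "example document"}'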
40.1. Configuration Options
The following table summarizes the configuration options available for the mongodb-sink
Kamelet:
Property | Name | Description | Type | Default | Example
---|---|---|---|---|---
collection * | MongoDB Collection | Sets the name of the MongoDB collection to bind to this endpoint. | string | |
database * | MongoDB Database | Sets the name of the MongoDB database to target. | string | |
hosts * | MongoDB Hosts | Comma-separated list of MongoDB host addresses in host:port format. | string | |
createCollection | Collection | Create the collection during initialization if it does not exist. | boolean | |
password | MongoDB Password | User password for accessing MongoDB. | string | |
username | MongoDB Username | Username for accessing MongoDB. | string | |
writeConcern | Write Concern | Configures the level of acknowledgment requested from MongoDB for write operations. Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, and MAJORITY. | string | |
Fields marked with an asterisk (*) are mandatory.
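As a sketch of how the optional properties fit alongside the mandatory ones, the sink section of a KameletBinding could look like the following fragment. All of the values, including the writeConcern choice, are illustrative placeholders rather than recommendations.
# Illustrative KameletBinding sink fragment; every value is a placeholder.
sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1alpha1
    name: mongodb-sink
  properties:
    collection: "my-collection"
    database: "my-database"
    hosts: "mongodb-0.example.com:27017,mongodb-1.example.com:27017"
    createCollection: true
    username: "my-user"
    password: "my-password"
    writeConcern: "MAJORITY"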
40.2. Dependencies
At runtime, the mongodb-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:mongodb
- camel:jackson
40.3. Usage
This section describes how you can use the mongodb-sink.
40.3.1. Knative Sink
You can use the mongodb-sink Kamelet as a Knative sink by binding it to a Knative object.
mongodb-sink-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mongodb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mongodb-sink
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
40.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
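The binding also assumes that a Knative channel named mychannel exists in the current namespace. If you still need to create it, a minimal manifest is sketched below; it only sets the name used by the binding and otherwise relies on your cluster's default channel implementation.
# Minimal sketch of the Knative channel referenced by the binding.
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: mychannel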
40.3.1.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
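Optionally, you can confirm that the binding was created and that Camel K has generated an integration for it. The commands below are a sketch; the reported status and readiness depend on your cluster.
# Inspect the binding and the integration generated from it.
oc get kameletbinding mongodb-sink-binding
oc get integration mongodb-sink-binding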
40.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
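You can pass optional Kamelet properties in the same way. For example, a sketch that also sets credentials and a write concern (all values are placeholders) could look like this:
kamel bind channel:mychannel mongodb-sink \
  -p "sink.collection=my-collection" \
  -p "sink.database=my-database" \
  -p "sink.hosts=mongodb-0.example.com:27017" \
  -p "sink.username=my-user" \
  -p "sink.password=my-password" \
  -p "sink.writeConcern=MAJORITY"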
40.3.2. Kafka Sink
You can use the mongodb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mongodb-sink-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mongodb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mongodb-sink
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
40.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
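If the topic does not exist yet, a minimal KafkaTopic manifest is sketched below. The Kafka cluster label (my-cluster) and the partition and replica counts are assumptions; adapt them to your AMQ Streams installation before applying the manifest.
# Minimal sketch of a KafkaTopic for the binding above; adjust to your cluster.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1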
40.3.2.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
40.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
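Because the kamel bind command derives the KameletBinding name from the source and sink, you can list the bindings and integrations to find the generated name and confirm that it is running. These verification commands are a sketch; the names and status columns will differ on your cluster.
oc get kameletbindings
oc get integrations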