Chapter 229. MongoDB GridFS Component


Available as of Camel version 2.18

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-mongodb-gridfs</artifactId>
    <version>x.y.z</version>
    <!-- use the same version as your Camel core version -->
</dependency>

229.1. URI format

mongodb-gridfs:connectionBean?database=databaseName&bucket=bucketName[&moreOptions...]
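The endpoint URI is simply the component scheme, a connection-bean name, and query parameters. As a minimal sketch, the URI can be composed programmatically; the bean name `mongoBean` and the `tickets` database are hypothetical example values, not component defaults.

```java
public class GridFsUriExample {

    // Builds a mongodb-gridfs endpoint URI from its parts,
    // following the format shown above.
    static String gridFsUri(String connectionBean, String database, String bucket) {
        return String.format("mongodb-gridfs:%s?database=%s&bucket=%s",
                connectionBean, database, bucket);
    }

    public static void main(String[] args) {
        // prints: mongodb-gridfs:mongoBean?database=tickets&bucket=fs
        System.out.println(gridFsUri("mongoBean", "tickets", "fs"));
    }
}
```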

229.2. MongoDB GridFS options

The MongoDB GridFS component has no options.

The MongoDB GridFS endpoint is configured using URI syntax:

mongodb-gridfs:connectionBean

with the following path and query parameters:

229.2.1. Path Parameters (1 parameter):

connectionBean
    Required. Name of the com.mongodb.Mongo to use. Type: String.

229.2.2. Query Parameters (17 parameters):

bucket (common)
    Sets the name of the GridFS bucket within the database. Default: fs. Type: String.

database (common)
    Required. Sets the name of the MongoDB database to target. Type: String.

readPreference (common)
    Sets a MongoDB ReadPreference on the Mongo connection. Read preferences set directly on the connection are overridden by this setting. The com.mongodb.ReadPreference#valueOf(String) utility method is used to resolve the passed readPreference value. Example values: nearest, primary, secondary. Type: ReadPreference.

writeConcern (common)
    Sets the WriteConcern for write operations on MongoDB using the standard ones, resolved from the fields of the WriteConcern class by calling WriteConcern#valueOf(String). Type: WriteConcern.

writeConcernRef (common)
    Sets the WriteConcern for write operations on MongoDB by passing the bean ref of a custom WriteConcern that exists in the Registry. You can also use standard WriteConcerns by passing in their key. See the setWriteConcern(String) method. Type: WriteConcern.

bridgeErrorHandler (consumer)
    Allows bridging the consumer to the Camel routing Error Handler, which means any exception that occurs while the consumer is trying to pick up incoming messages (or the like) is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; they are logged at WARN or ERROR level and ignored. Default: false. Type: boolean.

delay (consumer)
    Sets the delay between polls within the consumer. Default: 500 (ms). Type: long.

fileAttributeName (consumer)
    If the QueryType uses a FileAttribute, this sets the name of the attribute that is used. Default: camel-processed. Type: String.

initialDelay (consumer)
    Sets the initial delay before the consumer starts polling. Default: 1000 (ms). Type: long.

persistentTSCollection (consumer)
    If the QueryType uses a persistent timestamp, this sets the name of the collection within the DB in which to store the timestamp. Default: camel-timestamps. Type: String.

persistentTSObject (consumer)
    If the QueryType uses a persistent timestamp, this is the ID of the object in the collection in which to store the timestamp. Default: camel-timestamp. Type: String.

query (consumer)
    Additional query parameters (in JSON) used to configure the query for finding files in the GridFsConsumer. Type: String.

queryStrategy (consumer)
    Sets the QueryStrategy used for polling for new files. Default: TimeStamp. Type: QueryStrategy.

exceptionHandler (consumer)
    To let the consumer use a custom ExceptionHandler. Note that if the option bridgeErrorHandler is enabled, this option is not in use. By default the consumer deals with exceptions itself; they are logged at WARN or ERROR level and ignored. Type: ExceptionHandler.

exchangePattern (consumer)
    Sets the exchange pattern when the consumer creates an exchange. Type: ExchangePattern.

operation (producer)
    Sets the operation this endpoint will execute against GridFS. Type: String.

synchronous (advanced)
    Sets whether synchronous processing should be strictly used, or whether Camel is allowed to use asynchronous processing (if supported). Default: false. Type: boolean.

229.3. Configuration of database in Spring XML

The following Spring XML creates a bean defining the connection to a MongoDB instance.

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="mongoBean" class="com.mongodb.Mongo">
        <constructor-arg name="host" value="${mongodb.host}" />
        <constructor-arg name="port" value="${mongodb.port}" />
    </bean>
</beans>

229.4. Sample route

The following route, defined in Spring XML, executes the findOne operation against GridFS.

Get a file from GridFS

<route>
  <from uri="direct:start" />
  <!-- using bean 'mongoBean' defined above -->
  <to uri="mongodb-gridfs:mongoBean?database=${mongodb.database}&amp;operation=findOne" />
  <to uri="direct:result" />
</route>

 

229.5. GridFS operations - producer endpoint

229.5.1. count

Returns the total number of files in the collection as an Integer in the OUT message body.

// from("direct:count").to("mongodb-gridfs?database=tickets&operation=count");
Integer result = template.requestBody("direct:count", "irrelevantBody", Integer.class);
assertTrue("Result is not of type Integer", result instanceof Integer);

You can provide a filename header to count only the files matching that filename.

Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
Integer count = template.requestBodyAndHeaders("direct:count", "irrelevantBody", headers, Integer.class);

229.5.2. listAll

Returns a Reader that lists all the filenames and their IDs in a tab-separated stream.

// from("direct:listAll").to("mongodb-gridfs?database=tickets&operation=listAll");
Reader result = template.requestBody("direct:listAll", "irrelevantBody", Reader.class);

Sample output:

filename1.txt   1252314321
filename2.txt   2897651254
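The tab-separated listing can be consumed with plain java.io. The sketch below assumes the two-column filename<TAB>id format shown above; `ListAllParser` is a hypothetical helper, and a StringReader stands in for the Reader returned by the endpoint.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;

public class ListAllParser {

    // Parses "filename<TAB>id" lines produced by the listAll operation
    // into an insertion-ordered map of filename -> id.
    static Map<String, String> parse(Reader listing) throws IOException {
        Map<String, String> files = new LinkedHashMap<>();
        try (BufferedReader reader = new BufferedReader(listing)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.isEmpty()) {
                    continue;
                }
                String[] parts = line.split("\t", 2);
                files.put(parts[0], parts.length > 1 ? parts[1] : "");
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Reader sample = new StringReader(
                "filename1.txt\t1252314321\nfilename2.txt\t2897651254\n");
        Map<String, String> files = parse(sample);
        System.out.println(files.get("filename1.txt")); // prints 1252314321
    }
}
```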

 

229.5.3. findOne

Finds a file in the GridFS system and sets the body to an InputStream of the content. Also provides the metadata as headers. It uses Exchange.FILE_NAME from the incoming headers to determine the file to find.

// from("direct:findOne").to("mongodb-gridfs?database=tickets&operation=findOne");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
InputStream result = template.requestBodyAndHeaders("direct:findOne", "irrelevantBody", headers, InputStream.class);

 

229.5.4. create

Creates a new file in the GridFS database. It uses the Exchange.FILE_NAME from the incoming headers for the name, and the body contents (as an InputStream) as the content.

// from("direct:create").to("mongodb-gridfs?database=tickets&operation=create");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
InputStream stream = ... the data for the file ...
template.requestBodyAndHeaders("direct:create", stream, headers);

229.5.5. remove

Removes a file from the GridFS database.

// from("direct:remove").to("mongodb-gridfs?database=tickets&operation=remove");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
template.requestBodyAndHeaders("direct:remove", "", headers);

229.6. GridFS Consumer
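
The consumer polls the configured bucket for new files and delivers each file's content as the message body. The following Spring XML route is a minimal sketch, not the component's documented example: the `mongoBean` bean and `tickets` database are placeholders, and the query options shown are the consumer parameters documented in Section 229.2.2.

<route>
  <!-- Poll the GridFS bucket for new files using the default TimeStamp strategy -->
  <from uri="mongodb-gridfs:mongoBean?database=tickets&amp;queryStrategy=TimeStamp&amp;delay=500" />
  <!-- Write each polled file's content to a local directory -->
  <to uri="file:target/downloads" />
</route>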
