Chapter 229. MongoDB GridFS Component


Available as of Camel version 2.18

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-mongodb-gridfs</artifactId>
    <version>x.y.z</version>
    <!-- use the same version as your Camel core version -->
</dependency>

229.1. URI format

mongodb-gridfs:connectionBean?database=databaseName&bucket=bucketName[&moreOptions...]
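
For example, an endpoint that targets a bucket named fs in a database named tickets through a connection bean named mongoBean (the names here are illustrative) would be:

mongodb-gridfs:mongoBean?database=tickets&bucket=fs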

229.2. MongoDB GridFS options

The MongoDB GridFS component has no options.

The MongoDB GridFS endpoint is configured using URI syntax:

mongodb-gridfs:connectionBean

with the following path and query parameters:

229.2.1. Path Parameters (1 parameter):

connectionBean
    Required. Name of the com.mongodb.Mongo bean to use. Type: String.

229.2.2. Query Parameters (17 parameters):

bucket (common)
    Sets the name of the GridFS bucket within the database. Default: fs. Type: String.

database (common)
    Required. Sets the name of the MongoDB database to target. Type: String.

readPreference (common)
    Sets a MongoDB ReadPreference on the Mongo connection. Read preferences set directly on the connection are overridden by this setting. The com.mongodb.ReadPreference#valueOf(String) utility method is used to resolve the passed readPreference value. Example values are nearest, primary, and secondary. Type: ReadPreference.

writeConcern (common)
    Sets the WriteConcern for write operations on MongoDB using one of the standard values. Resolved from the fields of the WriteConcern class by calling the WriteConcern#valueOf(String) method. Type: WriteConcern.

writeConcernRef (common)
    Sets the WriteConcern for write operations on MongoDB by passing in the bean ref of a custom WriteConcern that exists in the Registry. You can also use the standard WriteConcerns by passing in their key. See also the writeConcern option. Type: WriteConcern.

bridgeErrorHandler (consumer)
    Allows bridging the consumer to the Camel routing Error Handler, which means any exception that occurs while the consumer is trying to pick up incoming messages (or the like) is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these are logged at WARN or ERROR level and ignored. Default: false. Type: boolean.

delay (consumer)
    Sets the delay between polls within the consumer. Default: 500 (ms). Type: long.

fileAttributeName (consumer)
    If the query strategy uses a FileAttribute, this sets the name of the attribute that is used. Default: camel-processed. Type: String.

initialDelay (consumer)
    Sets the initial delay before the consumer starts polling. Default: 1000 (ms). Type: long.

persistentTSCollection (consumer)
    If the query strategy uses a persistent timestamp, this sets the name of the collection within the database in which to store the timestamp. Default: camel-timestamps. Type: String.

persistentTSObject (consumer)
    If the query strategy uses a persistent timestamp, this is the ID of the object in the collection in which to store the timestamp. Default: camel-timestamp. Type: String.

query (consumer)
    Additional query parameters (in JSON) used to configure the query for finding files in the GridFsConsumer. Type: String.

queryStrategy (consumer)
    Sets the QueryStrategy that is used for polling for new files. Default: TimeStamp. Type: QueryStrategy.

exceptionHandler (consumer)
    To let the consumer use a custom ExceptionHandler. Note that if the bridgeErrorHandler option is enabled then this option is not in use. By default the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. Type: ExceptionHandler.

exchangePattern (consumer)
    Sets the exchange pattern when the consumer creates an exchange. Type: ExchangePattern.

operation (producer)
    Sets the operation this endpoint will execute against GridFS. Type: String.

synchronous (advanced)
    Sets whether synchronous processing should be strictly used, or whether Camel is allowed to use asynchronous processing (if supported). Default: false. Type: boolean.
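
For example, a consumer endpoint that polls every second and marks processed files with a file attribute might look like this (the bean and database names are illustrative):

mongodb-gridfs:mongoBean?database=tickets&queryStrategy=FileAttribute&fileAttributeName=camel-processed&delay=1000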

229.3. Spring Boot Auto-Configuration

The component supports 2 options, which are listed below.

camel.component.mongodb-gridfs.enabled
    Enable the mongodb-gridfs component. Default: true. Type: Boolean.

camel.component.mongodb-gridfs.resolve-property-placeholders
    Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. Default: true. Type: Boolean.
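
These options can be set in the application.properties of a Spring Boot application, for example:

camel.component.mongodb-gridfs.enabled=true
camel.component.mongodb-gridfs.resolve-property-placeholders=true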

229.4. Configuration of database in Spring XML

The following Spring XML creates a bean defining the connection to a MongoDB instance.

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="mongoBean" class="com.mongodb.Mongo">
        <constructor-arg name="host" value="${mongodb.host}" />
        <constructor-arg name="port" value="${mongodb.port}" />
    </bean>
</beans>
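
If you are not using Spring, an equivalent connection bean can be bound programmatically. The following is a minimal sketch using Camel's SimpleRegistry and the MongoDB Java driver's MongoClient (a subclass of com.mongodb.Mongo); the host and port values are assumptions:

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import com.mongodb.MongoClient;

// Bind a Mongo connection under the name "mongoBean" so that
// endpoint URIs such as mongodb-gridfs:mongoBean can reference it.
SimpleRegistry registry = new SimpleRegistry();
registry.put("mongoBean", new MongoClient("localhost", 27017));
CamelContext context = new DefaultCamelContext(registry);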

229.5. Sample route

The following route, defined in Spring XML, retrieves a file from GridFS using the findOne operation.

Get a file from GridFS

<route>
  <from uri="direct:start" />
  <!-- using bean 'mongoBean' defined above -->
  <to uri="mongodb-gridfs:mongoBean?database=${mongodb.database}&amp;operation=findOne" />
  <to uri="direct:result" />
</route>
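
The equivalent route in the Java DSL might look like this (a sketch, assuming a concrete database name such as tickets):

from("direct:start")
    .to("mongodb-gridfs:mongoBean?database=tickets&operation=findOne")
    .to("direct:result");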


229.6. GridFS operations - producer endpoint

229.6.1. count

Returns the total number of files in the collection, returning an Integer as the OUT message body.

// from("direct:count").to("mongodb-gridfs:mongoBean?database=tickets&operation=count");
Integer result = template.requestBody("direct:count", "irrelevantBody", Integer.class);
assertTrue("Result is not of type Integer", result instanceof Integer);

You can provide a filename header to count only the files matching that filename.

Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
Integer count = template.requestBodyAndHeaders("direct:count", "irrelevantBody", headers, Integer.class);

229.6.2. listAll

Returns a Reader that lists all the filenames and their IDs in a tab-separated stream.

// from("direct:listAll").to("mongodb-gridfs?database=tickets&operation=listAll");
Reader result = template.requestBodyAndHeader("direct:listAll", "irrelevantBody");

filename1.txt   1252314321
filename2.txt   2897651254
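
Since each line holds a filename and its ID separated by a tab, the Reader can be consumed line by line. A minimal sketch, using the result Reader from the snippet above and java.io.BufferedReader:

BufferedReader reader = new BufferedReader(result);
String line;
while ((line = reader.readLine()) != null) {
    // Each line is "filename<TAB>id".
    String[] parts = line.split("\t");
    System.out.println("file: " + parts[0] + ", id: " + parts[1]);
}
reader.close();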


229.6.3. findOne

Finds a file in the GridFS system and sets the body to an InputStream of its content. It also provides the file metadata as headers. It uses Exchange.FILE_NAME from the incoming headers to determine which file to find.

// from("direct:findOne").to("mongodb-gridfs?database=tickets&operation=findOne");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
InputStream result = template.requestBodyAndHeaders("direct:findOne", "irrelevantBody", headers);


229.6.4. create

Creates a new file in the GridFS database. It uses Exchange.FILE_NAME from the incoming headers for the name, and the body contents (as an InputStream) as the content.

// from("direct:create").to("mongodb-gridfs?database=tickets&operation=create");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
InputStream stream = ... the data for the file ...
template.requestBodyAndHeaders("direct:create", stream, headers);

229.6.5. remove

Removes a file from the GridFS database.

// from("direct:remove").to("mongodb-gridfs?database=tickets&operation=remove");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put(Exchange.FILE_NAME, "filename.txt");
template.requestBodyAndHeaders("direct:remove", "", headers);

229.7. GridFS Consumer
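
The consumer polls the configured bucket for new files, tracking what it has already processed according to the queryStrategy and related options described above. A minimal sketch of a consumer route that writes each file it finds to disk (the database name and target directory are illustrative, and mongoBean is the connection bean defined earlier):

// Poll the GridFS bucket for new files and write each one out to disk.
from("mongodb-gridfs:mongoBean?database=tickets&queryStrategy=PersistentTimestamp")
    .to("file:/tmp/gridfs-out");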
