
Chapter 320. SQL Component


Available as of Camel version 1.4

The sql: component allows you to work with databases using JDBC queries. The difference between this component and the JDBC component is that in the case of SQL the query is a property of the endpoint, and it uses the message payload as parameters passed to the query.

This component uses spring-jdbc behind the scenes for the actual SQL handling.

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-sql</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>

The SQL component also supports a JDBC-based repository for the Idempotent Consumer EIP and a JDBC-based repository for the Aggregator EIP; see the sections further below.

320.1. URI format

Important

From Camel 2.11 onwards this component can create both consumer (e.g. from()) and producer endpoints (e.g. to()). In previous versions, it could only act as a producer.

Note

This component can be used as a Transactional Client.

The SQL component uses the following endpoint URI notation:

sql:select * from table where id=# order by name[?options]

From Camel 2.11 onwards you can use named parameters by using the :#name_of_the_parameter style as shown:

sql:select * from table where id=:#myId order by name[?options]

When using named parameters, Camel will look up the names in the following precedence:
1. From the message body, if it is a java.util.Map
2. From the message headers

If a named parameter cannot be resolved, an exception is thrown.

From Camel 2.14 onward you can use Simple expressions as parameters as shown:

sql:select * from table where id=:#${property.myId} order by name[?options]

Notice that the standard ? symbol that denotes the parameters to an SQL query is substituted with the # symbol, because the ? symbol is used to specify options for the endpoint. The placeholder symbol can be configured on a per-endpoint basis using the placeholder option.
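For example, a minimal sketch (endpoint, table, and DataSource names are illustrative) that switches the placeholder character from # to @ via the placeholder option:

from("direct:byId")
    .to("sql:select * from projects where id = @?placeholder=@&dataSource=myDS");

At runtime the @ character is replaced with ? before the statement is prepared, just as # is by default.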

From Camel 2.17 onwards you can externalize your SQL queries to files in the classpath or file system as shown:

sql:classpath:sql/myquery.sql[?options]

The myquery.sql file is on the classpath and is just plain text:

-- this is a comment
select *
from table
where
  id = :#${property.myId}
order by
  name

In the file you can use multiple lines and format the SQL as you wish. You can also use comments, such as the -- dash line.

You can append query options to the URI in the following format, ?option=value&option=value&…​

320.2. Options

The SQL component supports 3 options which are listed below.

Name | Description | Default | Type

dataSource (common)

Sets the DataSource to use to communicate with the database.

 

DataSource

usePlaceholder (advanced)

Sets whether to use placeholders and replace all placeholder characters with the ? sign in the SQL queries. This option is default true.

true

boolean

resolvePropertyPlaceholders (advanced)

Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders.

true

boolean

The SQL endpoint is configured using URI syntax:

sql:query

with the following path and query parameters:

320.2.1. Path Parameters (1 parameters):

Name | Description | Default | Type

query

Required Sets the SQL query to perform. You can externalize the query by using file: or classpath: as prefix and specify the location of the file.

 

String

320.2.2. Query Parameters (45 parameters):

Name | Description | Default | Type

allowNamedParameters (common)

Whether to allow using named parameters in the queries.

true

boolean

dataSource (common)

Sets the DataSource to use to communicate with the database.

 

DataSource

dataSourceRef (common)

Deprecated Sets the reference to a DataSource to lookup from the registry, to use for communicating with the database.

 

String

outputClass (common)

Specify the full package and class name to use as conversion when outputType=SelectOne.

 

String

outputHeader (common)

Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body; any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result, and the original message body is preserved.

 

String

outputType (common)

Make the output of the consumer or producer SelectList, as a List of Map, or SelectOne, as a single Java object, in the following way: a) If the query has only a single column, then that JDBC column object is returned. (Such as SELECT COUNT(*) FROM PROJECT, which will return a Long object.) b) If the query has more than one column, then it will return a Map of that result. c) If outputClass is set, then it will convert the query result into a Java bean object by calling all the setters that match the column names. It will assume your class has a default constructor to create an instance with. d) If the query resulted in more than one row, it throws a non-unique result exception.

SelectList

SqlOutputType

separator (common)

The separator to use when parameter values are taken from the message body (if the body is a String type), to be inserted at the placeholders. Notice that if you use named parameters, then a Map type is used instead. The default value is comma.

,

char

breakBatchOnConsumeFail (consumer)

Sets whether to break batch if onConsume failed.

false

boolean

bridgeErrorHandler (consumer)

Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; they will be logged at WARN or ERROR level and ignored.

false

boolean

expectedUpdateCount (consumer)

Sets an expected update count to validate when using onConsume.

-1

int

maxMessagesPerPoll (consumer)

Sets the maximum number of messages to poll

 

int

onConsume (consumer)

After processing each row then this query can be executed, if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameter.

 

String

onConsumeBatchComplete (consumer)

After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters.

 

String

onConsumeFailed (consumer)

After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter.

 

String

routeEmptyResultSet (consumer)

Sets whether an empty resultset should be allowed to be sent to the next hop. Defaults to false, so an empty resultset will be filtered out.

false

boolean

sendEmptyMessageWhenIdle (consumer)

If the polling consumer did not poll any data, you can enable this option to send an empty message (no body) instead.

false

boolean

transacted (consumer)

Enables or disables transactions. If enabled and processing an exchange fails, the consumer breaks out of processing any further exchanges to cause an eager rollback.

false

boolean

useIterator (consumer)

Sets how the resultset should be delivered to the route: either as a list or as individual objects. Defaults to true.

true

boolean

exceptionHandler (consumer)

To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions; they will be logged at WARN or ERROR level and ignored.

 

ExceptionHandler

exchangePattern (consumer)

Sets the exchange pattern when the consumer creates an exchange.

 

ExchangePattern

pollStrategy (consumer)

A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.

 

PollingConsumerPollStrategy

processingStrategy (consumer)

Allows plugging in a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch.

 

SqlProcessingStrategy

batch (producer)

Enables or disables batch mode

false

boolean

noop (producer)

If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing

false

boolean

useMessageBodyForSql (producer)

Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used.

false

boolean

alwaysPopulateStatement (producer)

If enabled, then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, even if there are no expected parameters to be prepared. When this is false, populateStatement is only invoked if there are 1 or more expected parameters to be set; for example, this avoids reading the message body/headers for SQL queries with no parameters.

false

boolean

parametersCount (producer)

If set greater than zero, Camel will use this count of parameters instead of querying the JDBC metadata API. This is useful if the JDBC vendor cannot return a correct parameter count; the user may then override it.

 

int

placeholder (advanced)

Specifies a character that will be replaced with ? in the SQL query. Notice that it is a simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change).

#

String

prepareStatementStrategy (advanced)

Allows plugging in a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control the preparation of the query and prepared statement.

 

SqlPrepareStatementStrategy

synchronous (advanced)

Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported).

false

boolean

templateOptions (advanced)

Configures the Spring JdbcTemplate with the key/values from the Map

 

Map

usePlaceholder (advanced)

Sets whether to use placeholders and replace all placeholder characters with the ? sign in the SQL queries. This option is default true.

true

boolean

backoffErrorThreshold (scheduler)

The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.

 

int

backoffIdleThreshold (scheduler)

The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.

 

int

backoffMultiplier (scheduler)

To let the scheduled polling consumer back off if there have been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use, then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.

 

int

delay (scheduler)

Milliseconds before the next poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour).

500

long

greedy (scheduler)

If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.

false

boolean

initialDelay (scheduler)

Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour).

1000

long

runLoggingLevel (scheduler)

The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.

TRACE

LoggingLevel

scheduledExecutorService (scheduler)

Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.

 

ScheduledExecutorService

scheduler (scheduler)

To use a cron scheduler from either camel-spring or camel-quartz2 component

none

ScheduledPollConsumerScheduler

schedulerProperties (scheduler)

To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers.

 

Map

startScheduler (scheduler)

Whether the scheduler should be auto started.

true

boolean

timeUnit (scheduler)

Time unit for initialDelay and delay options.

MILLISECONDS

TimeUnit

useFixedDelay (scheduler)

Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.

true

boolean

320.3. Treatment of the message body

The SQL component tries to convert the message body to an object of java.util.Iterator type and then uses this iterator to fill the query parameters (where each query parameter is represented by a # symbol (or configured placeholder) in the endpoint URI). If the message body is not an array or collection, the conversion results in an iterator that iterates over only one object, which is the body itself.

For example, if the message body is an instance of java.util.List, the first item in the list is substituted into the first occurrence of # in the SQL query, the second item in the list is substituted into the second occurrence of #, and so on.
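For example, a minimal sketch (endpoint, table, and DataSource names are illustrative) that fills two positional parameters from a List body:

from("direct:findProjects")
    .to("sql:select * from projects where license = # and id > #?dataSource=myDS");

// the first list element binds to the first #, the second to the second #
template.sendBody("direct:findProjects", Arrays.asList("ASF", 5));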

If batch is set to true, then the interpretation of the inbound message body changes slightly – instead of an iterator of parameters, the component expects an iterator that contains the parameter iterators; the size of the outer iterator determines the batch size.
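A minimal sketch of batch mode (names are illustrative): the outer list yields one inner parameter list per execution of the statement.

from("direct:insertProjects")
    .to("sql:insert into projects (project, license) values (#, #)?batch=true&dataSource=myDS");

// two executions of the insert, one per inner parameter list
List<List<String>> batch = Arrays.asList(
    Arrays.asList("Camel", "ASF"),
    Arrays.asList("AMQ", "ASF"));
template.sendBody("direct:insertProjects", batch);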

From Camel 2.16 onwards you can use the option useMessageBodyForSql, which allows using the message body as the SQL statement; the SQL parameters must then be provided in a header with the key SqlConstants.SQL_PARAMETERS. This allows the SQL component to work more dynamically, as the SQL query comes from the message body.
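A minimal sketch (names are illustrative), assuming the SqlConstants.SQL_PARAMETERS header key from camel-sql: the SQL comes from the message body and the positional parameters come from the header.

from("direct:dynamicSql")
    // the query in the URI is required but ignored when useMessageBodyForSql=true
    .to("sql:ignored?useMessageBodyForSql=true&dataSource=myDS");

// the body carries the SQL; the header carries the parameter values
template.sendBodyAndHeader("direct:dynamicSql",
    "select * from projects where license = # and id > #",
    SqlConstants.SQL_PARAMETERS, Arrays.asList("ASF", 5));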

320.4. Result of the query

For select operations, the result is an instance of List<Map<String, Object>> type, as returned by the JdbcTemplate.queryForList() method. For update operations, the result is the number of updated rows, returned as an Integer.

By default, the result is placed in the message body.  If the outputHeader parameter is set, the result is placed in the header.  This is an alternative to using a full message enrichment pattern to add headers; it provides a concise syntax for querying a sequence or some other small value into a header.  It is convenient to use outputHeader and outputType together:

from("jms:order.inbox")
    .to("sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne")
    .to("jms:order.booking");

320.5. Using StreamList

From Camel 2.18 onwards the producer supports outputType=StreamList, which uses an iterator to stream the output of the query. This allows processing the data in a streaming fashion, which, for example, can be used by the Splitter EIP to process each row one at a time, and to load data from the database as needed.

from("direct:withSplitModel")
        .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel")
        .to("log:stream")
        .split(body()).streaming()
            .to("log:row")
            .to("mock:result")
        .end();

 

320.6. Header values

When performing update operations, the SQL Component stores the update count in the following message headers:

Header | Description

CamelSqlUpdateCount

The number of rows updated for update operations, returned as an Integer object. This header is not provided when using outputType=StreamList.

CamelSqlRowCount

The number of rows returned for select operations, returned as an Integer object. This header is not provided when using outputType=StreamList.

CamelSqlQuery

Camel 2.8: Query to execute. This query takes precedence over the query specified in the endpoint URI. Note that query parameters in the header are represented by a ? instead of a # symbol

When performing insert operations, the SQL Component stores the rows with the generated keys and the number of these rows in the following message headers (Available as of Camel 2.12.4, 2.13.1):

Header | Description

CamelSqlGeneratedKeysRowCount

The number of rows in the header that contains generated keys.

CamelSqlGeneratedKeyRows

Rows that contain the generated keys (a list of maps of keys).

320.7. Generated keys

Available as of Camel 2.12.4, 2.13.1 and 2.14

If you insert data using SQL INSERT, the RDBMS may support auto generated keys. You can instruct the SQL producer to return the generated keys in headers.
To do this, set the header CamelSqlRetrieveGeneratedKeys=true. Then the generated keys will be provided as headers with the keys listed in the table above.

You can see more details in the camel-sql unit tests.
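A minimal sketch (endpoint and table names are illustrative) using the headers from the table above:

from("direct:insert")
    .to("sql:insert into projects (project, license) values (#, #)?dataSource=myDS");

Exchange out = template.request("direct:insert", exchange -> {
    exchange.getIn().setHeader("CamelSqlRetrieveGeneratedKeys", true);
    exchange.getIn().setBody(Arrays.asList("Camel", "ASF"));
});

// a list of maps, one map of generated key columns per inserted row
List<Map<String, Object>> keys = out.getOut().getHeader("CamelSqlGeneratedKeyRows", List.class);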

320.8. Configuration

You can now set a reference to a DataSource in the URI directly:

sql:select * from table where id=# order by name?dataSource=myDS

320.9. Sample

In the sample below we execute a query and retrieve the result as a List of rows, where each row is a Map<String, Object> and the key is the column name.

First, we set up a table to use for our sample. As this is based on a unit test, we do it in Java by executing the SQL script createAndPopulateDatabase.sql, which creates the table and populates it with sample data.

Then we configure our route and our sql component. Notice that we use a direct endpoint in front of the sql endpoint. This allows us to send an exchange to the direct endpoint with the URI direct:simple, which is much easier for the client to use than the long sql: URI. Note that the DataSource is looked up in the registry, so we can use standard Spring XML to configure our DataSource.

And then we fire the message into the direct endpoint that will route it to our sql component that queries the database.
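A minimal sketch of that route and client call (the projects table and a DataSource registered as myDS are illustrative):

from("direct:simple")
    .to("sql:select * from projects where license = # order by id?dataSource=myDS");

// the single # placeholder is filled from the message body
List<Map<String, Object>> rows =
    (List<Map<String, Object>>) template.requestBody("direct:simple", "ASF");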

We could configure the DataSource in Spring XML as follows:

 <jee:jndi-lookup id="myDS" jndi-name="jdbc/myDataSource"/>

320.9.1. Using named parameters

Available as of Camel 2.11

In the given route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :#lic and :#min.
Camel will then look up these parameters in the message body or message headers. Notice that in the example below we set two headers with constant values
for the named parameters:

   from("direct:projects")
     .setHeader("lic", constant("ASF"))
     .setHeader("min", constant(123))
     .to("sql:select * from projects where license = :#lic and id > :#min order by id")

Though if the message body is a java.util.Map then the named parameters will be taken from the body.

   from("direct:projects")
     .to("sql:select * from projects where license = :#lic and id > :#min order by id")

320.9.2. Using expression parameters

Available as of Camel 2.14

In the given route below, we want to get all the projects from the database. It uses the body of the exchange for defining the license and uses the value of a property as the second parameter.

from("direct:projects")
  .setBody(constant("ASF"))
  .setProperty("min", constant(123))
  .to("sql:select * from projects where license = :#${body} and id > :#${property.min} order by id")

320.9.3. Using IN queries with dynamic values

Available as of Camel 2.17

From Camel 2.17 onwards the SQL producer allows using SQL queries with IN statements where the IN values are dynamically computed, for example from the message body or a header.

To use IN you need to:

  • Prefix the parameter name with in:
  • Add ( ) around the parameter

An example explains this better. The following query is used:

-- this is a comment
select *
from projects
where project in (:#in:names)
order by id

In the following route:

from("direct:query")
    .to("sql:classpath:sql/selectProjectsIn.sql")
    .to("log:query")
    .to("mock:query");

Then the IN query can use a header with the key names to supply the dynamic values, such as:

// use an array
template.requestBodyAndHeader("direct:query", "Hi there!", "names", new String[]{"Camel", "AMQ"});


// use a list
List<String> names = new ArrayList<String>();
names.add("Camel");
names.add("AMQ");

template.requestBodyAndHeader("direct:query", "Hi there!", "names", names);


// use comma separated values in a string
template.requestBodyAndHeader("direct:query", "Hi there!", "names", "Camel,AMQ");

The query can also be specified in the endpoint instead of being externalized (note that externalizing makes maintaining the SQL queries easier):

from("direct:query")
    .to("sql:select * from projects where project in (:#in:names) order by id")
    .to("log:query")
    .to("mock:query");

 

320.10. Using the JDBC-based idempotent repository

Available as of Camel 2.7: In this section we will use the JDBC based idempotent repository.

Tip

Abstract class: From Camel 2.9 onwards there is an abstract class org.apache.camel.processor.idempotent.jdbc.AbstractJdbcMessageIdRepository that you can extend to build a custom JDBC idempotent repository.

First we have to create the database table which will be used by the idempotent repository. For Camel 2.7, we use the following schema:

CREATE TABLE CAMEL_MESSAGEPROCESSED (
  processorName VARCHAR(255),
  messageId VARCHAR(100)
)

 

In Camel 2.8, we added the createdAt column:

CREATE TABLE CAMEL_MESSAGEPROCESSED (
  processorName VARCHAR(255),
  messageId VARCHAR(100),
  createdAt TIMESTAMP
)

Warning

The SQL Server TIMESTAMP type is a fixed-length binary-string type. It does not map to any of the JDBC time types: DATE, TIME, or TIMESTAMP.
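A minimal sketch of using the repository in a route, assuming an existing dataSource and illustrative processor and endpoint names:

// the repository uses the CAMEL_MESSAGEPROCESSED table shown above
JdbcMessageIdRepository repo = new JdbcMessageIdRepository(dataSource, "myProcessorName");

from("direct:start")
    .idempotentConsumer(header("messageId"), repo)
    .to("mock:result");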

320.10.1. Customize the JdbcMessageIdRepository

Starting with Camel 2.9.1 you have a few options to tune the org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository for your needs:

Parameter | Default Value | Description

createTableIfNotExists

true

Defines whether or not Camel should try to create the table if it doesn’t exist.

tableExistsString

SELECT 1 FROM CAMEL_MESSAGEPROCESSED WHERE 1 = 0

This query is used to figure out whether the table already exists or not. It must throw an exception to indicate the table doesn’t exist.

createString

CREATE TABLE CAMEL_MESSAGEPROCESSED (processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP)

The statement which is used to create the table.

queryString

SELECT COUNT(*) FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?

The query which is used to figure out whether the message already exists in the repository (the result is not equal to '0'). It takes two parameters. The first one is the processor name (String) and the second one is the message id (String).

insertString

INSERT INTO CAMEL_MESSAGEPROCESSED (processorName, messageId, createdAt) VALUES (?, ?, ?)

The statement which is used to add the entry into the table. It takes three parameters. The first one is the processor name (String), the second one is the message id (String) and the third one is the timestamp (java.sql.Timestamp) when this entry was added to the repository.

deleteString

DELETE FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?

The statement which is used to delete the entry from the database. It takes two parameters. The first one is the processor name (String) and the second one is the message id (String).

320.11. Using the JDBC-based aggregation repository

Available as of Camel 2.6

Note

Using JdbcAggregationRepository in Camel 2.6

In Camel 2.6, the JdbcAggregationRepository is provided in the camel-jdbc-aggregator component. From Camel 2.7 onwards, the JdbcAggregationRepository is provided in the camel-sql component.

JdbcAggregationRepository is an AggregationRepository which persists the aggregated messages on the fly. This ensures that you will not lose messages, as the default aggregator uses an in-memory-only AggregationRepository. The JdbcAggregationRepository, together with Camel, provides persistence support for the Aggregator.

Only when an Exchange has been successfully processed is it marked as complete, which happens when the confirm method is invoked on the AggregationRepository. This means that if the same Exchange fails again, it will be retried until it succeeds.

You can use the maximumRedeliveries option to limit the maximum number of redelivery attempts for a given recovered Exchange. You must also set the deadLetterUri option so Camel knows where to send the Exchange when maximumRedeliveries is hit.
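A minimal sketch in Java, assuming existing transactionManager and dataSource beans and illustrative endpoint names (the Spring XML style is shown in the sections below):

JdbcAggregationRepository repo =
    new JdbcAggregationRepository(transactionManager, "aggregation", dataSource);
// recovery settings: retry a recovered exchange at most 3 times,
// then move it to the dead letter endpoint
repo.setMaximumRedeliveries(3);
repo.setDeadLetterUri("mock:dead");

from("direct:start")
    .aggregate(header("id"), new UseLatestAggregationStrategy())
        .completionSize(3)
        .aggregationRepository(repo)
        .to("mock:aggregated");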

You can see some examples in the unit tests of camel-sql.

320.11.1. Database

To be operational, each aggregator uses two tables: the aggregation table and the completed table. By convention, the completed table has the same name as the aggregation table, suffixed with "_COMPLETED". The name must be configured in the Spring bean with the repositoryName property. In the following example, aggregation will be used.

The table structure definitions of both tables are identical: in both cases a String value is used as the key (id), whereas a Blob contains the exchange serialized as a byte array.
However, one difference should be remembered: the id field does not hold the same content in the two tables.
In the aggregation table, id holds the correlation id used by the component to aggregate the messages. In the completed table, id holds the id of the exchange stored in the corresponding blob field.

Here is the SQL query used to create the tables, just replace "aggregation" with your aggregator repository name.

CREATE TABLE aggregation (
  id varchar(255) NOT NULL,
  exchange blob NOT NULL,
  constraint aggregation_pk PRIMARY KEY (id)
);

CREATE TABLE aggregation_completed (
  id varchar(255) NOT NULL,
  exchange blob NOT NULL,
  constraint aggregation_completed_pk PRIMARY KEY (id)
);

320.11.2. Storing body and headers as text

Available as of Camel 2.11

You can configure the JdbcAggregationRepository to store the message body and selected headers as String values in separate columns. For example, to store the body and the two headers companyName and accountName, use the following SQL:

CREATE TABLE aggregationRepo3 (
  id varchar(255) NOT NULL,
  exchange blob NOT NULL,
  body varchar(1000),
  companyName varchar(1000),
  accountName varchar(1000),
  constraint aggregationRepo3_pk PRIMARY KEY (id)
);

CREATE TABLE aggregationRepo3_completed (
  id varchar(255) NOT NULL,
  exchange blob NOT NULL,
  body varchar(1000),
  companyName varchar(1000),
  accountName varchar(1000),
  constraint aggregationRepo3_completed_pk PRIMARY KEY (id)
);

 

And then configure the repository to enable this behavior as shown below:

<bean id="repo3"
class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
<property name="repositoryName" value="aggregationRepo3"/> <property
name="transactionManager" ref="txManager3"/> <property name="dataSource"
ref="dataSource3"/> <!-- configure to store the message body and
following headers as text in the repo --> <property
name="storeBodyAsText" value="true"/> <property
name="headersToStoreAsText"> <list> <value>companyName</value>
<value>accountName</value> </list> </property> </bean>

320.11.3. Codec (Serialization)

Because they can contain any type of payload, Exchanges are not serializable by design. Instead, an Exchange is converted into a byte array to be stored in a database BLOB field. All those conversions are handled by the JdbcCodec class. One detail of the code requires your attention: the ClassLoadingAwareObjectInputStream.

The ClassLoadingAwareObjectInputStream has been reused from the Apache ActiveMQ project. It wraps an ObjectInputStream and uses it with the context ClassLoader rather than the current thread's one. The benefit is being able to load classes exposed by other bundles. This allows the exchange body and headers to contain object references to custom types.

320.11.4. Transaction

A Spring PlatformTransactionManager is required to orchestrate transactions.
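For example, a plain JDBC transaction manager is often sufficient (a sketch, assuming an existing dataSource):

// Spring's DataSourceTransactionManager wired to the same DataSource as the repository
PlatformTransactionManager txManager = new DataSourceTransactionManager(dataSource);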

320.11.4.1. Service (Start/Stop)

The start method verifies the database connection and the presence of the required tables. If anything is wrong, it will fail during startup.

320.11.5. Aggregator configuration

Depending on the targeted environment, the aggregator might need some configuration. As you already know, each aggregator should have its own repository (with the corresponding pair of tables created in the database) and a data source. If the default lobHandler is not suited to your database system, it can be injected with the lobHandler property.

Here is the declaration for Oracle:

<bean id="lobHandler"
class="org.springframework.jdbc.support.lob.OracleLobHandler"> <property
name="nativeJdbcExtractor" ref="nativeJdbcExtractor"/> </bean> <bean
id="nativeJdbcExtractor"
class="org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor"/>
<bean id="repo"
class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
<property name="transactionManager" ref="transactionManager"/> <property
name="repositoryName" value="aggregation"/> <property name="dataSource"
ref="dataSource"/> <!-- Only with Oracle, else use default --> <property
name="lobHandler" ref="lobHandler"/> </bean>

320.11.6. Optimistic locking

From Camel 2.12 onwards you can turn on optimisticLocking and use this JDBC based aggregation repository in a clustered environment where multiple Camel applications share the same database for the aggregation repository. If there is a race condition, the JDBC driver will throw a vendor-specific exception, which the JdbcAggregationRepository can react upon. To know which exceptions thrown by the JDBC driver should be regarded as an optimistic locking error, a mapper is needed. Therefore there is an org.apache.camel.processor.aggregate.jdbc.JdbcOptimisticLockingExceptionMapper that allows you to implement your custom logic if needed. There is a default implementation, org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper, which works as follows:

The following checks are done:

  • If the caused exception is an SQLException, then the SQLState is checked to see whether it starts with 23.
  • If the caused exception is a DataIntegrityViolationException.
  • If the caused exception's class name has "ConstraintViolation" in its name.
  • Optional checking for FQN class name matches, if any class names have been configured.

You can in addition add FQN class names, and if any of the caused exceptions (or any nested exception) equals any of the FQN class names, then it is an optimistic locking error.

Here is an example, where we define 2 extra FQN class names from the JDBC vendor:

<bean id="repo"
class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
<property name="transactionManager" ref="transactionManager"/> <property
name="repositoryName" value="aggregation"/> <property name="dataSource"
ref="dataSource"/> <property name"jdbcOptimisticLockingExceptionMapper"
ref="myExceptionMapper"/> </bean> <!-- use the default mapper with extra
FQN class names from our JDBC driver --> <bean id="myExceptionMapper"
class="org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper">
<property name="classNames"> <util:set>
<value>com.foo.sql.MyViolationExceptoion</value>
<value>com.foo.sql.MyOtherViolationExceptoion</value> </util:set>
</property> </bean>

320.12. Camel SQL Starter

A starter module is available for Spring Boot users. When using the starter, the DataSource can be configured directly using Spring Boot properties.

# Example for a mysql datasource
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=dbuser
spring.datasource.password=dbpass
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

To use this feature, add the following dependencies to your Spring Boot pom.xml file:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-sql-starter</artifactId>
    <version>${camel.version}</version> <!-- use the same version as your Camel core version -->
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
    <version>${spring-boot-version}</version>
</dependency>

You should also include the specific database driver, if needed.

320.13. See Also

  • Configuring Camel
  • Component
  • Endpoint
  • Getting Started
  • SQL Stored Procedure
  • JDBC